Science.gov

Sample records for active stereo vision

  1. Robust active stereo vision using Kullback-Leibler divergence.

    PubMed

    Wang, Yongchang; Liu, Kai; Hao, Qi; Wang, Xianwang; Lau, Daniel L; Hassebrook, Laurence G

    2012-03-01

    Active stereo vision is a method of 3D surface scanning involving the projecting and capturing of a series of light patterns where depth is derived from correspondences between the observed and projected patterns. In contrast, passive stereo vision reveals depth through correspondences between textured images from two or more cameras. By employing a projector, active stereo vision systems find correspondences between two or more cameras, without ambiguity, independent of object texture. In this paper, we present a hybrid 3D reconstruction framework that supplements projected pattern correspondence matching with texture information. The proposed scheme consists of using projected pattern data to derive initial correspondences across cameras and then using texture data to eliminate ambiguities. Pattern modulation data are then used to estimate error models from which Kullback-Leibler divergence refinement is applied to reduce misregistration errors. Using only a small number of patterns, the presented approach reduces measurement errors versus traditional structured light and phase matching methodologies while being insensitive to gamma distortion, projector flickering, and secondary reflections. Experimental results demonstrate these advantages in terms of enhanced 3D reconstruction performance in the presence of noise, deterministic distortions, and conditions of texture and depth contrast.
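The Kullback-Leibler refinement step described above can be illustrated with a minimal sketch (hypothetical data and helper names; in the paper the error models are estimated from pattern modulation data, which is not reproduced here): given an observed error histogram and several candidate error models, select the model with the smallest divergence.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy refinement: keep the candidate whose error model diverges
# least from the observed error histogram.
observed = [0.1, 0.7, 0.2]
models = {"a": [0.1, 0.8, 0.1], "b": [0.4, 0.2, 0.4]}
best = min(models, key=lambda k: kl_divergence(observed, models[k]))
```

Here model "a" wins because its histogram is closest to the observation in the KL sense.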

  2. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  3. Stereo vision and strabismus.

    PubMed

    Read, J C A

    2015-02-01

    Binocular stereopsis, or stereo vision, is the ability to derive information about how far away objects are, based solely on the relative positions of the object in the two eyes. It depends on both sensory and motor abilities. In this review, I briefly outline some of the neuronal mechanisms supporting stereo vision, and discuss how these are disrupted in strabismus. I explain, in some detail, current methods of assessing stereo vision and their pros and cons. Finally, I review the evidence supporting the clinical importance of such measurements.

  4. Stereo vision and strabismus

    PubMed Central

    Read, J C A

    2015-01-01

    Binocular stereopsis, or stereo vision, is the ability to derive information about how far away objects are, based solely on the relative positions of the object in the two eyes. It depends on both sensory and motor abilities. In this review, I briefly outline some of the neuronal mechanisms supporting stereo vision, and discuss how these are disrupted in strabismus. I explain, in some detail, current methods of assessing stereo vision and their pros and cons. Finally, I review the evidence supporting the clinical importance of such measurements. PMID:25475234

  5. Panoramic stereo sphere vision

    NASA Astrophysics Data System (ADS)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision can "see" in all directions of the observation space, scene depth information is lost in the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system, built from a specially combined fish-eye lens module, that produces 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model, and parameter calibration method in this paper. Video surveillance, autonomous robot navigation, virtual reality, driving assistance, multiple-maneuvering-target tracking, automatic environment mapping, and attitude estimation are among the applications that will benefit from PSSV.

  6. Three-dimensional data-acquiring system fusing active projection and stereo vision

    NASA Astrophysics Data System (ADS)

    Wu, Jianbo; Zhao, Hong; Tan, Yushan

    2001-09-01

Combining an active digitizing technique with passive stereo vision, a novel method is proposed to acquire 3D data from two 2D images. Based on the principle of stereo vision, and assisted by the projection of dense structured light, the system overcomes the problem of matching data points between two stereo images, which is the principal difficulty in stereo vision. An algorithm based on the wavelet transform is proposed to automatically determine the threshold for image segmentation and to extract the grid points. The system described here is mainly intended for digitizing 3D objects quickly. Compared with general digitizers, it performs the translation from 2D images to 3D data completely, and it avoids shortcomings such as slow image acquisition and data processing, dependence on mechanical motion, and the need to paint the object before digitizing. The system is suitable for non-contact, fast measurement and modeling of 3D objects with free-form surfaces, and can be employed widely in reverse engineering and CAD/CAM. Experiments prove the effectiveness of this use of shape from stereo vision (SFSV) in engineering.

  7. Neural architectures for stereo vision

    PubMed Central

    2016-01-01

    Stereoscopic vision delivers a sense of depth based on binocular information but additionally acts as a mechanism for achieving correspondence between patterns arriving at the left and right eyes. We analyse quantitatively the cortical architecture for stereoscopic vision in two areas of macaque visual cortex. For primary visual cortex V1, the result is consistent with a module that is isotropic in cortical space with a diameter of at least 3 mm in surface extent. This implies that the module for stereo is larger than the repeat distance between ocular dominance columns in V1. By contrast, in the extrastriate cortical area V5/MT, which has a specialized architecture for stereo depth, the module for representation of stereo is about 1 mm in surface extent, so the representation of stereo in V5/MT is more compressed than V1 in terms of neural wiring of the neocortex. The surface extent estimated for stereo in V5/MT is consistent with measurements of its specialized domains for binocular disparity. Within V1, we suggest that long-range horizontal, anatomical connections form functional modules that serve both binocular and monocular pattern recognition: this common function may explain the distortion and disruption of monocular pattern vision observed in amblyopia. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269604

  8. Neural architectures for stereo vision.

    PubMed

    Parker, Andrew J; Smith, Jackson E T; Krug, Kristine

    2016-06-19

Stereoscopic vision delivers a sense of depth based on binocular information but additionally acts as a mechanism for achieving correspondence between patterns arriving at the left and right eyes. We analyse quantitatively the cortical architecture for stereoscopic vision in two areas of macaque visual cortex. For primary visual cortex V1, the result is consistent with a module that is isotropic in cortical space with a diameter of at least 3 mm in surface extent. This implies that the module for stereo is larger than the repeat distance between ocular dominance columns in V1. By contrast, in the extrastriate cortical area V5/MT, which has a specialized architecture for stereo depth, the module for representation of stereo is about 1 mm in surface extent, so the representation of stereo in V5/MT is more compressed than V1 in terms of neural wiring of the neocortex. The surface extent estimated for stereo in V5/MT is consistent with measurements of its specialized domains for binocular disparity. Within V1, we suggest that long-range horizontal, anatomical connections form functional modules that serve both binocular and monocular pattern recognition: this common function may explain the distortion and disruption of monocular pattern vision observed in amblyopia. This article is part of the themed issue 'Vision in our three-dimensional world'.

  9. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  10. Stereo vision techniques for telescience

    NASA Astrophysics Data System (ADS)

    Hewett, S.

    1990-02-01

    The Botanic Experiment is one of the pilot experiments in the Telescience Test Bed program at the ESTEC research and technology center of the European Space Agency. The aim of the Telescience Test Bed is to develop the techniques required by an experimenter using a ground based work station for remote control, monitoring, and modification of an experiment operating on a space platform. The purpose of the Botanic Experiment is to examine the growth of seedlings under various illumination conditions with a video camera from a number of viewpoints throughout the duration of the experiment. This paper describes the Botanic Experiment and the points addressed in developing a stereo vision software package to extract quantitative information about the seedlings from the recorded video images.

  11. Stereo vision with distance and gradient recognition

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

Robot vision technology is needed for stable walking, object recognition, and movement to a target location. With sensors that use infrared rays or ultrasound, a robot can handle urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot far more capable artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed to proceed without failure. This study developed an algorithm for recognizing the distance and gradient of the environment through stereo matching.
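For a rectified stereo pair, the distance recognition described above reduces to triangulation: depth is focal length times baseline divided by disparity, and a gradient follows from depth samples a known ground distance apart. A minimal sketch (parameter values are illustrative, not from the paper):

```python
import math

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulated depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def surface_gradient(z_near, z_far, step_m):
    """Slope angle (radians) between two depth samples taken a known
    ground distance apart, e.g. along an inclined plane or a step."""
    return math.atan2(z_far - z_near, step_m)
```

With a 500 px focal length and a 10 cm baseline, a 100 px disparity places a point 0.5 m away; two such samples then give the incline angle.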

  12. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
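The bandpass image pyramids at the heart of this system can be sketched as follows. This toy version substitutes box-filter resampling for the Gaussian filtering typically used in Laplacian pyramids, so it illustrates the structure (each level stores the detail lost by downsampling) rather than the patented implementation:

```python
import numpy as np

def downsample(img):
    # 2x2 block average (a stand-in for a Gaussian REDUCE step)
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    # Nearest-neighbour EXPAND, cropped to the target shape
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Bandpass pyramid: each level is the residual between a scale
    and the upsampled next-coarser scale; the last level is lowpass."""
    pyr = []
    cur = img.astype(float)
    for _ in range(levels - 1):
        small = downsample(cur)
        pyr.append(cur - upsample(small, cur.shape))
        cur = small
    pyr.append(cur)
    return pyr
```

Because each level stores an exact residual, summing upsampled levels from coarse to fine reconstructs the original image, which is a handy sanity check.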

  13. Cooperative and asynchronous stereo vision for dynamic vision sensors

    NASA Astrophysics Data System (ADS)

    Piatkowska, E.; Belbachir, A. N.; Gelautz, M.

    2014-05-01

Dynamic vision sensors (DVSs) encode visual input as a stream of events generated upon relative light-intensity changes in the scene. These sensors offer simultaneously high temporal resolution (better than 10 µs) and wide dynamic range (>120 dB) with a sparse data representation, which is not possible with clocked vision sensors. In this paper, we focus on the task of stereo reconstruction. The spatiotemporal and asynchronous nature of the data provided by the sensor demands a different stereo reconstruction approach from the one applied to synchronous frame-based cameras. We propose to model event-driven stereo matching with a cooperative network (Marr and Poggio 1976 Science 194 283-7). The history of recent activity in the scene is stored in the network, which serves as the spatiotemporal context used in disparity calculation for each incoming event. The network constantly evolves in time as events are generated. In our work, not only is the spatiotemporal aspect of the data preserved, but the matching is also performed asynchronously. The experimental results show that the proposed approach is well adapted to DVS data and can be successfully used for disparity calculation.

  14. Three-dimensional displays and stereo vision.

    PubMed

    Westheimer, Gerald

    2011-08-07

    Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes.

  15. The contribution of stereo vision to the control of braking.

    PubMed

    Tijtgat, Pieter; Mazyn, Liesbeth; De Laey, Christophe; Lenoir, Matthieu

    2008-03-01

    In this study the contribution of stereo vision to the control of braking in front of a stationary target vehicle was investigated. Participants with normal (StereoN) and weak (StereoW) stereo vision drove a go-cart along a linear track towards a stationary vehicle. They could start braking from a distance of 4, 7, or 10m from the vehicle. Deceleration patterns were measured by means of a laser. A lack of stereo vision was associated with an earlier onset of braking, but the duration of the braking manoeuvre was similar. During the deceleration, the time of peak deceleration occurred earlier in drivers with weak stereo vision. Stopping distance was greater in those lacking in stereo vision. A lack of stereo vision was associated with a more prudent brake behaviour, in which the driver took into account a larger safety margin. This compensation might be caused either by an unconscious adaptation of the human perceptuo-motor system, or by a systematic underestimation of distance remaining due to the lack of stereo vision. In general, a lack of stereo vision did not seem to increase the risk of rear-end collisions.

  16. Forward Obstacle Detection System by Stereo Vision

    NASA Astrophysics Data System (ADS)

    Iwata, Hiroaki; Saneyoshi, Keiji

Forward obstacle detection is needed to prevent car accidents. We have developed a forward obstacle detection system with good detectability and distance accuracy using stereo vision alone. The system runs in real time on a stereo processing system based on a field-programmable gate array (FPGA). Road surfaces are detected so that the search space can be limited, and a smoothing filter is also applied; owing to these, the accuracy of distance is improved. In experiments, this system could detect forward obstacles 100 m away, and its distance error up to 80 m was less than 1.5 m. It could immediately detect cutting-in objects.

  17. Road Scene Analysis using Trinocular Stereo Vision

    NASA Astrophysics Data System (ADS)

    Matsushima, Kousuke; Matsuura, Hiroto; Kijima, Yoshitaka; Hu, Zhencheng; Uchimura, Keiichi

Road scene analysis in a 3D driving environment, which aims to detect objects against a continuously changing background, is vital for driver assistance systems and Adaptive Cruise Control (ACC) applications. Laser and millimeter-wave radars have shown good performance in measuring relative speed and distance in highway driving environments. However, the accuracy of these systems decreases in urban traffic, where more confusion arises from factors such as parked vehicles, guardrails, poles, and motorcycles. A stereovision-based sensing system provides an effective supplement to radar-based road scene analysis, with a much wider field of view and more accurate lateral information. This paper presents an efficient solution for road scene analysis using a trinocular stereo vision based algorithm, in which trinocular stereo vision detects all types of objects in the road scene and the "U-V-disparity" concept is employed to analyze the 3D geometric features of the scene. The proposed algorithm has been tested on real road scenes, and experimental results verified its efficiency.
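The "U-V-disparity" idea is to histogram the disparity map along image rows (V-disparity) or columns (U-disparity): a planar road then projects to a slanted line in the V-disparity map, while vertical obstacles project to near-vertical segments. A minimal V-disparity sketch (integer disparity map assumed; this is a generic illustration, not the authors' code):

```python
import numpy as np

def v_disparity(disp, d_max):
    """Per-row histogram of an integer disparity map.
    Returns an array of shape (rows, d_max + 1); entry (v, d) counts
    how many pixels in row v have disparity d."""
    rows, _ = disp.shape
    out = np.zeros((rows, d_max + 1), dtype=int)
    for v in range(rows):
        vals, counts = np.unique(disp[v], return_counts=True)
        out[v, vals] = counts
    return out
```

Line fitting (e.g. a Hough transform) on this map then separates the road profile from obstacle columns.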

  18. Connectionist model-based stereo vision for telerobotics

    NASA Technical Reports Server (NTRS)

    Hoff, William; Mathis, Donald

    1989-01-01

Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.

  19. Stereo vision calibration based on GMDH neural network.

    PubMed

    Chen, Bingwen; Wang, Wenwei; Qin, Qianqing

    2012-03-01

In order to improve the accuracy and stability of stereo vision calibration, a novel stereo vision calibration approach based on the group method of data handling (GMDH) neural network is presented. Three GMDH neural networks are used to build the spatial mapping relationship adaptively, one per dimension. In the process of modeling, the Levenberg-Marquardt optimization algorithm is introduced as an interior criterion to train each partial model, and the corrected Akaike information criterion is introduced as an exterior criterion to evaluate these models. Experiments demonstrate that the proposed approach is stable, calibrates three-dimensional (3D) locations more accurately, and learns the stereo mapping models adaptively. It is a convenient way to calibrate stereo vision without specialized knowledge.

  20. Stereo Vision: The Haves and Have-Nots.

    PubMed

    Hess, Robert F; To, Long; Zhou, Jiawei; Wang, Guangyu; Cooperstock, Jeremy R

    2015-06-01

Animals with front-facing eyes benefit from a substantial overlap in the visual fields of the two eyes, and devote specialized brain processes to using the horizontal spatial disparities produced by viewing the same object with two laterally placed eyes to derive depth, or 3-D stereo, information. This provides the advantage of breaking the camouflage of objects in front of a similarly textured background, and improves hand-eye coordination for grasping objects close at hand. It is widely thought that about 5% of the population have a lazy eye and lack stereo vision, so it is often supposed that most of the population (95%) have good stereo abilities. We show that this is not the case: 68% have good to excellent stereo (the haves) and 32% have moderate to poor stereo (the have-nots). Why so many people lack good 3-D stereo vision is unclear, but the cause is likely to be neural and reversible.

  1. Stereo vision enhances the learning of a catching skill.

    PubMed

    Mazyn, Liesbeth I N; Lenoir, Matthieu; Montagne, Gilles; Delaey, Christophe; Savelsbergh, Geert J P

    2007-06-01

The aim of this study was to investigate the contribution of stereo vision to the acquisition of a natural interception task. Poor catchers with good (N = 8; Stereo+) and weak (N = 6; Stereo-) stereo vision participated in an intensive training program spread over 2 weeks, during which they caught over 1,400 tennis balls in a pre-post-retention design. While the Stereo+ group improved from a catching percentage of 18% to 59%, catchers in the Stereo- group did not significantly improve (from 10% to 31%), their progress being indistinguishable from that of a control group (N = 9) that did not practice at all. These results indicate that the development and use of compensatory cues for depth perception in people with weak stereopsis is insufficient to deal successfully with interceptions under high temporal constraints, and that this disadvantage cannot be fully attenuated by specific and intensive training.

  2. The study of stereo vision technique for the autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Li, Pei; Wang, Xi; Wang, Jiang-feng

    2015-08-01

Stereo vision technology using two or more cameras can recover 3D information within the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle to judge pavement conditions within the field of view and to measure obstacles on the road. In this paper, stereo vision technology for obstacle avoidance by the autonomous vehicle is studied, and the key techniques are analyzed and discussed. The system hardware is built and the software is debugged, and the measurement performance is finally illustrated with measured data. Experiments show that the 3D structure within the field of view can be effectively reconstructed by stereo vision, providing a basis for judging pavement conditions. Compared with the navigation radar used on unmanned vehicles, the stereo vision system has the advantages of low cost, range, and so on, and it has good application prospects.

  3. Classification error analysis in stereo vision

    NASA Astrophysics Data System (ADS)

    Gross, Eitan

    2015-07-01

Depth perception in humans is obtained by comparing images generated by the two eyes to each other. Given the highly stochastic nature of neurons in the brain, this comparison requires maximizing the mutual information (MI) between the neuronal responses in the two eyes by distributing the coding information across a large number of neurons. Unfortunately, MI is not an extensive quantity, making it very difficult to predict how the accuracy of depth perception will vary with the number of neurons (N) in each eye. To address this question we present a two-arm, distributed, decentralized sensor detection model. We demonstrate how the system can extract depth information from a pair of discrete-valued stimuli, represented here by a pair of random-dot stereograms. Using the theory of large deviations, we calculated the rates at which the global average error probability of our detector and the MI between the two arms' outputs vary with N. We found that MI saturates exponentially with N, at a rate which decays as 1/N, and that the rate function approaches the Chernoff distance between the two probability distributions asymptotically. Our results may have implications for computer stereo vision systems that use Hebbian-based algorithms for terrestrial navigation.

  4. Rapid matching of stereo vision based on fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei

    2016-09-01

As the core of stereo vision, stereo matching still presents many unsolved problems. For smooth surfaces on which feature points are not easy to extract, this paper adds a projector to the stereo vision measurement system and applies fringe projection techniques: since corresponding points extracted from the left and right camera images share the same phase, rapid stereo matching can be realized. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method broadens the application fields of optical 3D measurement technology and offers a path toward commercialized measurement systems for practical projects, giving it both scientific and economic value.
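The per-pixel phase used for matching is typically recovered from phase-shifted fringe patterns. A standard four-step phase-shifting formula is sketched below (this is the generic textbook recovery, not necessarily the exact scheme used in the paper); pixels with equal wrapped phase in the left and right views become candidate correspondences:

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by 90 degrees,
    modelled as I_k = A + B*cos(phi + k*pi/2).
    Then i3 - i1 = 2B*sin(phi) and i0 - i2 = 2B*cos(phi),
    so arctan2 recovers phi independently of A and B."""
    return np.arctan2(i3 - i1, i0 - i2)
```

Synthesizing four intensities for a known phase and feeding them back through `wrapped_phase` returns that phase, which makes the formula easy to verify.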

  5. A stereo model based upon mechanisms of human binocular vision

    NASA Technical Reports Server (NTRS)

    Griswold, N. C.; Yeh, C. P.

    1986-01-01

A model for stereo vision, based on the human binocular vision system, is proposed. Data collected from studies of the neurophysiology of the human binocular system are discussed. An algorithm for the implementation of this stereo vision model is derived. The algorithm is tested on computer-generated and real scene images. Examples of a computer-generated image and a grey-level image are presented. It is noted that the proposed method is computationally efficient for depth perception, and the results indicate accuracies that are noise tolerant.

  6. Static stereo vision depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, D. B.; Von Sydow, M.

    1988-01-01

    A major problem in high-precision teleoperation is the high-resolution presentation of depth information. Stereo television has so far proved to be only a partial solution, due to an inherent trade-off among depth resolution, depth distortion and the alignment of the stereo image pair. Converged cameras can guarantee image alignment but suffer significant depth distortion when configured for high depth resolution. Moving the stereo camera rig to scan the work space further distorts depth. The 'dynamic' (camera-motion induced) depth distortion problem was solved by Diner and Von Sydow (1987), who have quantified the 'static' (camera-configuration induced) depth distortion. In this paper, a stereo image presentation technique which yields aligned images, high depth resolution and low depth distortion is demonstrated, thus solving the trade-off problem.

  7. Stereo Vision-Based Robot Servoing Control for Object Grasping

    NASA Astrophysics Data System (ADS)

    Xiao, Nan-Feng; Todo, Isao

    In this paper, a stereo vision-based robot servoing control approach is presented for object grasping. Firstly, three-dimensional projective reconstruction with two free-standing CCD cameras and homogeneous transformation are used to specify the goal grasping position and orientation of a robot hand. Secondly, a stereo vision-based servoing problem is formulated, and a stereo vision-based servoing control algorithm which is independent of robotic dynamics is proposed. Using this algorithm, a set of velocity reference inputs can be obtained to control the motions and velocities of the robot hand during the visual servoing. Thirdly, the methods for coping with the time delay of image processing and the CCD camera calibration are put forward. Lastly, the effectiveness of the present approach is verified by carrying out several experiments on object grasping using a 6 degrees of freedom robot. Its stability and robustness as well as flexibility are also confirmed by the experimental results.

  8. Passive Night Vision Sensor Comparison for Unmanned Ground Vehicle Stereo Vision Navigation

    NASA Technical Reports Server (NTRS)

    Owens, Ken; Matthies, Larry

    2000-01-01

    One goal of the "Demo III" unmanned ground vehicle program is to enable autonomous nighttime navigation at speeds of up to 10 m.p.h. To perform obstacle detection at night with stereo vision will require night vision cameras that produce adequate image quality for the driving speeds, vehicle dynamics, obstacle sizes, and scene conditions that will be encountered. This paper analyzes the suitability of four classes of night vision cameras (3-5 micrometer cooled FLIR, 8-12 micrometer cooled FLIR, 8-12 micrometer uncooled FLIR, and image intensifiers) for night stereo vision, using criteria based on stereo matching quality, image signal to noise ratio, motion blur and synchronization capability. We find that only cooled FLIRs will enable stereo vision performance that meets the goals of the Demo III program for nighttime autonomous mobility.

  9. Continuous motion using task-directed stereo vision

    NASA Technical Reports Server (NTRS)

    Gat, Erann; Loch, John L.

    1992-01-01

    The performance of autonomous mobile robots performing complex navigation tasks can be dramatically improved by directing expensive sensing and planning in service of the task. The task-direction algorithms can be quite simple. In this paper we describe a simple task-directed vision system which has been implemented on a real outdoor robot which navigates using stereo vision. While the performance of this particular robot was improved by task-directed vision, the performance of task-directed vision in general is influenced in complex ways by many factors. We briefly discuss some of these, and present some initial simulated results.

  10. A Portable Stereo Vision System for Whole Body Surface Imaging.

    PubMed

    Yu, Wurong; Xu, Bugao

    2010-04-01

    This paper presents a whole body surface imaging system based on stereo vision technology. We have adopted a compact and economical configuration which involves only four stereo units to image the frontal and rear sides of the body. The success of the system depends on a stereo matching process that can effectively segment the body from the background in addition to recovering sufficient geometric details. For this purpose, we have developed a novel sub-pixel, dense stereo matching algorithm which includes two major phases. In the first phase, the foreground is accurately segmented with the help of a predefined virtual interface in the disparity space image, and a coarse disparity map is generated with block matching. In the second phase, local least squares matching is performed in combination with global optimization within a regularization framework, so as to ensure both accuracy and reliability. Our experimental results show that the system can realistically capture smooth and natural whole body shapes with high accuracy.
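The coarse block-matching phase can be sketched with a plain sum-of-absolute-differences (SAD) search over a rectified pair. Window size and search range below are illustrative; the paper's matcher additionally uses the virtual-interface segmentation and a least-squares refinement phase that this sketch omits:

```python
import numpy as np

def block_match_row(left, right, v, u, block=3, max_disp=16):
    """SAD block matching for one pixel of a rectified pair: returns
    the disparity d minimising sum |L(v,u) - R(v,u-d)| over the block."""
    h = block // 2
    patch = left[v - h:v + h + 1, u - h:u + h + 1].astype(float)
    best_d, best_cost = 0, np.inf
    # Search only disparities that keep the candidate window in-bounds.
    for d in range(min(max_disp, u - h) + 1):
        cand = right[v - h:v + h + 1, u - d - h:u - d + h + 1].astype(float)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

On a synthetic pair where the left image is the right image shifted by 4 pixels, the search recovers a disparity of 4 at interior pixels.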

  11. Self-supervised learning in cooperative stereo vision correspondence.

    PubMed

    Decoux, B

    1997-02-01

    This paper presents a neural network model of stereoscopic vision in which a process of fusion seeks the correspondence between points of stereo inputs. Stereo fusion is obtained after a self-supervised learning phase, so called because the learning rule is a supervised-learning rule in which the supervisory information is autonomously extracted from the visual inputs by the model. This supervisory information arises from a global property of the potential matches between the points. The proposed neural network, which is of the cooperative type, and the learning procedure are tested with random-dot stereograms (RDS) and feature points extracted from real-world images. These feature points are extracted by a technique based on sigma-pi units. The matching performance and the generalization ability of the model are quantified. The relationship between what the network has learned and the constraints used in previous cooperative models of stereo vision is discussed.

  12. Stereo-vision: head-centric coding of retinal signals.

    PubMed

    van Ee, Raymond; Erkelens, Casper J

    2010-07-13

    Stereo-vision is generally considered to provide information about depth in a visual scene derived from disparities in the positions of an image on the two eyes; a new study has found evidence that retinal-image coding relative to the head is also important.

  13. Stereo vision based hand-held laser scanning system design

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Wang, Jinming

    2011-11-01

    Although 3D scanning systems are used increasingly broadly in many fields, such as computer animation, computer-aided design, and digital museums, a convenient scanning device is too expensive for most people to afford. On the other hand, imaging devices are becoming cheaper, and a stereo vision system with two video cameras costs little. In this paper, a hand-held laser scanning system is designed based on the stereo vision principle. The two video cameras are fixed rigidly together and are both calibrated in advance. The scanned object, with coded markers attached, is placed in front of the stereo system, and its position and orientation can be changed freely as the scan requires. During scanning, the operator sweeps a line laser source, projecting it onto the object. At the same time, the stereo vision system captures the projected lines and reconstructs their 3D shapes. The coded markers are used to transform the coordinate systems of points scanned from different views. Two methods are used to obtain more accurate results. One is to interpolate the cross-sections of the laser lines with NURBS curves to obtain accurate center points; a thin-plate spline then approximates the center points, yielding an exact laser center line that guarantees an accurate correspondence between the two cameras. The other is to incorporate the constraint of the laser swept plane on the reconstructed 3D curves through a PCA (Principal Component Analysis) algorithm, which yields more accurate results. Examples are given to verify the system.
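The PCA-based swept-plane constraint can be sketched as a plane fit to the reconstructed laser-line points, with points then projected onto the fitted plane to suppress reconstruction noise; function and variable names here are illustrative.

```python
import numpy as np

def fit_laser_plane(points):
    """Fit a plane to reconstructed 3-D laser-line points via PCA.

    The plane normal is the direction of least variance (smallest
    singular value) of the centred point cloud. An illustrative sketch,
    not the paper's implementation.
    """
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]  # unit plane normal
    # Project arbitrary points onto the fitted plane.
    project = lambda p: p - np.outer((p - c) @ n, n)
    return c, n, project
```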

  14. Extracting depth by binocular stereo in a robot vision system

    SciTech Connect

    Marapane, S.B.; Trivedi, M.M.

    1988-01-01

    A new generation of robotic systems will operate in complex, unstructured environments utilizing sophisticated sensory mechanisms. Vision and range will be two of the most important sensory modalities such a system will utilize to sense its operating environment. Measurement of depth is critical for the success of many robotic tasks, such as object recognition and location, obstacle avoidance and navigation, and object inspection. In this paper we consider the development of a binocular stereo technique for extracting depth information in a robot vision system for inspection and manipulation tasks. The ability to produce precise depth measurements over a wide range of distances and the passivity of the approach make binocular stereo techniques attractive and appropriate for range finding in a robotic environment. This paper describes work in progress towards the development of a region-based binocular stereo technique for a robot vision system designed for inspection and manipulation, and presents preliminary experiments designed to evaluate the performance of the approach. Results of these studies show promise for the region-based stereo matching approach. 16 refs., 1 fig.

  15. Problem-oriented stereo vision quality evaluation complex

    NASA Astrophysics Data System (ADS)

    Sidorchuk, D.; Gusamutdinova, N.; Konovalenko, I.; Ershov, E.

    2015-12-01

    We describe an original low-cost hardware setting for efficient testing of stereo vision algorithms. The method uses a combination of a special hardware setup and a mathematical model; it is easy to construct and precise in the applications of our interest. For a known scene we derive its analytical representation, called the virtual scene. Using a four-point correspondence between the scene and the virtual one, we compute the extrinsic camera parameters and project the virtual scene onto the image plane, which gives the ground truth for the depth map. Another result presented in this paper is a new depth map quality metric. Its main purpose is to tune stereo algorithms for a particular problem, e.g. obstacle avoidance.

  16. ROS-based ground stereo vision detection: implementation and experiments.

    PubMed

    Hu, Tianjiang; Zhao, Boxin; Tang, Dengqing; Zhang, Daibing; Kong, Weiwei; Shen, Lincheng

    This article concentrates on an open-source implementation of flying-object detection in cluttered scenes, which is of significance for ground stereo-aided autonomous landing of unmanned aerial vehicles. The ground stereo vision guidance system is presented with details on system architecture and workflow. The Chan-Vese detection algorithm is further considered and implemented in the robot operating system (ROS) environment. A data-driven interactive scheme is developed to collect datasets for parameter tuning and performance evaluation. Outdoor flight experiments captured stereo sequential image datasets and recorded simultaneous data from the pan-and-tilt unit, onboard sensors and differential GPS. Experimental results using the collected datasets validate the effectiveness of the published ROS-based detection algorithm.

  17. Stereo vision for planetary rovers - Stochastic modeling to near real-time implementation

    NASA Technical Reports Server (NTRS)

    Matthies, Larry

    1991-01-01

    JPL has achieved the first autonomous cross-country robotic traverses to use stereo vision, with all computing onboard the vehicle. This paper describes the stereo vision system, including the underlying statistical model and the details of the implementation. It is argued that the overall approach provides a unifying paradigm for practical domain-independent stereo ranging.

  18. Binocular stereo vision system based on phase matching

    NASA Astrophysics Data System (ADS)

    Liu, Huixian; Huang, Shujun; Gao, Nan; Zhang, Zonghua

    2016-11-01

    Binocular stereo vision is an efficient way to measure three-dimensional (3D) profiles and has broad applications. Image acquisition, camera calibration, stereo matching, and 3D reconstruction are its four main steps. Among them, stereo matching is the most important step and has a significant impact on the final result. In this paper, a new stereo matching technique is proposed that combines the absolute fringe order and the unwrapped phase of every pixel. Unlike traditional phase matching methods, sinusoidal fringes in two perpendicular directions are projected. The technique is realized in three steps. Firstly, colored sinusoidal fringes in both the horizontal (red) and vertical (blue) directions are projected onto the object to be measured and captured by two cameras synchronously. The absolute fringe order and the unwrapped phase of each pixel along the two directions are calculated based on the optimum three-fringe-numbers selection method. Then, based on the absolute fringe orders of the left and right phase maps, a stereo matching method is presented. In this process, the same absolute fringe orders in both the horizontal and vertical directions are searched to find the corresponding point. Based on this technique, as many pairs of homologous points between the two cameras as possible are found to improve the precision of the measurement result. Finally, a 3D measuring system is set up and the 3D reconstruction results are shown. The experimental results show that the proposed method can meet the high-precision requirements of industrial measurements.

  19. Research on robot navigation vision sensor based on grating projection stereo vision

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision for a mobile robot in dark environments is proposed. The method combines the grating projection profilometry of plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping) and visual odometry for mobile robot navigation in dark environments, without the image matching of stereo vision technology and without the phase unwrapping of grating projection profilometry. First, we study the new vision sensor theoretically and build a geometric and mathematical model of the grating projection stereo vision system. Second, the computational method for the 3D coordinates of space obstacles in the robot's visual field is studied, and the obstacles in the field are then located accurately. The results of simulation experiments and analysis show that this research helps to address the problem of autonomous navigation of mobile robots in dark environments, and provides a theoretical basis and an exploration direction for further study on the navigation of space-exploring robots in dark and GPS-denied environments.

  20. Stereo vision based automated grasp planning

    NASA Astrophysics Data System (ADS)

    Wilhelmsen, Karl; Huber, Loretta; Silva, Dennis; Grasz, Ema; Cadapan, Loreli

    1995-02-01

    The Department of Energy has a need for treating existing nuclear waste. Hazardous waste stored in old warehouses needs to be sorted and treated to meet environmental regulations. Lawrence Livermore National Laboratory is currently experimenting with automated manipulation of unknown objects for sorting, treating, and detailed inspection. To accomplish these tasks, three existing technologies were expanded to meet the increasing requirements. First, a binocular vision range sensor was combined with a surface modeling system to make virtual images of unknown objects. Then, using the surface model information, stable grasps of the unknown-shaped objects were planned algorithmically utilizing a limited set of robotic grippers. This paper is an expansion of previous work and discusses the grasp planning algorithm.

  1. GPU-based real-time trinocular stereo vision

    NASA Astrophysics Data System (ADS)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which can benefit applications like distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse the disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed on a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
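The winner-take-all selection over fused matching costs can be sketched as follows; summing the per-direction cost volumes is an assumed fusion rule for illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

def wta_fuse(cost_volumes):
    """Winner-take-all disparity from fused matching-cost volumes.

    cost_volumes has shape (n_directions, D, H, W): one D x H x W cost
    volume per camera-pair direction. Costs are summed across directions
    (an illustrative fusion rule) and the minimum-cost disparity is
    selected per pixel.
    """
    fused = np.sum(cost_volumes, axis=0)  # combine directions
    return np.argmin(fused, axis=0)       # per-pixel winner
```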

  2. Dynamic Trajectory Extraction from Stereo Vision Using Fuzzy Clustering

    NASA Astrophysics Data System (ADS)

    Onishi, Masaki; Yoda, Ikushi

    In recent years, many human-tracking methods have been proposed for analyzing human trajectories. These are general technologies applicable to various fields, such as customer purchase analysis in a shopping environment and safety control at a railroad crossing. In this paper, we present a new approach for tracking human positions from stereo images. We use a two-step clustering framework with the k-means method and fuzzy clustering to detect human regions. In the initial clustering, the k-means method rapidly forms intermediate clusters from features extracted by stereo vision. In the final clustering, fuzzy c-means groups the intermediate clusters into human regions based on their attributes. By expressing ambiguity through fuzzy clustering, our proposed method clusters correctly even when many people are close to each other. The validity of our technique was evaluated by extracting the trajectories of doctors and nurses in the emergency room of a hospital.
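The second clustering stage can be illustrated with a minimal fuzzy c-means implementation; the fuzzifier m and the simple initialization below are generic choices, not the paper's.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=50):
    """Minimal fuzzy c-means; returns memberships U (n x c) and centres.

    An illustrative sketch of fuzzy clustering: memberships follow the
    standard update u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)), centres are
    membership-weighted means. Initial centres are spread over the data.
    """
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        Um = (U ** m).T
        centers = Um @ X / Um.sum(axis=1, keepdims=True)
    return U, centers
```

Each point receives a soft membership in every cluster, which is what lets nearby people be separated gracefully rather than forced into hard assignments.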

  3. Three-dimensional motion estimation using genetic algorithms from image sequence in an active stereo vision system

    NASA Astrophysics Data System (ADS)

    Dipanda, Albert; Ajot, Jerome; Woo, Sanghyuk

    2003-06-01

    This paper proposes a method for estimating 3D rigid motion parameters from an image sequence of a moving object. The 3D surface measurement is achieved using an active stereovision system composed of a camera and a light projector, which illuminates the objects to be analyzed with a pyramid-shaped laser beam. By associating the laser rays with the spots in the 2D image, the 3D points corresponding to these spots are reconstructed. Each image of the sequence provides a set of 3D points, which is modeled by a B-spline surface. Therefore, estimating the motion between two images of the sequence boils down to matching two B-spline surfaces. We formulate the matching as an optimization problem and find the optimal solution using genetic algorithms. A chromosome is encoded by concatenating six binary-coded parameters: the three angles of rotation and the x-axis, y-axis and z-axis translations. We have defined an original fitness function to calculate the similarity measure between two surfaces. The matching process is performed iteratively: the number of points to be matched grows as the process advances, and results are refined until convergence. Experimental results with a real image sequence are presented to show the effectiveness of the method.

  4. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system, which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.

  5. Stereo vision for small targets in IR image sequences

    NASA Astrophysics Data System (ADS)

    Jutzi, Boris; Gabler, Richard; Jaeger, Klaus

    2001-11-01

    Surveillance systems against missile attacks require the automatic detection of targets with a low false alarm rate (FAR). Infrared Search and Track (IRST) systems offer passive detection of threats at long ranges. For maximum reaction time and the arrangement of countermeasures, it is necessary to declare the objects as early as possible. For this purpose the detection and tracking algorithms have to deal with point objects. Conventional object features like shape, size and texture are usually unreliable for small objects. More reliable features of point objects are three-dimensional spatial position and velocity. At least two sensors observing the same scene are required for multi-ocular stereo vision. Three steps are essential for successful stereo image processing. First, precise camera calibration (estimating the intrinsic and extrinsic parameters) is necessary to satisfy the demand for high accuracy, especially for long-range targets. Second, the correspondence problem for the detected objects must be solved. Third, the three-dimensional location of the potential target has to be determined by projective transformation. For an evaluation, a measurement campaign to capture image data was carried out with real targets using two identical IR cameras, and synthetic IR image sequences were additionally generated and processed. In this paper a straightforward solution for stereo analysis based on stationary binocular sensors is presented, current results are shown, and suggestions for future work are given.

  6. Vision-based stereo ranging as an optimal control problem

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Sridhar, B.; Chatterji, G. B.

    1992-01-01

    The recent interest in the use of machine vision for flight vehicle guidance is motivated by the need to automate the nap-of-the-earth flight regime of helicopters. The vision-based stereo ranging problem is cast as an optimal control problem in this paper. A quadratic performance index, consisting of the integral of the error between observed image irradiances and those predicted by a Pade approximation of the correspondence hypothesis, is then used to define an optimization problem. The necessary conditions for optimality yield a set of linear two-point boundary-value problems. These two-point boundary-value problems are solved in feedback form using a version of the backward sweep method. Application of the ranging algorithm is illustrated using a laboratory image pair.

  7. Incorporating polarization in stereo vision-based 3D perception of non-Lambertian scenes

    NASA Astrophysics Data System (ADS)

    Berger, Kai; Voorhies, Randolph; Matthies, Larry

    2016-05-01

    Surfaces with specular, non-Lambertian reflectance are common in urban areas. Robot perception systems for applications in urban environments need to function effectively in the presence of such materials; however, both passive and active 3-D perception systems have difficulties with them. In this paper, we develop an approach using a stereo pair of polarization cameras to improve passive 3-D perception of specular surfaces. We use a commercial stereo camera pair with rotatable polarization filters in front of each lens to capture images with multiple orientations of the polarization filter. From these images, we estimate the degree of linear polarization (DOLP) and the angle of polarization (AOP) at each pixel in at least one camera. The AOP constrains the corresponding surface normal in the scene to lie in the plane of the observed angle of polarization. We embody this constraint in an energy functional for a regularization-based stereo vision algorithm. This paper describes the theory of polarization needed for this approach, describes the new stereo vision algorithm, and presents results on synthetic and real images to evaluate performance.
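The DOLP and AOP estimates can be computed from intensities captured at several polarizer orientations. The four-angle Stokes-parameter sampling below is a common convention and may differ from the paper's exact capture procedure.

```python
import numpy as np

def dolp_aop(i0, i45, i90, i135):
    """Degree and angle of linear polarization from four filter angles.

    Standard linear Stokes-parameter estimates: S0 is total intensity,
    S1 and S2 encode the linear polarization state. A generic sketch of
    the quantities the paper estimates, not its implementation.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)      # radians
    return dolp, aop
```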

  8. MARVEL: A system that recognizes world locations with stereo vision

    SciTech Connect

    Braunegg, D.J. . Artificial Intelligence Lab.)

    1993-06-01

    MARVEL is a system that supports autonomous navigation by building and maintaining its own models of world locations and using these models and stereo vision input to recognize its location in the world and its position and orientation within that location. The system emphasizes the use of simple, easily derivable features for recognition, whose aggregate identifies a location, instead of complex features that also require recognition. MARVEL is designed to be robust with respect to input errors and to respond to a gradually changing world by updating its world location models. In over 1,000 recognition tests using real-world data, MARVEL yielded a false negative rate under 10% with zero false positives.

  9. Visual tracking in stereo. [by computer vision system

    NASA Technical Reports Server (NTRS)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
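The correction step described above can be sketched as a single linear update, assuming a Jacobian J of the predicted 2-D feature locations with respect to the internal model state is available; all names are illustrative.

```python
import numpy as np

def update_model(state, predicted_2d, observed_2d, J):
    """One correction step of a Jacobian-based visual tracker.

    The 2-D feature error is mapped back to a state (pose/velocity)
    error through the generalized inverse (pseudoinverse) of J, then
    added to the internal model state. An illustrative sketch.
    """
    e2d = (observed_2d - predicted_2d).ravel()  # image-space error
    de = np.linalg.pinv(J) @ e2d                # state-space error
    return state + de
```

For a linear measurement model the update recovers the true state in one step; in the real system the step is repeated continuously as the Jacobian and predictions change.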

  10. Modelling the Environment of an Exploring Vehicle by Means of Stereo Vision

    DTIC Science & Technology

    1980-06-01

    ...incorrect matches in stereo vision data and poor accuracy of distances from stereo. 7.1 General Description. Many approaches are possible in describing the... MODELLING THE ENVIRONMENT OF AN EXPLORING VEHICLE BY MEANS OF STEREO VISION, by Donald B. Gennery. Research sponsored by... Advanced... and obstacle avoidance. The techniques operate by using three-dimensional data which they can produce by means of stereo vision from stereo pictures.

  11. The analysis of measurement accuracy of the parallel binocular stereo vision system

    NASA Astrophysics Data System (ADS)

    Yu, Huan; Xing, Tingwen; Jia, Xin

    2016-09-01

    A parallel binocular stereo vision system is a special form of binocular vision system. To simulate the observation state of the human eyes, the two cameras used to obtain images of the target scene are placed parallel to each other. This paper builds a triangular geometric model, analyzes the structural parameters of the parallel binocular stereo vision system and the correlations between them, and discusses the influence of the baseline distance B between the two cameras, the focal length f, the field-of-view angle ω and other structural parameters on measurement accuracy. Matlab software is used to evaluate the error function of the parallel binocular stereo vision system under different structural parameters, and the simulation results show the ranges of structural parameters for which the errors are small, thereby improving the accuracy of the parallel binocular stereo vision system.
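The basic accuracy relation for such a rig follows from triangulation: depth is Z = fB/d, so a disparity error dd produces a depth error that grows quadratically with depth and shrinks as the baseline or focal length grows. A minimal sketch of this first-order error model:

```python
import numpy as np

def depth_and_error(f_px, baseline, disparity, disp_err=1.0):
    """Depth and first-order depth error for a parallel stereo rig.

    Z = f*B/d and dZ ~= (Z^2 / (f*B)) * dd, with f_px the focal length
    in pixels, baseline in metres, disparity and its error in pixels.
    A textbook relation, not the paper's full error function.
    """
    z = f_px * baseline / disparity
    dz = (z**2 / (f_px * baseline)) * disp_err
    return z, dz
```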

  12. Simultaneous perimeter measurement for 3D object with a binocular stereo vision measurement system

    NASA Astrophysics Data System (ADS)

    Peng, Zhao; Guo-Qiang, Ni

    2010-04-01

    A simultaneous measurement scheme for the surface boundary perimeters of multiple three-dimensional (3D) objects is proposed. The scheme consists of three steps. First, a binocular stereo vision measurement system with two CCD cameras is devised to obtain two images of the detected objects' 3D surface boundaries. Second, two geodesic active contours are applied to converge simultaneously to the objects' contour edges in the two CCD images to perform the stereo matching. Finally, the multiple spatial contours are reconstructed using cubic B-spline curve interpolation, and the true length of every spatial contour is computed as the true boundary perimeter of the corresponding 3D object. An experiment measuring the curved-surface perimeters of four 3D objects indicates that the scheme's measurement repetition error is as low as 0.7 mm.

  13. Stereo vision tracking of multiple objects in complex indoor environments.

    PubMed

    Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro

    2010-01-01

    This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacles position estimation process. The system obtains 3D position and speed information related to each object in the robot's environment; then it achieves a classification between building elements (ceiling, walls, columns and so on) and the rest of items in robot surroundings. All objects in robot surroundings, both dynamic and static, are considered to be obstacles but the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of speed and position of detected obstacles. Performance of the final system has been tested against state of the art proposals; test results validate the authors' proposal. The designed algorithms and procedures provide a solution to those applications where similar multimodal data structures are found.

  14. The role of stereo vision in visual-vestibular integration.

    PubMed

    Butler, John S; Campos, Jennifer L; Bülthoff, Heinrich H; Smith, Stuart T

    2011-01-01

    Self-motion through an environment stimulates several sensory systems, including the visual system and the vestibular system. Recent work in heading estimation has demonstrated that visual and vestibular cues are typically integrated in a statistically optimal manner, consistent with Maximum Likelihood Estimation predictions. However, there has been some indication that cue integration may be affected by characteristics of the visual stimulus. Therefore, the current experiment evaluated whether presenting optic flow stimuli stereoscopically, or presenting both eyes with the same image (binocularly) affects combined visual-vestibular heading estimates. Participants performed a two-interval forced-choice task in which they were asked which of two presented movements was more rightward. They were presented with either visual cues alone, vestibular cues alone or both cues combined. Measures of reliability were obtained for both binocular and stereoscopic conditions. Group level analyses demonstrated that when stereoscopic information was available there was clear evidence of optimal integration, yet when only binocular information was available weaker evidence of cue integration was observed. Exploratory individual analyses demonstrated that for the stereoscopic condition 90% of participants exhibited optimal integration, whereas for the binocular condition only 60% of participants exhibited results consistent with optimal integration. Overall, these findings suggest that stereo vision may be important for self-motion perception, particularly under combined visual-vestibular conditions.

  16. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    NASA Astrophysics Data System (ADS)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for the quadruped robot autonomous navigation system while walking over rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors have large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To address mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the stereo-matched edge pixel pairs, the 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs 3D scenes quickly and efficiently.

  17. A stereo-vision hazard-detection algorithm to increase planetary lander autonomy

    NASA Astrophysics Data System (ADS)

    Woicke, Svenja; Mooij, Erwin

    2016-05-01

    For future landings on any celestial body, increasing the lander autonomy as well as decreasing risk are primary objectives. Both risk reduction and an increase in autonomy can be achieved by including hazard detection and avoidance in the guidance, navigation, and control loop. One of the main challenges in hazard detection and avoidance is the reconstruction of accurate elevation models, as well as slope and roughness maps. Multiple methods for acquiring the inputs for hazard maps are available; the main distinction is between active and passive methods. Passive methods (cameras) have budgetary advantages over active sensors (radar, light detection and ranging), but it is necessary to prove that they deliver sufficiently good maps. Therefore, this paper discusses hazard detection using stereo vision. To facilitate a successful landing, no more than 1% missed detections (hazards that are not identified) are allowed. Based on a sensitivity analysis, it was found that a stereo set-up with a baseline of ≤ 2 m is feasible at altitudes of ≤ 200 m, with false positives of less than 1%. It was thus shown that stereo-based hazard detection is an effective means to decrease the landing risk and increase the lander autonomy. In conclusion, the proposed algorithm is a promising candidate for future landers.

  18. A quantitative evaluation of confidence measures for stereo vision.

    PubMed

    Hu, Xiaoyan; Mordohai, Philippos

    2012-11-01

    We present an extensive evaluation of 17 confidence measures for stereo matching that compares the most widely used measures as well as several novel techniques proposed here. We begin by categorizing these methods according to which aspects of stereo cost estimation they take into account and then assess their strengths and weaknesses. The evaluation is conducted using a winner-take-all framework on binocular and multibaseline datasets with ground truth. It measures the capability of each confidence method to rank depth estimates according to their likelihood for being correct, to detect occluded pixels, and to generate low-error depth maps by selecting among multiple hypotheses for each pixel. Our work was motivated by the observation that such an evaluation is missing from the rapidly maturing stereo literature and that our findings would be helpful to researchers in binocular and multiview stereo.
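One classic member of this family of confidence measures is the peak ratio, shown here as an illustrative example rather than as one of the paper's exact 17 measures: the more distinctive the best matching cost relative to the runner-up, the more trustworthy the disparity.

```python
import numpy as np

def peak_ratio_confidence(costs):
    """Peak-ratio confidence for one pixel's matching-cost curve.

    Ratio of the second-smallest to the smallest cost; values near 1
    indicate an ambiguous match, larger values a distinctive minimum.
    A generic sketch of this class of measure.
    """
    c = np.sort(np.asarray(costs, dtype=float))
    return c[1] / np.maximum(c[0], 1e-12)
```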

  19. Application of Stereo Vision to the Reconnection Scaling Experiment

    SciTech Connect

    Klarenbeek, Johnny; Sears, Jason A.; Gao, Kevin W.; Intrator, Thomas P.; Weber, Thomas

    2012-08-14

    The measurement and simulation of the three-dimensional structure of magnetic reconnection in astrophysical and lab plasmas is a challenging problem. At Los Alamos National Laboratory we use the Reconnection Scaling Experiment (RSX) to model 3D magnetohydrodynamic (MHD) relaxation of plasma filled tubes. These magnetic flux tubes are called flux ropes. In RSX, the 3D structure of the flux ropes is explored with insertable probes. Stereo triangulation can be used to compute the 3D position of a probe from point correspondences in images from two calibrated cameras. While common applications of stereo triangulation include 3D scene reconstruction and robotics navigation, we will investigate the novel application of stereo triangulation in plasma physics to aid reconstruction of 3D data for RSX plasmas. Several challenges will be explored and addressed, such as minimizing 3D reconstruction errors in stereo camera systems and dealing with point correspondence problems.
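
    Stereo triangulation of a probe position from point correspondences in two calibrated cameras can be sketched with the standard linear (DLT) method. The camera matrices below are toy values, not RSX calibration data:

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation: recover the 3D point whose projections
        through the 3x4 camera matrices P1, P2 are the pixels x1, x2."""
        A = np.array([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                 # null vector of A (homogeneous point)
        return X[:3] / X[3]

    # Two toy cameras: identical intrinsics, second camera shifted along x.
    K = np.diag([800.0, 800.0, 1.0])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.2, -0.1, 5.0])
    x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
    x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
    print(triangulate(P1, P2, x1, x2))  # recovers [0.2, -0.1, 5.0]
    ```

    With noisy correspondences the same least-squares formulation degrades gracefully, which is why the abstract emphasizes minimizing reconstruction errors and the correspondence problem.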

  20. The wavelet transform and the suppression theory of binocular vision for stereo image compression

    SciTech Connect

    Reynolds, W.D. Jr; Kenyon, R.V.

    1996-08-01

    In this paper a method for compression of stereo images is presented. The proposed scheme is a frequency-domain approach based on the suppression theory of binocular vision. By using information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, whose subbands convey the necessary frequency-domain information.

  1. An assembly system based on industrial robot with binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly locate the image regions which contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.

  2. Research on the feature set construction method for spherical stereo vision

    NASA Astrophysics Data System (ADS)

    Zhu, Junchao; Wan, Li; Röning, Juha; Feng, Weijia

    2015-01-01

    Spherical stereo vision is a stereo vision system built from fish-eye lenses, requiring stereo algorithms that conform to the spherical model. Epipolar geometry describes the relationship between the two imaging planes of a stereo vision system based on the perspective projection model. However, the epipolar in an uncorrected fish-eye image is not a line but an arc intersecting at the poles: a polar curve. In this paper, the theory of nonlinear epipolar geometry is explored and a method of nonlinear epipolar rectification is proposed to eliminate the vertical parallax between two fish-eye images. The Maximally Stable Extremal Region (MSER) detector uses grayscale as the independent variable and takes local extrema of the area variation as detections. The literature demonstrates that MSER depends only on the gray variations of an image, not on local structural characteristics or image resolution. Here, MSER is combined with the nonlinear epipolar rectification method proposed in this paper, and the intersection of the rectified epipolar curves with the corresponding MSER regions is taken as the feature set for spherical stereo vision. Experiments show that this study achieved the expected results.

  3. Three-dimensional gauging with stereo computer vision

    NASA Astrophysics Data System (ADS)

    Wong, Kam W.; Ke, Ying; Lew, Michael S.; Obaidat, Mohammed T.

    1991-09-01

    Three-dimensional gaging involves the measurement and mapping of 3-D surfaces. Gaging accuracy depends on measurement accuracy in the images, image scale, and stereo geometry. Multiple cameras are often needed to provide adequate stereoscopic coverage of the object. This paper reports on an automatic 3-D gaging system that is being developed at the University of Illinois at Urbana-Champaign. A portable 3-D target field, consisting of 198 targets each identified with a bar code, is used to determine the interior and exterior orientation parameters of each camera. Image-processing algorithms have been developed to identify conjugate image points in stereo pairs of images, and the object-space coordinates of these points are computed by stereo intersection. Software has also been developed for analysis and data editing.

  4. Measurement error analysis of three dimensional coordinates of tomatoes acquired using the binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Xiang, Rong

    2014-09-01

    This study analyzes the measurement errors of three dimensional coordinates of binocular stereo vision for tomatoes based on three stereo matching methods: centroid-based matching, area-based matching, and combination matching, with the aim of improving the localization accuracy of the binocular stereo vision system of tomato harvesting robots. Centroid-based matching was realized through matching the feature points of the centroids of tomato regions. Area-based matching was realized based on the gray similarity between the neighborhoods of the two pixels to be matched in stereo images. Combination matching was realized using the rough disparity acquired through centroid-based matching as the center of the dynamic disparity range used in area-based matching. After stereo matching, three dimensional coordinates of tomatoes were acquired using the triangle range finding principle. Test results based on 225 stereo images of 3 tomatoes captured at distances from 300 to 1000 mm showed that the measurement errors of x coordinates were small and can meet the needs of harvesting robots. However, the measurement biases of y coordinates and depth values were large, and the measurement variation of depth values was also large. Therefore, the measurement biases of y coordinates and depth values, and the measurement variation of depth values, should be corrected in future research.
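
    For a rectified stereo pair, the "triangle range finding principle" reduces to Z = f·B/d. A minimal sketch; the focal length and baseline below are illustrative values for a small harvesting-robot rig, not the study's parameters:

    ```python
    def disparity_to_depth(f_px, baseline_mm, disparity_px):
        """Triangle range finding for a rectified stereo pair: Z = f * B / d."""
        return f_px * baseline_mm / disparity_px

    # Hypothetical rig: f = 1200 px, baseline = 80 mm. Note how depth
    # resolution degrades as disparity shrinks at longer range.
    for d in (320.0, 160.0, 96.0):
        z = disparity_to_depth(1200.0, 80.0, d)
        print(f"disparity {d:5.1f} px -> depth {z:7.1f} mm")
    ```

    The inverse relation between depth and disparity is also why depth errors in the study grow with distance while x errors stay small.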

  5. MARVEL: A System for Recognizing World Locations with Stereo Vision

    DTIC Science & Technology

    1990-05-01

    Baxandall 1983]) to plan their daily commutes or vacation excursions. CHAPTER 9. LOCATION RECOGNITION AND THE WORLD MODEL 9.1 Introduction...Inc. 1982. Baxandall, L. World Guide to Nude Beaches and Recreation. New York: Harmony Books. 1983. Binford, T. O. Survey of stereo mapping systems

  6. The Effects of Avatars, Stereo Vision and Display Size on Reaching and Motion Reproduction.

    PubMed

    Camporesi, Carlo; Kallmann, Marcelo

    2016-05-01

    Thanks to recent advances in motion capture devices and stereoscopic consumer displays, animated virtual characters can now realistically interact with users in a variety of applications. We investigate in this paper the effect of avatars, stereo vision and display size on task execution in immersive virtual environments. We report results obtained with three experiments in varied configurations that are commonly used in rehabilitation applications. The first experiment analyzes the accuracy of reaching tasks under different system configurations: with and without an avatar, with and without stereo vision, and employing a 2D desktop monitor versus a large multi-tile visualization display. The second experiment analyzes the use of avatars and user-perspective stereo vision on the ability to perceive and subsequently reproduce motions demonstrated by an autonomous virtual character. The third experiment evaluates the overall user experience with a complete immersive user interface for motion modeling by direct demonstration. Our experiments expose and quantify the benefits of using stereo vision and avatars, and show that the use of avatars improves the quality of produced motions and the resemblance of replicated motions; however, direct interaction in user-perspective leads to tasks executed in less time and to targets more accurately reached. These and additional tradeoffs are important for the effective design of avatar-based training systems.

  7. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial lens distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
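
    The radial and decentering distortion the calibration accounts for follows the Brown-Conrady model that OpenCV uses. A sketch of the forward distortion of normalized image coordinates; the coefficient values in the demo are illustrative, not calibration output:

    ```python
    def distort(x, y, k1, k2, p1, p2, k3=0.0):
        """Brown-Conrady model as used by OpenCV: radial (k1, k2, k3) plus
        decentering/tangential (p1, p2) distortion of normalized coords."""
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        return x_d, y_d

    # Pure barrel distortion (k1 < 0) pulls points toward the image centre.
    print(distort(0.5, 0.0, k1=-0.2, k2=0.0, p1=0.0, p2=0.0))
    ```

    Calibration estimates these coefficients (together with the intrinsics) by minimizing reprojection error over the detected checkerboard corners.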

  8. Vehicle detection by means of stereo vision-based obstacles features extraction and monocular pattern analysis.

    PubMed

    Toulminet, Gwenaëlle; Bertozzi, Massimo; Mousset, Stéphane; Bensrhair, Abdelaziz; Broggi, Alberto

    2006-08-01

    This paper presents a stereo vision system for the detection and distance computation of a preceding vehicle. It is divided into two major steps. Initially, a stereo vision-based algorithm is used to extract relevant three-dimensional (3-D) features in the scene; these features are investigated further in order to select the ones that belong to vertical objects only and not to the road or background. These 3-D vertical features are then used as a starting point for preceding vehicle detection: by using a symmetry operator, a match against a simplified model of a rear vehicle's shape is performed using a monocular vision-based approach that allows the identification of a preceding vehicle. In addition, using the 3-D information previously extracted, an accurate distance computation is performed.

  9. SAD-based stereo vision machine on a System-on-Programmable-Chip (SoPC).

    PubMed

    Zhang, Xiang; Chen, Zhangwei

    2013-03-04

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images. The maximum image size is up to 512 K pixels. This machine is designed to focus on real time stereo vision applications. The stereo vision machine offers good performance and high efficiency in real time. Considering a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained in one second with 5 × 5 matching window and maximum 64 disparity pixels.
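
    The SAD winner-take-all scheme that the paper pipelines in FPGA hardware can be sketched in software. This naive reference version (window size and disparity range are arbitrary demo values) illustrates the matching cost, not the paper's circuit:

    ```python
    import numpy as np

    def sad_disparity(left, right, window=2, max_disp=8):
        """Winner-take-all SAD block matching on a rectified grayscale pair.
        For each pixel, pick the disparity whose window has the smallest
        Sum of Absolute Differences against the left-image window."""
        h, w = left.shape
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(window, h - window):
            for x in range(window + max_disp, w - window):
                patch_l = left[y - window:y + window + 1,
                               x - window:x + window + 1]
                costs = []
                for d in range(max_disp + 1):
                    patch_r = right[y - window:y + window + 1,
                                    x - d - window:x - d + window + 1]
                    costs.append(np.abs(patch_l - patch_r).sum())
                disp[y, x] = int(np.argmin(costs))  # winner takes all
        return disp

    # Synthetic pair: the left image is the right shifted 3 px, so the
    # recovered disparity in the interior should be 3.
    rng = np.random.default_rng(0)
    right = rng.integers(0, 255, (16, 32)).astype(np.int64)
    left = np.roll(right, 3, axis=1)
    d = sad_disparity(left, right, window=2, max_disp=8)
    print(d[8, 20])
    ```

    The hardware version gets its throughput by evaluating all candidate disparities in parallel per clock cycle instead of in this inner loop.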

  10. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    PubMed Central

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images. The maximum image size is up to 512 K pixels. This machine is designed to focus on real time stereo vision applications. The stereo vision machine offers good performance and high efficiency in real time. Considering a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained in one second with 5 × 5 matching window and maximum 64 disparity pixels. PMID:23459385

  11. Development and modeling of a stereo vision focusing system for a field programmable gate array robot

    NASA Astrophysics Data System (ADS)

    Tickle, Andrew J.; Buckle, James; Grindley, Josef E.; Smith, Jeremy S.

    2010-10-01

    Stereo vision is a situation where an imaging system has two or more cameras in order to make it more robust by mimicking the human vision system. By using two inputs, knowledge of their relative geometry can be exploited to derive depth information from the two views they receive. The 3D co-ordinates of an object in an observed scene can be computed from the intersection of the two sets of rays. Presented here is the development of a stereo vision system to focus on an object at the centre of a baseline between two cameras at varying distances. This has been developed primarily for use on a Field Programmable Gate Array (FPGA), but an adaptation of the methodology is also presented for use with a PUMA 560 Robotic Manipulator with a single camera attachment. The two main vision systems considered here are a fixed baseline with an object moving at varying distances from this baseline, and a system with a fixed distance and a varying baseline. These two differing situations provide enough data that the coefficient variables determining system operation can be calibrated automatically with only the baseline value needing to be entered; the system performs all the required calculations for the user, for a baseline of any distance. The limits of the system with regard to focusing accuracy are also presented, along with how the PUMA 560 controls its joints for stereo vision and how it moves from one position to another to attain stereo vision, compared to the two-camera system on the FPGA. The benefits of such a system for range finding in mobile robotics are discussed, and how this approach is advantageous when compared against laser range finders or echolocation using ultrasonics.
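
    The abstract does not give the focusing model itself; a common geometric starting point for focusing two cameras on an object centred on the baseline is the symmetric vergence (toe-in) angle, sketched here as an assumption rather than the paper's calibrated coefficients:

    ```python
    import math

    def vergence_angle_deg(baseline_m, distance_m):
        """Toe-in angle per camera so that both optical axes cross on an
        object centred on the baseline at the given distance."""
        return math.degrees(math.atan((baseline_m / 2.0) / distance_m))

    # Fixed 0.2 m baseline, object at varying distances.
    for z in (0.5, 1.0, 2.0, 4.0):
        a = vergence_angle_deg(0.2, z)
        print(f"distance {z:4.1f} m -> toe-in {a:5.2f} deg per camera")
    ```

    The same relation can be read the other way for the fixed-distance, varying-baseline configuration the paper also studies.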

  12. Stereo vision in spatial-light-modulator-based microscopy.

    PubMed

    Hasler, Malte; Haist, Tobias; Osten, Wolfgang

    2012-06-15

    We propose a technique for realizing stereoscopic microscopy. We employ a spatial-light-modulator-based microscope to record two images under different angles in one shot. We additionally investigate the possibilities of dynamic aberration correction. It is found that aberration correction is unavoidable because of the employed commercial liquid-crystal-on-silicon modulator. Imaging of phase objects and highly reflective specimens is also experimentally investigated. For some of the specimens, an inversion of the recorded intensity is observed, which leads to problems when viewing the stereo pairs. We explain the origin of this effect and show that a reasonable visualization of microscopic three-dimensional objects can be achieved by simple image inversion.

  13. Artificial-vision stereo system as a source of visual information for preventing the collision of vehicles

    SciTech Connect

    Machtovoi, I.A.

    1994-10-01

    This paper explains the principle of automatically determining the position of extended and point objects in 3-D space and of recognizing them by means of an artificial-vision stereo system from the measured coordinates of conjugate points in stereo pairs, and also analyzes methods of identifying these points.

  14. Recovering stereo vision by squashing virtual bugs in a virtual reality environment.

    PubMed

    Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M

    2016-06-19

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity-the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task-a 'bug squashing' game-in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue 'Vision in our three-dimensional world'.
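
    The relative weighting of stereoscopic versus monocular cues that training shifts is commonly modeled as inverse-variance (reliability-weighted) cue combination. A sketch of that standard model with made-up slants and variances, not the study's fitted weights:

    ```python
    def combine_cues(slant_stereo, var_stereo, slant_texture, var_texture):
        """Reliability-weighted cue combination: each cue is weighted by its
        inverse variance. Returns (combined slant, weight on stereo)."""
        w_s = (1.0 / var_stereo) / (1.0 / var_stereo + 1.0 / var_texture)
        return w_s * slant_stereo + (1.0 - w_s) * slant_texture, w_s

    # A stereo-deficient observer (noisy disparity) leans on texture ...
    print(combine_cues(30.0, 100.0, 40.0, 25.0))
    # ... after training, stereo reliability improves and its weight grows.
    print(combine_cues(30.0, 25.0, 40.0, 25.0))
    ```

    Tracking how the stereo weight changes under cue conflict is exactly what lets the paradigm measure training-induced reweighting.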

  15. A novel registration method for image-guided neurosurgery system based on stereo vision.

    PubMed

    An, Yong; Wang, Manning; Song, Zhijian

    2015-01-01

    This study presents a novel spatial registration method for image-guided neurosurgery systems (IGNS) based on stereo vision. Images of the patient's head are captured by a video camera, which is calibrated and tracked by an optical tracking system. A set of sparse facial data points is then reconstructed from the images by stereo vision in the patient space. A surface matching method is used to register the reconstructed sparse points to the facial surface reconstructed from preoperative images of the patient. Simulation experiments verified the feasibility of the proposed method. The proposed method is a new low-cost and easy-to-use spatial registration method for IGNS, with good prospects for clinical application.
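
    The abstract does not detail the surface matching step; the rigid alignment at the core of such registration is commonly computed with the Kabsch/SVD solution, sketched here as an illustration rather than the authors' algorithm:

    ```python
    import numpy as np

    def rigid_register(src, dst):
        """Least-squares rigid transform (Kabsch): find R, t minimizing
        ||dst - (R @ src + t)|| over corresponding point sets (Nx3)."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                       # proper rotation (det = +1)
        t = mu_d - R @ mu_s
        return R, t

    # Recover a known rotation about z and a translation from clean points.
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([10.0, -5.0, 2.0])
    src = np.random.default_rng(1).normal(size=(20, 3))
    dst = src @ R_true.T + t_true
    R, t = rigid_register(src, dst)
    print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
    ```

    In practice the correspondences between sparse stereo points and the preoperative surface are unknown, so this solve is iterated inside an ICP-style loop.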

  16. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    SciTech Connect

    M. D. McKay; M. O. Anderson; R. A. Kinoshita; W. D. Willis

    1999-02-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the ''feel'' of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  17. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    SciTech Connect

    Kinoshita, Robert Arthur; Anderson, Matthew Oley; Mckay, Mark D; Willis, Walter David

    1999-04-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  18. Novel method of calibration with restrictive constraints for stereo-vision system

    NASA Astrophysics Data System (ADS)

    Cui, Jiashan; Huo, Ju; Yang, Ming

    2016-05-01

    Regarding the calibration of a stereo vision measurement system, this paper puts forward a new bundle adjustment algorithm for stereo camera calibration. Multiple-view geometric constraints and a bundle adjustment algorithm are used to accurately optimize the inner and outer parameters of the cameras. A fixed relative constraint between the cameras is introduced. We improve the normal-equation construction of traditional bundle adjustment so that each iteration optimizes only the exterior parameters of the two images taken by the camera pair, with the two cameras bound together by the fixed relative constraint and treated as one camera. This constraint effectively increases the number of redundant observations in the adjustment system, yielding higher accuracy while reducing the dimension of the normal matrix, so each iteration requires less time. Simulation and actual experimental results show the superior performance of the proposed approach in terms of robustness and accuracy, and our approach can also be extended to stereo-vision systems with more than two cameras.

  19. High-accuracy three-dimensional reconstruction of vibration based on stereo vision

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Zhang, Peng; Deng, Huaxia; Wang, Jun

    2016-09-01

    The traditional vibration measurement method usually uses contact sensors, which add unwanted mass and are limited for moving parts. Using stereo vision for vibration measurement is developing fast because it is noncontact and full-field. However, for stereo vision, high-accuracy reconstruction of the vibration motion is a significant challenge, and the factors that affect reconstruction accuracy have not been thoroughly studied. The accuracy analysis of sinusoidal motion reconstruction is important because it can provide guidance for the reconstruction of other vibration motions. High-accuracy reconstruction of sinusoidal motion and the factors that affect its accuracy are presented. First, an error model of reconstruction using stereo vision considering the delay time, frequency, amplitude, and disparity is established. The accuracy of sinusoidal motion reconstruction over the whole period is analyzed theoretically and experimentally considering subpixel interpolation. Peak identification is essential for sinusoidal motion reconstruction and places the highest demand on reconstruction resolution. This requirement is systematically investigated for sinusoidal motions with different frequencies and amplitudes. The relationships between the reconstruction resolution and camera parameters are analyzed.

  20. Implementation of the Canny Edge Detection algorithm for a stereo vision system

    SciTech Connect

    Wang, J.R.; Davis, T.A.; Lee, G.K.

    1996-12-31

    There exist many applications in which three-dimensional information is necessary. For example, in manufacturing systems, parts inspection may require the extraction of three-dimensional information from two-dimensional images through the use of a stereo vision system. In medical applications, one may wish to reconstruct a three-dimensional image of a human organ from two or more transducer images. An important component of three-dimensional reconstruction is edge detection, whereby an image boundary is separated from the background for further processing. In this paper, a modification of the Canny Edge Detection approach is suggested to extract an image from a cluttered background. The resulting cleaned image can then be sent to the image matching, interpolation and inverse perspective transformation blocks to reconstruct the 3-D scene. A brief discussion of the stereo vision system that has been developed at the Mars Mission Research Center (MMRC) is also presented. Results of a version of the Canny Edge Detection algorithm show promise as an accurate edge extractor which may be used in the edge-pixel based binocular stereo vision system.
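
    A distinctive stage of Canny edge detection, which any modification of it builds on, is double-threshold hysteresis: strong edges are kept, and weak edges survive only if connected to a strong one. A minimal sketch of that stage (the iterative label-propagation here is one of several standard implementations, and the toy gradient-magnitude array is made up):

    ```python
    import numpy as np

    def hysteresis_edges(mag, low, high):
        """Canny double-threshold stage: keep pixels with gradient magnitude
        >= high, plus any pixels >= low that are 4-connected to them."""
        strong = mag >= high
        weak = mag >= low
        edges = strong.copy()
        changed = True
        while changed:  # propagate strong labels through connected weak pixels
            changed = False
            grown = np.zeros_like(edges)
            grown[1:-1, 1:-1] = (edges[:-2, 1:-1] | edges[2:, 1:-1] |
                                 edges[1:-1, :-2] | edges[1:-1, 2:])
            new = weak & (edges | grown)
            if not np.array_equal(new, edges):
                edges, changed = new, True
        return edges

    mag = np.array([[0, 0, 0, 0, 0],
                    [0, 9, 4, 4, 0],
                    [0, 0, 0, 4, 0],
                    [0, 5, 0, 4, 0],
                    [0, 0, 0, 0, 0]], dtype=float)
    # The weak chain attached to the strong pixel (value 9) survives;
    # the isolated weak pixel (value 5) is suppressed.
    print(hysteresis_edges(mag, low=3, high=8).astype(int))
    ```

    This is what lets the detector trace faint but continuous boundaries while rejecting isolated noise responses, the behavior needed to separate a part from a cluttered background.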

  1. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  2. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187

  3. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  4. Plant phenotyping using multi-view stereo vision with structured lights

    NASA Astrophysics Data System (ADS)

    Nguyen, Thuy Tuong; Slaughter, David C.; Maloof, Julin N.; Sinha, Neelima

    2016-05-01

    A multi-view stereo vision system for true 3D reconstruction, modeling and phenotyping of plants was created that successfully resolves many of the shortcomings of traditional camera-based 3D plant phenotyping systems. This novel system combines computer algorithms for camera calibration, excessive-green-based plant segmentation, semi-global stereo block matching, disparity bilateral filtering, 3D point cloud processing, and 3D feature extraction with hardware consisting of a hemispherical superstructure designed to hold five stereo pairs of cameras and a custom-designed structured light pattern illumination system. This system is nondestructive and can extract 3D features of whole plants modeled from multiple pairs of stereo images taken at different view angles. The study characterizes the system's phenotyping performance for three 3D plant features: plant height, total leaf area, and total leaf shading area. For plants having specified leaf spacing and size, the algorithms used in our system yielded satisfactory experimental results and demonstrated the ability to study plant development where the same plants were repeatedly imaged and phenotyped over time.

  5. Design of interpolation functions for subpixel-accuracy stereo-vision systems.

    PubMed

    Haller, Istvan; Nedevschi, Sergiu

    2012-02-01

    Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of the stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first proposal uses a histogram to model the environment and applies histogram equalization to an existing solution adapting it to the data. The second proposal employs synthetic images of a known environment and applies function fitting to the resulted data. The resulting function matches the algorithm and the data as best as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions.

  6. On-line measurement of ski-jumper trajectory: combining stereo vision and shape description

    NASA Astrophysics Data System (ADS)

    Nunner, T.; Sidla, O.; Paar, G.; Nauschnegg, B.

    2010-01-01

    Ski jumping has continuously raised major public interest since the early 70s of the last century, mainly in Europe and Japan. The sport undergoes high-level analysis and development, among others, based on biodynamic measurements during the take-off and flight phase of the jumper. We report on a vision-based solution for such measurements that provides a full 3D trajectory of unique points on the jumper's shape. During the jump synchronized stereo images are taken by a calibrated camera system in video rate. Using methods stemming from video surveillance, the jumper is detected and localized in the individual stereo images, and learning-based deformable shape analysis identifies the jumper's silhouette. The 3D reconstruction of the trajectory takes place on standard stereo forward intersection of distinct shape points, such as helmet top or heel. In the reported study, the measurements are being verified by an independent GPS measurement mounted on top of the Jumper's helmet, synchronized to the timing of camera exposures. Preliminary estimations report an accuracy of +/-20 cm in 30 Hz imaging frequency within 40m trajectory. The system is ready for fully-automatic on-line application on ski-jumping sites that allow stereo camera views with an approximate base-distance ratio of 1:3 within the entire area of investigation.

  7. Stereo-vision-based mixed reality: assistance for teleoperation

    NASA Astrophysics Data System (ADS)

    Maman, Didier; Nashashibi, Fawzi; Fuchs, Philippe; Bordas, Jean Claude

    1998-12-01

    This paper deals with the use of mixed reality as a new assistance and training tool for performing teleoperation tasks in hostile environments. It describes the virtual reality techniques involved and tackles the problem of scene registration using a man-machine cooperative and multisensory vision system. During a maintenance operation, a telerobotic task needs a perfect knowledge of the remote scene in which the robot operates. Therefore, the system provides the operator with powerful sensorial feedback as well as appropriate tools to build and automatically update the geometric model of the perceived scene. This local model is the world over which the robot is working. It also serves for mission training or planning and permits observation from any viewpoint. We describe here a new interactive approach combining image analysis and mixed reality techniques for assisted 3D geometric and semantic modeling. We also tackle the problem of pose recovery and object tracking using a stereoscopic system mounted on a robot arm. The proposed model-based approach can be used for both real-time tracking and accurate static fitting of complex parametric curved objects. It therefore constitutes a unified tool for building and maintaining the local geometric model of the remote environment.

  8. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregular shape objects with different 3-dimensional (3D) appearances are difficult to be shaped into customized uniform pattern by current laser machining approaches. A laser galvanometric scanning system (LGS) could be a potential candidate since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating 3D-adjusted laser processing path by measuring the 3D geometry of those irregular shape objects. This paper proposed the stereo vision laser galvanometric scanning system (SLGS), which takes the advantages of both the stereo vision solution and conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using plastic thin film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  9. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    PubMed Central

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm. PMID:22319408
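
    The pipeline described (rectification, epipolar search, triangulation) ends with stereo triangulation. A minimal sketch of that last step for a rectified pair, assuming idealized pinhole cameras; the function and parameter names (f, b, cx, cy) are illustrative, not the paper's:

```python
import numpy as np

def triangulate_rectified(xl, xr, y, f, b, cx, cy):
    """Recover a 3D point from a rectified stereo pair.

    xl, xr : pixel column of the target in the left/right image
    y      : pixel row (identical in both images after rectification)
    f      : focal length in pixels; b : baseline in metres
    cx, cy : principal point of the left camera
    """
    d = xl - xr                      # disparity along the epipolar line
    if d <= 0:
        raise ValueError("disparity must be positive")
    Z = f * b / d                    # depth from similar triangles
    X = (xl - cx) * Z / f            # back-project through the left camera
    Y = (y - cy) * Z / f
    return np.array([X, Y, Z])

# A target 2 m away with a 0.1 m baseline and f = 800 px gives 40 px disparity:
p = triangulate_rectified(xl=440.0, xr=400.0, y=300.0, f=800.0, b=0.1,
                          cx=400.0, cy=300.0)
```

    In the system above, the SAD matcher would supply xl and xr for the detected circle target at each frame.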

  10. Study on clear stereo image pair acquisition method for small objects with big vertical size in SLM vision system.

    PubMed

    Wang, Yuezong; Jin, Yan; Wang, Lika; Geng, Benliang

    2016-05-01

    Microscopic vision system with stereo light microscope (SLM) has been applied to surface profile measurement. If the vertical size of a small object exceeds the range of depth, its images will contain clear and fuzzy image regions. Hence, in order to obtain clear stereo images, we propose a microscopic sequence image fusion method which is suitable for SLM vision system. First, a solution to capture and align image sequence is designed, which outputs an aligning stereo images. Second, we decompose stereo image sequence by wavelet analysis theory, and obtain a series of high and low frequency coefficients with different resolutions. Then fused stereo images are output based on the high and low frequency coefficient fusion rules proposed in this article. The results show that Δw1 (Δw2 ) and ΔZ of stereo images in a sequence have linear relationship. Hence, a procedure for image alignment is necessary before image fusion. In contrast with other image fusion methods, our method can output clear fused stereo images with better performance, which is suitable for SLM vision system, and very helpful for avoiding image fuzzy caused by big vertical size of small objects.

  11. Development of a stereo vision measurement system for a 3D three-axial pneumatic parallel mechanism robot arm.

    PubMed

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.

  12. 3D Vision on Mars: Stereo processing and visualizations for NASA and ESA rover missions

    NASA Astrophysics Data System (ADS)

    Huber, Ben

    2016-07-01

    Three dimensional (3D) vision processing is an essential component of planetary rover mission planning and scientific data analysis. Standard ground vision processing products are digital terrain maps, panoramas, and virtual views of the environment. Such processing is currently developed for the PanCam instrument of ESA's ExoMars Rover mission by the PanCam 3D Vision Team under JOANNEUM RESEARCH coordination. Camera calibration, quality estimation of the expected results, and the interfaces to other mission elements such as operations planning, the rover navigation system, and global Mars mapping are a specific focus of the current work. The main goals of the 3D Vision team in this context are: instrument design support and calibration processing; development of 3D vision functionality; development of a 3D visualization tool for scientific data analysis; 3D reconstructions from stereo image data during the mission; and support for 3D scientific exploitation to characterize the overall landscape geomorphology, processes, and the nature of the geologic record using the reconstructed 3D models. The developed processing framework PRoViP establishes an extensible framework for 3D vision processing in planetary robotic missions. Examples of processing products and capabilities are: digital terrain models, ortho images, 3D meshes, and occlusion, solar illumination, slope, roughness, and hazard maps. Another important processing capability is the fusion of rover and orbiter based images with the support of multiple missions and sensors (e.g. MSL Mastcam stereo processing). For 3D visualization a tool called PRo3D has been developed to analyze and directly interpret digital outcrop models. Stereo image products derived from Mars rover data can be rendered in PRo3D, enabling the user to zoom, rotate and translate the generated 3D outcrop models. Interpretations can be digitized directly onto the 3D surface, and simple measurements of the outcrop and sedimentary features

  13. On-site calibration method for outdoor binocular stereo vision sensors

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Yin, Yang; Wu, Qun; Li, Xiaojing; Zhang, Guangjun

    2016-11-01

    Using existing calibration methods for binocular stereo vision sensors (BSVS), it is very difficult to extract target characteristic points in outdoor environments under complex light conditions. To solve this problem, an on-site calibration method for BSVS based on a double parallel cylindrical target and a line laser projector is proposed in this paper. The intrinsic parameters of the two cameras are calibrated offline. Laser strips on the double parallel cylindrical target are mediated to calibrate the configuration parameters of the BSVS. The proposed method only requires images of laser strips on the target and is suitable for the calibration of BSVS in outdoor environments. The effectiveness of the proposed method is validated through physical experiments.

  14. On the use of orientation filters for 3D reconstruction in event-driven stereo vision

    PubMed Central

    Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-micro second temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
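
    Extracting edge orientation with a small Gabor filter bank, as described above, can be sketched as follows; the kernel size, scale, and wavelength parameters are illustrative choices, not those of the paper:

```python
import numpy as np

def gabor_kernel(theta, size=21, sigma=3.0, wavelength=6.0):
    """Real Gabor filter tuned to structure oscillating along direction theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def dominant_orientation(patch, n_orient=4):
    """Index of the orientation whose Gabor filter responds most strongly."""
    thetas = [k * np.pi / n_orient for k in range(n_orient)]
    responses = [abs(np.sum(gabor_kernel(t, size=patch.shape[0]) * patch))
                 for t in thetas]
    return int(np.argmax(responses)), thetas

# Vertical stripes (intensity varies along x) should fire the theta = 0 filter:
x = np.mgrid[0:21, 0:21][1]
patch = np.cos(2 * np.pi * x / 6.0)
best, thetas = dominant_orientation(patch)
```

    In the event-driven setting, such orientation labels attached to each event become an extra constraint that candidate left/right event pairs must agree on before they are matched.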

  15. Relative stereo 3-D vision sensor and its application for nursery plant transplanting

    NASA Astrophysics Data System (ADS)

    Hata, Seiji; Hayashi, Junichiro; Takahashi, Satoru; Hojo, Hirotaka

    2007-10-01

    Clone nursery plant production is one of the important applications of bio-technology. Most processes in bio-production are highly automated, but the transplanting of small nursery plants cannot be automated because the shapes of the small plants are not uniform. In this research, a transplanting robot system for clone nursery plant production is under development. A 3-D vision system using the relative stereo method detects the shapes and positions of small nursery plants through transparent vessels. A force-controlled robot picks up the plants and transplants them into vessels with artificial soil.

  16. Computed tomography as ground truth for stereo vision measurements of skin.

    PubMed

    Vanberlo, Amy M; Campbell, Aaron R; Ellis, Randy E

    2011-01-01

    Although dysesthesia is a common surgical complication, there is no accepted method for quantitatively tracking its progression. To address this, two types of computer vision technologies were tested in a total of four configurations. Surface regions on plastic models of limbs were delineated with colored tape, imaged, and compared with computed tomography scans. The most accurate system used visually projected texture captured by a binocular stereo camera, capable of measuring areas to within 3.4% of the ground-truth areas. This simple, inexpensive technology shows promise for postoperative monitoring of dysesthesia surrounding surgical scars.

  17. On the use of orientation filters for 3D reconstruction in event-driven stereo vision.

    PubMed

    Camuñas-Mesa, Luis A; Serrano-Gotarredona, Teresa; Ieng, Sio H; Benosman, Ryad B; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-micro second temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.

  18. Characterization of Stereo Vision Performance for Roving at the Lunar Poles

    NASA Technical Reports Server (NTRS)

    Wong, Uland; Nefian, Ara; Edwards, Larry; Furlong, Michael; Bouyssounouse, Xavier; To, Vinh; Deans, Matthew; Cannon, Howard; Fong, Terry

    2016-01-01

    Surface rover operations at the polar regions of airless bodies, particularly the Moon, are of particular interest to future NASA science missions such as Resource Prospector (RP). Polar optical conditions present challenges to conventional imaging techniques, with repercussions to driving, safeguarding and science. High dynamic range, long cast shadows, opposition and white out conditions are all significant factors in appearance. RP is currently undertaking an effort to characterize stereo vision performance in polar conditions through physical laboratory experimentation with regolith simulants, obstacle distributions and oblique lighting.

  19. Stereo and regioselectivity in ''Activated'' tritium reactions

    SciTech Connect

    Ehrenkaufer, R.L.E.; Hembree, W.C.; Wolf, A.P.

    1988-01-01

    To investigate the stereo and positional selectivity of the microwave discharge activation (MDA) method, the tritium labeling of several amino acids was undertaken. The labeling of L-valine and the diastereomeric pair L-isoleucine and L-alloisoleucine showed less than statistical labeling at the α-amino C-H position, mostly with retention of configuration. Labeling predominated at the single β C-H tertiary (methine) position. The labeling of L-valine and L-proline with and without positive charge on the α-amino group resulted in large increases in specific activity (greater than 10-fold) when positive charge was removed by labeling them as their sodium carboxylate salts. Tritium NMR of L-proline labeled both as its zwitterion and as its sodium salt also showed large differences in the tritium distribution within the molecule. The distribution preferences in each of the charge states are suggestive of labeling by an electrophilic-like tritium species. 16 refs., 5 tabs.

  20. Application of stereo vision to three-dimensional deformation analyses in fracture experiments

    SciTech Connect

    Luo, P.F. . Dept. of Mechanical Engineering); Chao, Y.J.; Sutton, M.A. . Dept. of Mechanical Engineering)

    1994-03-01

    Based on a pinhole camera model, camera model equations that account for the radial lens distortion are used to map three-dimensional (3-D) world coordinates to two-dimensional (2-D) computer image coordinates. Using two cameras to form a stereo vision system, the 3-D information can be obtained. It is demonstrated that such stereo imaging systems can be used to measure the 3-D displacement field around the crack tip of a fracture specimen. To compare with the available 2-D theory of fracture mechanics, the measured displacement fields expressed in the world coordinates are converted, through coordinate transformations, to the displacement fields expressed in specimen crack tip coordinates. By using a smoothing technique, the in-plane displacement components are smoothed and the total strains are obtained. Rigid body motion is eliminated from the smoothed in-plane displacement components and unsmoothed out-of-plane displacement. Compared with the theoretical elastic-plastic field at a crack tip, the results appear to be consistent with expected trends, which indicates that the stereo imaging system is a viable tool for the 3-D deformation analysis of fracture specimens.
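
    The camera model sketched above (pinhole projection plus radial lens distortion) maps camera-frame 3-D coordinates to image coordinates roughly as follows; the single-term distortion and all parameter names are illustrative assumptions:

```python
import numpy as np

def project(point_cam, f, cx, cy, k1):
    """Map a 3D point in camera coordinates to pixel coordinates,
    using a pinhole model with one radial distortion term k1."""
    X, Y, Z = point_cam
    x, y = X / Z, Y / Z                   # normalized image coordinates
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2                 # radial distortion factor
    u = f * scale * x + cx
    v = f * scale * y + cy
    return u, v

# A point on the optical axis lands exactly on the principal point,
# regardless of the distortion coefficient:
u, v = project((0.0, 0.0, 1.0), f=1000.0, cx=320.0, cy=240.0, k1=-0.2)
```

    Stereo measurement then amounts to inverting two such mappings jointly, which is why calibrating f, (cx, cy), and k1 for each camera precedes the displacement analysis.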

  1. Accuracy Evaluation of Stereo Vision Aided Inertial Navigation for Indoor Environments

    NASA Astrophysics Data System (ADS)

    Griessbach, D. G.; Baumbach, D. B.; Boerner, A. B.; Zuev, S. Z.

    2013-11-01

    Accurate knowledge of position and orientation is a prerequisite for many applications regarding unmanned navigation, mapping, or environmental modelling. GPS-aided inertial navigation is the preferred solution for outdoor applications. Nevertheless, a similar solution for navigation tasks in difficult environments with erroneous or no GPS data is needed. Therefore, a stereo vision aided inertial navigation system is presented which is capable of providing real-time local navigation for indoor applications. A method is described to reconstruct the ego motion of a stereo camera system aided by inertial data. This, in turn, is used to constrain the inertial sensor drift. The optical information is derived from natural landmarks, extracted and tracked over consecutive stereo image pairs. Using inertial data for feature tracking effectively reduces computational costs and at the same time increases the reliability due to constrained search areas. Mismatched features, e.g. at repetitive structures typical for indoor environments, are avoided. An Integrated Positioning System (IPS) was deployed and tested on an indoor navigation task. IPS was evaluated for accuracy, robustness, and repeatability in a common office environment. In combination with a dense disparity map, derived from the navigation cameras, a high density point cloud is generated to show the capability of the navigation algorithm.
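
    The inertially constrained search described above boils down to predicting where a tracked landmark should reappear in the next frame. A hedged sketch of that prediction step, with hypothetical names and an assumed simple pinhole camera (not the IPS implementation):

```python
import numpy as np

def predicted_search_center(p_cam, R_delta, t_delta, f, cx, cy):
    """Predict the pixel where a tracked landmark reappears in the next frame.

    p_cam            : landmark position in the previous camera frame
    R_delta, t_delta : inter-frame camera motion integrated from IMU data
    Searching only a small window around (u, v) cuts matching cost and
    rejects look-alike features elsewhere in the image.
    """
    p_next = R_delta @ p_cam + t_delta     # landmark in the new camera frame
    u = f * p_next[0] / p_next[2] + cx
    v = f * p_next[1] / p_next[2] + cy
    return u, v

# Pure sideways translation of 0.2 m shifts a landmark 4 m ahead by 25 px:
u, v = predicted_search_center(np.array([0.0, 0.0, 4.0]),
                               np.eye(3), np.array([0.2, 0.0, 0.0]),
                               f=500.0, cx=320.0, cy=240.0)
```
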

  2. Extrinsic parameter calibration of stereo vision sensors using spot laser projector.

    PubMed

    Liu, Zhen; Yin, Yang; Liu, Shaopeng; Chen, Xu

    2016-09-01

    The on-site calibration of stereo vision sensors plays an important role in the measurement field. Image coordinate extraction of feature points of existing targets is difficult under complex light conditions in outdoor environments, such as strong light and backlight. This paper proposes an on-site calibration method for stereo vision sensors based on a spot laser projector for solving the above-mentioned problem. The proposed method is used to mediate the laser spots on the parallel planes for the purpose of calibrating the coordinate transformation matrix between two cameras. The optimal solution of a coordinate transformation matrix is then solved by nonlinear optimization. Simulation experiments and physical experiments are conducted to validate the performance of the proposed method. Under the condition that the field of view is approximately 400 mm × 300 mm, the proposed method can reach a calibration accuracy of 0.02 mm. This accuracy value is comparable to that of the method using a planar target.
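
    The coordinate transformation matrix between the two cameras can be initialized in closed form from corresponding laser-spot coordinates before the nonlinear refinement the abstract mentions. The classic SVD (Kabsch) alignment below is a standard sketch of that initialization, not the paper's exact procedure:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t such that Q ~= R @ P + t, with P, Q of shape (3, N).

    Closed-form SVD solution; a nonlinear optimizer would refine this
    estimate against the reprojection error.
    """
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard vs. reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Spots measured by camera 1 (P) and the same spots measured by camera 2 (Q):
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 10))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.5], [-0.2], [1.0]])
Q = R_true @ P + t_true
R, t = rigid_transform(P, Q)
```
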

  3. Performance Evaluation and Software Design for EVA Robotic Assistant Stereo Vision Heads

    NASA Technical Reports Server (NTRS)

    DiPaolo, Daniel

    2003-01-01

    The purpose of this project was to aid the EVA Robotic Assistant project by evaluating and designing the necessary interfaces for two stereo vision heads - the TracLabs Biclops pan-tilt-verge head, and the Helpmate Zebra pan-tilt-verge head. The first half of the project consisted of designing the necessary software interface so that the other modules of the EVA Robotic Assistant had proper access to all of the functionalities offered by each of the stereovision heads. This half took most of the project time, due to a lack of ready-made CORBA drivers for either of the heads. Once this was overcome, the evaluation stage of the project began. The second half of the project was to take these interfaces and to evaluate each of the stereo vision heads in terms of usefulness to the project. In the key project areas such as stability and reliability, the Zebra pan-tilt-verge head came out on top. However, the Biclops did have many more advantages over the Zebra, such as: lower power consumption, faster communications, and a simpler, cleaner API. Overall, the Biclops pan-tilt-verge head outperformed the Zebra pan-tilt-verge head.

  4. A novel method of robot location using RFID and stereo vision

    NASA Astrophysics Data System (ADS)

    Chen, Diansheng; Zhang, Guanxin; Li, Zhen

    2012-04-01

    This paper proposed a new global localization method for mobile robots based on RFID (Radio Frequency Identification Devices) and stereo vision, which makes the robot obtain global coordinates with good accuracy when quickly adapting to an unfamiliar, new environment. The method uses RFID tags as artificial landmarks; the 3D coordinate of each tag under the global coordinate system is written in its IC memory and read by the robot through an RFID reader, while stereo vision measures the 3D coordinate of the tag under the robot coordinate system. Combined with the robot's attitude coordinate system transformation matrix from the pose measuring system, the translation from the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location under the global coordinate system. The average error of our method is 0.11 m in experiments conducted in a 7 m × 7 m lobby, a result much more accurate than that of other localization methods.
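
    The frame composition just described, combining the tag's global coordinates from its IC memory, its robot-frame coordinates from stereo vision, and the attitude rotation from the pose sensor, reduces to one line of algebra. A sketch with hypothetical names:

```python
import numpy as np

def locate_robot(tag_global, tag_robot, R_attitude):
    """Global robot position from one RFID landmark.

    tag_global : tag position in the global frame (read from the tag's IC memory)
    tag_robot  : tag position in the robot frame (measured by stereo vision)
    R_attitude : rotation from robot frame to global frame (pose measuring system)
    """
    # tag_global = R_attitude @ tag_robot + robot_global, solved for the robot:
    return np.asarray(tag_global) - R_attitude @ np.asarray(tag_robot)

R = np.eye(3)                       # level robot aligned with the global axes
pos = locate_robot([5.0, 3.0, 0.5], [2.0, 1.0, 0.5], R)
```
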

  5. Stereo-vision-based terrain mapping for off-road autonomous navigation

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  6. Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-01-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  7. Stereo vision-based obstacle avoidance for micro air vehicles using an egocylindrical image space representation

    NASA Astrophysics Data System (ADS)

    Brockers, R.; Fragoso, A.; Matthies, L.

    2016-05-01

    Micro air vehicles which operate autonomously at low altitude in cluttered environments require a method for onboard obstacle avoidance for safe operation. Previous methods deploy either purely reactive approaches, mapping low-level visual features directly to actuator inputs to maneuver the vehicle around the obstacle, or deliberative methods that use on-board 3-D sensors to create a 3-D, voxel-based world model, which is then used to generate collision free 3-D trajectories. In this paper, we use forward-looking stereo vision with a large horizontal and vertical field of view and project range from stereo into a novel robot-centered, cylindrical, inverse range map we call an egocylinder. With this implementation we reduce the complexity of our world representation from a 3D map to a 2.5D image-space representation, which supports very efficient motion planning and collision checking, and allows configuration-space expansion to be implemented as an image-processing function directly on the egocylinder. Deploying a fast reactive motion planner directly on the configuration-space expanded egocylinder image, we demonstrate the effectiveness of this new approach experimentally in an indoor environment.
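
    Projecting stereo range points into a robot-centered cylindrical inverse-range image of the kind the abstract describes can be sketched as below. The image dimensions, field of view, and nearest-obstacle cell rule are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def to_egocylinder(points, width=360, height=90, v_fov=np.pi / 2):
    """Project robot-frame 3D points into a cylindrical inverse-range image.

    Columns index azimuth over 360 degrees, rows index elevation over v_fov;
    each cell keeps the largest inverse range, i.e. the nearest obstacle.
    """
    img = np.zeros((height, width))            # 0 = free or unknown
    for x, y, z in points:
        rng = np.hypot(np.hypot(x, y), z)      # Euclidean range
        az = np.arctan2(y, x)                  # azimuth in [-pi, pi)
        el = np.arctan2(z, np.hypot(x, y))     # elevation above the horizon
        col = int((az + np.pi) / (2 * np.pi) * width) % width
        row = int((el + v_fov / 2) / v_fov * (height - 1))
        if 0 <= row < height:
            img[row, col] = max(img[row, col], 1.0 / rng)
    return img

# Two obstacles straight behind the map origin along -x fall in the same cell;
# the nearer one (larger inverse range) wins:
img = to_egocylinder([(2.0, 0.0, 0.0), (4.0, 0.0, 0.0)])
```

    Because obstacles live in a 2.5D image, the configuration-space expansion the paper mentions becomes a morphological dilation of this inverse-range image.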

  8. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to several sources of uncertainty that propagate to the final result. Most current error-analysis methods are based on an ideal intersection model that computes the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Moreover, only a few error sources, such as pixel error or camera position, are typically taken into account in the analysis. In this paper we present a straightforward and practical method for estimating the location error that accounts for most sources of error. We sum up and simplify all the input errors into five parameters by a rotation transformation, and then use the fast midpoint-method algorithm to derive the mathematical relationship between the target point and these parameters. This yields the expectation and covariance matrix of the 3D point location, which together define the uncertainty region. We then return to the error propagation of the primitive input errors in the stereo system, covering the whole analysis chain from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify its performance.
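The midpoint method referred to above can be sketched with the generic closest-point formulation between two back-projected viewing rays (variable names are ours, not the paper's):

```python
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def add(u, v): return tuple(a + b for a, b in zip(u, v))
def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def scale(u, k): return tuple(a * k for a in u)

def midpoint_triangulate(p1, d1, p2, d2):
    """Midpoint method: the 3D point is taken as the midpoint of the
    shortest segment between the two viewing rays p1 + s*d1 and
    p2 + t*d2 (camera centres p1, p2; ray directions d1, d2)."""
    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b              # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, s))         # closest point on ray 1
    q2 = add(p2, scale(d2, t))         # closest point on ray 2
    return scale(add(q1, q2), 0.5)
```

Perturbing the ray directions in this function is one simple way to see how input errors propagate to the triangulated point.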

  9. SVMT: a MATLAB toolbox for stereo-vision motion tracking of motor reactivity.

    PubMed

    Vousdoukas, M I; Perakakis, P; Idrissi, S; Vila, J

    2012-10-01

    This article presents a MATLAB-based stereo-vision motion tracking system (SVMT) for the detection of human motor reactivity elicited by sensory stimulation. It is a low-cost, non-intrusive system supported by graphical user interface (GUI) software, and has been successfully tested and integrated with a broad array of physiological recording devices at the Human Physiology Laboratory of the University of Granada. The SVMT GUI software handles data in MATLAB and ASCII formats. Internal functions perform lens distortion correction, camera geometry definition, feature matching, as well as data clustering and filtering, to extract 3D motion paths of specific body areas. System validation showed geo-rectification errors below 0.5 mm, while feature matching and motion-path extraction were successfully validated against manual tracking, with RMS errors typically below 2% of the movement range. The application of the system in a psychophysiological experiment designed to elicit a startle motor response through intense and unexpected acoustic stimuli provided reliable data probing dynamic features of motor responses and habituation to repeated stimulus presentations. The stereo-geolocation and motion tracking performance of SVMT was successfully validated through comparisons with surface EMG measurements of eyeblink startle, which clearly demonstrate the ability of SVMT to track subtle body movements, such as those induced by intense acoustic stimuli. Finally, SVMT provides an efficient solution for the assessment of motor reactivity not only in controlled laboratory settings, but also in more open, ecological environments.

  10. Three-dimensional location of tomato based on binocular stereo vision for tomato harvesting robot

    NASA Astrophysics Data System (ADS)

    Xiang, Rong; Ying, Yibin; Jiang, Huanyu; Peng, Yongshi

    2010-10-01

    Accurate harvesting by a harvesting robot depends on accurate 3D location of the fruit. For methods based on binocular stereo vision, location precision degrades significantly when the distance between fruit and camera exceeds 0.8 m, which is a serious limitation. To improve the precision of depth measurement for ripe tomatoes, two stereo matching methods, centroid-based matching and area-based matching, were analyzed comparatively and their depth-measurement performance was compared. Experiments showed that the relationship between actual distance and measured distance was linear. Simple (unitary) linear regression models were therefore used to correct the depth measurements. After correction, the depth errors ranged from -28 mm to 25 mm for the centroid-based matching method and from -8 mm to 15 mm for the area-based matching method at distances of 0.6 m to 1.15 m. We conclude that computation cost can be decreased without sacrificing precision when the centroid disparity obtained by centroid-based matching is used to bound the disparity search range for area-based matching.
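This kind of linear depth correction is an ordinary simple-linear-regression fit; a minimal sketch (the numbers in the test are invented for illustration, not the paper's data):

```python
def fit_linear_correction(measured, actual):
    """Least-squares fit of actual ~= a*measured + b (simple linear
    regression), used to correct a systematic, distance-dependent
    depth error."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(actual) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, actual))
    sxx = sum((x - mx) ** 2 for x in measured)
    a = sxy / sxx
    b = my - a * mx
    return a, b

def correct_depth(depth, a, b):
    """Apply the fitted correction to a raw depth measurement."""
    return a * depth + b
```

The model is calibrated once against ground-truth distances and then applied to every subsequent measurement.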

  11. Error analysis in a stereo vision-based pedestrian detection sensor for collision avoidance applications.

    PubMed

    Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real-time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data for both the pedestrian and the host vehicle locations. The field test provided encouraging results and proved the validity of the proposed sensor for use in the automotive sector in applications such as autonomous pedestrian collision avoidance.
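For reference, the standard first-order model that this kind of quantization-error analysis builds on is the textbook pinhole relation (a generic formulation, not the paper's exact derivation):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo: Z = f*B/d, with focal length f in pixels,
    baseline B in metres, and disparity d in pixels."""
    return f_px * baseline_m / disparity_px

def depth_quantization_error(f_px, baseline_m, depth_m, disparity_step_px=1.0):
    """First-order depth uncertainty from disparity quantization:
    dZ ~= Z^2 * dd / (f * B), so the error grows quadratically
    with depth."""
    return depth_m ** 2 * disparity_step_px / (f_px * baseline_m)
```

The quadratic growth with depth is why focal length and baseline must be chosen against the application's maximum detection range.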

  12. Three-dimensional measurement of small mechanical parts under a complicated background based on stereo vision

    SciTech Connect

    Ren Zhiguo; Liao Jiarui; Cai Lilong

    2010-04-01

    We present an effective method for the accurate three-dimensional (3D) measurement of small industrial parts under a complicated noisy background, based on stereo vision. To effectively extract the nonlinear features of desired curves of the measured parts in the images, a strategy from coarse to fine extraction is employed, based on a virtual motion control system. By using the multiscale decomposition of gray images and virtual beam chains, the nonlinear features can be accurately extracted. By analyzing the generation of geometric errors, the refined feature points of the desired curves are extracted. Then the 3D structure of the measured parts can be accurately reconstructed and measured with least squares errors. Experimental results show that the presented method can accurately measure industrial parts that are represented by various line segments and curves.

  13. Three-dimensional structure measurement of diamond crowns based on stereo vision.

    PubMed

    Ren, Zhiguo; Cai, Lilong

    2009-11-01

    We present an effective method for reconstructing and measuring the three-dimensional (3D) structures of diamond crowns based on stereo vision. To reach high measurement accuracy, the influences of 3D measurement errors are analyzed in detail. Then, a method to accurately extract the linear features of diamond edges based on virtual motion control is described. Depending on the obtained linear features, the 3D structure of a diamond crown can be reconstructed with least squares error. The validity of the proposed method is verified by experiments. The results show that the proposed method can be used to measure the 3D structures of diamond crowns with satisfactory accuracy and efficiency, and it also can be used to extract linear features and measure other similar artificial objects that can be represented by line segments.

  14. Three-dimensional measurement of small mechanical parts under a complicated background based on stereo vision.

    PubMed

    Ren, Zhiguo; Liao, Jiarui; Cai, Lilong

    2010-04-01

    We present an effective method for the accurate three-dimensional (3D) measurement of small industrial parts under a complicated noisy background, based on stereo vision. To effectively extract the nonlinear features of desired curves of the measured parts in the images, a strategy from coarse to fine extraction is employed, based on a virtual motion control system. By using the multiscale decomposition of gray images and virtual beam chains, the nonlinear features can be accurately extracted. By analyzing the generation of geometric errors, the refined feature points of the desired curves are extracted. Then the 3D structure of the measured parts can be accurately reconstructed and measured with least squares errors. Experimental results show that the presented method can accurately measure industrial parts that are represented by various line segments and curves.

  15. Omnidirectional stereo vision sensor based on single camera and catoptric system.

    PubMed

    Zhou, Fuqiang; Chai, Xinghua; Chen, Xin; Song, Ya

    2016-09-01

    An omnidirectional stereo vision sensor based on a single camera and a catoptric system is proposed. As its crucial components, one camera and two pyramid mirrors are used for imaging. Omnidirectional measurement in different directions of the horizontal field can be performed by four pairs of virtual cameras, with perfect synchronization and improved compactness. Moreover, perspective projection invariance is preserved in the imaging process, which avoids the imaging distortion introduced by curved mirrors. In this paper, the structure model of the sensor was established and a sensor prototype was designed. The influences of the structural parameters on the field of view and the measurement accuracy were also discussed. In addition, real experiments and analyses were performed to evaluate the performance of the proposed sensor in the measurement application. The results proved the feasibility of the sensor and exhibited considerable accuracy in 3D coordinate reconstruction.

  16. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    PubMed Central

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-01-01

    Nowadays, intelligent systems applied to vehicles have developed very rapidly; their goal is not only to improve safety but also to make autonomous driving possible. Many of these intelligent systems make use of computer vision to perceive the environment and act accordingly. Being able to estimate the pose of the vision system is of great importance, because the matching between measurements of the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper; its main contribution with respect to the state of the art is the estimation of the pitch angle without being affected by the roll angle. The self-calibration method is validated by comparison with relevant camera pose estimation methods on a synthetic sequence, which allows the continuous error to be measured against a ground truth. This validation is enriched by experimental results of the method in real traffic environments. PMID:27649178

  17. Occupancy grid mapping in urban environments from a moving on-board stereo-vision system.

    PubMed

    Li, You; Ruichek, Yassine

    2014-06-13

    The occupancy grid map is a popular tool for representing the surrounding environment of mobile robots and intelligent vehicles. Its applications date back to the 1980s, when researchers used sonar or LiDAR to describe environments by occupancy grids. In the literature, however, research on vision-based occupancy grid mapping is scant. Furthermore, when moving in a real dynamic world, occupancy grid mapping must not only detect occupied areas, but also understand the dynamic environment. This paper addresses the issue by presenting a stereo-vision-based framework that creates a dynamic occupancy grid map, applied to an intelligent vehicle driving in an urban scenario. Besides representing the surroundings as occupancy grids, dynamic occupancy grid mapping provides the motion information of the grid cells. The proposed framework consists of two components. The first is motion estimation for the moving vehicle itself and for independent moving objects. The second is dynamic occupancy grid mapping, based on the estimated motion information and the dense disparity map. The main benefit of the proposed framework is the ability to map occupied areas and moving objects at the same time, which is very practical in real applications. The proposed method is evaluated using real data acquired by our intelligent vehicle platform "SeTCar" in urban environments.

  18. Occupancy Grid Mapping in Urban Environments from a Moving On-Board Stereo-Vision System

    PubMed Central

    Li, You; Ruichek, Yassine

    2014-01-01

    The occupancy grid map is a popular tool for representing the surrounding environment of mobile robots and intelligent vehicles. Its applications date back to the 1980s, when researchers used sonar or LiDAR to describe environments by occupancy grids. In the literature, however, research on vision-based occupancy grid mapping is scant. Furthermore, when moving in a real dynamic world, occupancy grid mapping must not only detect occupied areas, but also understand the dynamic environment. This paper addresses the issue by presenting a stereo-vision-based framework that creates a dynamic occupancy grid map, applied to an intelligent vehicle driving in an urban scenario. Besides representing the surroundings as occupancy grids, dynamic occupancy grid mapping provides the motion information of the grid cells. The proposed framework consists of two components. The first is motion estimation for the moving vehicle itself and for independent moving objects. The second is dynamic occupancy grid mapping, based on the estimated motion information and the dense disparity map. The main benefit of the proposed framework is the ability to map occupied areas and moving objects at the same time, which is very practical in real applications. The proposed method is evaluated using real data acquired by our intelligent vehicle platform “SeTCar” in urban environments. PMID:24932866
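The underlying per-cell update for the static part of such a map is the classic Bayesian log-odds scheme; a generic sketch (the clamping bounds and measurement probabilities are our assumptions, not the paper's values):

```python
import math

def logodds(p):
    """Log-odds form of a probability, log(p / (1 - p))."""
    return math.log(p / (1.0 - p))

def update_cell(l_prev, p_meas, l_min=-4.0, l_max=4.0):
    """Bayesian occupancy update: add the measurement's log-odds to
    the cell's running total, clamped to avoid saturation."""
    l = l_prev + logodds(p_meas)
    return min(max(l, l_min), l_max)

def probability(l):
    """Convert a cell's log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```

Repeated "occupied" observations (p_meas > 0.5) drive a cell toward 1, "free" observations (p_meas < 0.5) drive it toward 0, and the clamp keeps stale evidence revisable in dynamic scenes.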

  19. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    PubMed

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  20. Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks

    NASA Astrophysics Data System (ADS)

    Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min

    2015-10-01

    Vehicle positioning has been the subject of extensive research regarding driving safety and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings, and it is error prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the positions of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information, such as speed, lane changes, and driver's condition, through optical wireless links with neighboring vehicles. Thus, the position of a target vehicle too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation in CVP. The proposed neural-network-based technique can estimate the target vehicle position from only two image points of the target vehicle using stereo vision; for this, we use the rear LEDs on target vehicles as image points. Simulation results show that our neural-network-based method achieves better accuracy than the computer-vision method.

  1. Visualization of the 3-D topography of the optic nerve head through a passive stereo vision model

    NASA Astrophysics Data System (ADS)

    Ramirez, Juan M.; Mitra, Sunanda; Morales, Jose

    1999-01-01

    This paper describes a system for surface recovery and visualization of the 3D topography of the optic nerve head, to support early diagnosis and follow-up of glaucoma. In stereo vision, depth information is obtained by triangulation of corresponding points in a pair of stereo images. Here, the use of the cepstrum transformation as a disparity measurement technique between corresponding windows of different block sizes is described. This measurement process is embedded within a coarse-to-fine depth-from-stereo algorithm, providing an initial range map with depth information encoded as gray levels. The sparse depth data are then processed by cubic B-spline interpolation to obtain a smoother representation. The methodology is being refined specifically for medical images in the clinical evaluation of eye diseases such as open-angle glaucoma, and is currently under testing for clinical evaluation and analysis of reproducibility and accuracy.

  2. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel, high-precision microscopic vision modeling method that can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope (SLM). The method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model, and a residual compensation model. First, a method for image distortion correction is proposed. The image data it requires come from stereo images of a calibration sample; the geometric features of the image distortions can be predicted from the shape deformation of lines constructed by grid points in the stereo images, and linear and polynomial fitting are applied to correct the distortions. Second, the shape deformation features of the disparity distribution are discussed, and a disparity distortion correction based on polynomial fitting is proposed. Third, a microscopic vision model is derived, consisting of two parts, i.e., the initial vision model and the residual compensation model. The initial vision model is derived by analyzing the direct mapping relationship between object and image points; the residual compensation model is derived from a residual analysis of the initial model. The results show that, with maximum reconstruction distances of 4.1 mm in the X direction, 2.9 mm in Y, and 2.25 mm in Z, our model achieves a precision of 0.01 mm in X and Y and 0.015 mm in Z. Comparison with the traditional pinhole camera model shows that the two models achieve similar reconstruction precision for X coordinates, whereas the pinhole model is less precise than ours for Y and Z coordinates. The proposed method is very helpful for micro-gripping systems based on SLM microscopic vision.

  3. Accuracy aspects of stereo side-looking radar. [analysis of its visual perception and binocular vision

    NASA Technical Reports Server (NTRS)

    Leberl, F. W.

    1979-01-01

    The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.

  4. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    NASA Technical Reports Server (NTRS)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find the correspondences of points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator together with two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance (GAV) Operator. In the original Moravec algorithm, the Interest Operator was used to exclude featureless areas and simple edges oriented in the four main directions (vertical, horizontal, and the two diagonals), but it incorrectly detected points on edges lying off those four directions. The new algorithm instead uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on edges, to exclude simple edges and keep interesting points. This modification speeds up the extraction process by approximately 5 times. The GAV operator, which calculates the variance of the gradient angle in a window around the point under consideration, is then applied to the interesting points to exclude redundant ones and keep the actual dominant points. The matching phase is performed after the dominant points have been extracted in both stereo images. Matching starts with dominant points in the left image and does a local search for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and by the maximum disparity of the application environment. If exactly one dominant point in the right image lies in the search area, it is taken as the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area.
The correlation is used as
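The GAV idea can be sketched as follows: the variance of Prewitt gradient angles in a window is near zero on a straight edge and large at a corner-like dominant point. The window size and the plain (non-circular) variance are our simplifying assumptions:

```python
import math

PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PREWITT_Y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def conv3(img, r, c, kernel):
    """3x3 convolution of a list-of-lists image at pixel (r, c)."""
    return sum(img[r + i - 1][c + j - 1] * kernel[i][j]
               for i in range(3) for j in range(3))

def gradient_angle_variance(img, r, c, half=2):
    """Sample variance of the Prewitt gradient angle over a
    (2*half+1)^2 window around (r, c), skipping flat pixels."""
    angles = []
    for i in range(r - half, r + half + 1):
        for j in range(c - half, c + half + 1):
            gx = conv3(img, i, j, PREWITT_X)
            gy = conv3(img, i, j, PREWITT_Y)
            if gx or gy:                       # ignore zero-gradient pixels
                angles.append(math.atan2(gy, gx))
    if len(angles) < 2:
        return 0.0
    mean = sum(angles) / len(angles)
    return sum((a - mean) ** 2 for a in angles) / (len(angles) - 1)
```

Thresholding this variance separates simple edges (one dominant gradient direction) from genuine dominant points (several directions meeting).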

  5. Error analysis and system implementation for structured light stereo vision 3D geometric detection in large scale condition

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Xuping; Wang, Jiaqi; Zhang, Yixin; Wang, Shun; Zhu, Fan

    2012-11-01

    Stereo-vision-based 3D metrology is an effective approach to 3D geometric measurement of relatively large-scale objects. In this paper, we present a dedicated image capture system, which uses CMOS sensors with embedded LVDS interfaces and a CAN bus to ensure synchronous triggering and exposure. We performed an error analysis for structured light stereo vision measurement under large-scale conditions, on the basis of which we built and tested the system prototype both indoors and in the field. The results show that the system is well suited for large-scale metrology applications.

  6. Automated stereo vision instrument tracking for intraoperative OCT guided anterior segment ophthalmic surgical maneuvers

    PubMed Central

    El-Haddad, Mohamed T.; Tao, Yuankai K.

    2015-01-01

    Microscope-integrated intraoperative OCT (iOCT) enables imaging of tissue cross-sections concurrent with ophthalmic surgical maneuvers. However, limited acquisition rates and complex three-dimensional visualization methods preclude real-time surgical guidance using iOCT. We present an automated stereo vision surgical instrument tracking system integrated with a prototype iOCT system. We demonstrate, for the first time, automatically tracked video-rate cross-sectional iOCT imaging of instrument-tissue interactions during ophthalmic surgical maneuvers. The iOCT scan-field is automatically centered on the surgical instrument tip, ensuring continuous visualization of instrument positions relative to the underlying tissue over a 2500 mm² field with sub-millimeter positional resolution and <1° angular resolution. Automated instrument tracking has the added advantage of providing feedback on surgical dynamics during precision tissue manipulations because it makes it possible to use only two cross-sectional iOCT images, aligned parallel and perpendicular to the surgical instrument, which also reduces both system complexity and data throughput requirements. Our current implementation is suitable for anterior segment surgery. Further system modifications are proposed for applications in posterior segment surgery. Finally, the instrument tracking system described is modular and system agnostic, making it compatible with different commercial and research OCT and surgical microscopy systems and surgical instruments. These advances address critical barriers to the development of iOCT-guided surgical maneuvers and may also be translatable to applications in microsurgery outside of ophthalmology. PMID:26309764

  7. Automated stereo vision instrument tracking for intraoperative OCT guided anterior segment ophthalmic surgical maneuvers.

    PubMed

    El-Haddad, Mohamed T; Tao, Yuankai K

    2015-08-01

    Microscope-integrated intraoperative OCT (iOCT) enables imaging of tissue cross-sections concurrent with ophthalmic surgical maneuvers. However, limited acquisition rates and complex three-dimensional visualization methods preclude real-time surgical guidance using iOCT. We present an automated stereo vision surgical instrument tracking system integrated with a prototype iOCT system. We demonstrate, for the first time, automatically tracked video-rate cross-sectional iOCT imaging of instrument-tissue interactions during ophthalmic surgical maneuvers. The iOCT scan-field is automatically centered on the surgical instrument tip, ensuring continuous visualization of instrument positions relative to the underlying tissue over a 2500 mm² field with sub-millimeter positional resolution and <1° angular resolution. Automated instrument tracking has the added advantage of providing feedback on surgical dynamics during precision tissue manipulations because it makes it possible to use only two cross-sectional iOCT images, aligned parallel and perpendicular to the surgical instrument, which also reduces both system complexity and data throughput requirements. Our current implementation is suitable for anterior segment surgery. Further system modifications are proposed for applications in posterior segment surgery. Finally, the instrument tracking system described is modular and system agnostic, making it compatible with different commercial and research OCT and surgical microscopy systems and surgical instruments. These advances address critical barriers to the development of iOCT-guided surgical maneuvers and may also be translatable to applications in microsurgery outside of ophthalmology.

  8. Accurate calibration of a stereo-vision system in image-guided radiotherapy

    SciTech Connect

    Liu Dezhi; Li Shidong

    2006-11-15

    Image-guided radiotherapy using a three-dimensional (3D) camera as the on-board surface imaging system requires precise and accurate registration of the 3D surface images in the treatment machine coordinate system. Two simple calibration methods, an analytical solution (three-point matching) and a least-squares estimation method (multipoint registration), were introduced to correlate the stereo-vision surface imaging frame with the machine coordinate system. Both calibrations utilize 3D surface images of a calibration template placed on top of the treatment couch. Image transformation parameters were derived from corresponding 3D marked points on the surface images and their given coordinates in the treatment room coordinate system. Our experimental results demonstrated that both methods provided the desired calibration accuracy of 0.5 mm. The multipoint registration method is more robust, particularly for noisy 3D surface images. Both calibration methods have been used as weekly QA tools for a 3D image-guided radiotherapy system.
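Least-squares multipoint registration can be illustrated in the plane with a closed-form 2D rigid fit; this is a simplified analogue of the 3D problem (a full 3D solution would typically use an SVD-based method such as Kabsch/Procrustes), with variable names that are ours:

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rigid (rotation + translation) registration of
    matched 2D point sets: centre both sets, recover the angle from
    accumulated cross/dot products, then solve for the translation."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(q[0] for q in dst) / n; cy_d = sum(q[1] for q in dst) / n
    s_cross = s_dot = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - cx_s, y - cy_s, u - cx_d, v - cy_d
        s_cross += x * v - y * u
        s_dot += x * u + y * v
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)

def apply_rigid(p, theta, t):
    """Apply the fitted transform to a point."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])
```

With more than the minimum number of marked points, the averaging in the fit suppresses per-point noise, which is why multipoint registration is the more robust of the two approaches.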

  9. ANN implementation of stereo vision using a multi-layer feedback architecture

    SciTech Connect

    Mousavi, M.S.; Schalkoff, R.J.

    1994-08-01

    An Artificial Neural Network (ANN), consisting of three interacting neural modules, is developed for stereo vision. The first module locates sharp intensity changes in each of the images. The edge detection process is basically a bottom-up, one-to-one input-output mapping process with a network structure which is time-invariant. In the second module, a multilayered connectionist network is used to extract the features or primitives for disparity analysis (matching). A similarity measure is defined and computed for each pair of primitive matches and is passed to the third module. The third module solves the difficult correspondence problem by mapping it into a constraint satisfaction problem. Intra- and inter-scanline constraints are used in order to restrict possible feature matches. The inter-scanline constraints are implemented via interconnections of a three-dimensional neural network. The overall process is iterative. At the end of each network iteration, the output of the third constraint satisfaction module feeds back updated information on matching pairs, as well as their corresponding locations in the left and right images, to the input of the second module. This iterative process continues until the output of the third module converges to a stable state. Once the matching process is completed, the disparity can be calculated, and camera calibration parameters can be used to find the three-dimensional location of object points. Results using this computational architecture are shown. 26 refs.

  10. Structural Parameters Calibration for Binocular Stereo Vision Sensors Using a Double-Sphere Target.

    PubMed

    Wei, Zhenzhong; Zhao, Kai

    2016-07-12

    Structural parameter calibration for the binocular stereo vision sensor (BSVS) is an important guarantee for high-precision measurements. We propose a method to calibrate the structural parameters of BSVS based on a double-sphere target. The target, consisting of two identical spheres with a known fixed distance, is freely placed in different positions and orientations. Any three non-collinear sphere centres determine a spatial plane whose normal vector under the two camera-coordinate-frames is obtained by means of an intermediate parallel plane calculated from the image points of the sphere centres and the depth-scale factors. Hence, the rotation matrix R is solved. The translation vector T is determined using a linear method derived from the epipolar geometry. Furthermore, R and T are refined by nonlinear optimization. We also provide theoretical analysis of the error propagation related to the positional deviation of the sphere image and an approach to mitigate its effect. Computer simulations are conducted to test the performance of the proposed method with respect to the image noise level, target placement times and the depth-scale factor. Experimental results on real data show that the relative measurement accuracy is better than 0.9‰, at a working distance of 800 mm and with a field of view of 250 × 200 mm².
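The core geometric construction — the spatial plane through three non-collinear sphere centres — comes down to a cross product. This generic sketch (not the authors' code) computes the plane's unit normal:

```python
import numpy as np

def plane_normal(c1, c2, c3):
    """Unit normal of the plane through three non-collinear sphere centres."""
    n = np.cross(np.asarray(c2) - np.asarray(c1), np.asarray(c3) - np.asarray(c1))
    return n / np.linalg.norm(n)

# Three centres lying in the z = 4 plane -> normal along the z axis
n = plane_normal([0, 0, 4], [1, 0, 4], [0, 1, 4])
print(n.tolist())  # [0.0, 0.0, 1.0]
```

Expressing the same normal in both camera frames, for several target placements, is what constrains the rotation matrix R between the cameras.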

  11. Image distortion correction for single-lens stereo vision system employing a biprism

    NASA Astrophysics Data System (ADS)

    Qian, Beibei; Lim, Kah Bin

    2016-07-01

    A single-lens stereo vision system employing a biprism placed in front of the camera will generate unusual distortion in the captured image. Different from the typical image distortions due to lenses, this distortion is mainly induced by the thick biprism and appears to be incompatible with existing lens distortion models. A fully constrained and model-free distortion correction method is proposed. It employs all the projective invariants of a planar checkerboard template as the correction constraints, including straight lines, cross-ratio, and convergence at vanishing point, along with the distortion-free reference point as an additional constraint from the system. The extracted sample points are corrected by minimizing the total cost function formed by all these constraints. With both sets of distorted and corrected points, and the intermediate points interpolated by a local transformation, the correction maps are determined. Thereafter, all the subsequent images could be distortion corrected by the correction maps. This method performs well on the distorted image data captured by the system and shows improvements in accuracy on the camera calibration and depth recovery compared with other correction methods.

  12. Comparative morphometry of facial surface models obtained from a stereo vision system in a healthy population

    NASA Astrophysics Data System (ADS)

    López, Leticia; Gastélum, Alfonso; Chan, Yuk Hin; Delmas, Patrice; Escorcia, Lilia; Márquez, Jorge

    2014-11-01

    Our goal is to obtain three-dimensional measurements of craniofacial morphology in a healthy population, using standard landmarks established by a physical-anthropology specialist and picked from computer reconstructions of the face of each subject. To do this, we designed a multi-stereo vision system that will be used to create a database of human face surfaces from a healthy population, for eventual applications in medicine, forensic sciences and anthropology. The acquisition process consists of obtaining depth map information from three points of view, each depth map obtained from a calibrated pair of cameras. The depth maps are used to build a complete, frontal, triangular-surface representation of the subject's face. The triangular surface is used to locate the landmarks, and the measurements are analyzed with a MATLAB script. The classification of the subjects was done with the aid of a specialist anthropologist who defines specific subject indices, according to the lengths, areas, ratios, etc., of the different structures and the relationships among facial features. We studied a healthy population, and the indices from this population will be used to obtain representative averages that later help with the study and classification of possible pathologies.

  13. Adaptive Kinematic Control of a Robotic Venipuncture Device Based on Stereo Vision, Ultrasound, and Force Guidance.

    PubMed

    Balter, Max L; Chen, Alvin I; Maguire, Timothy J; Yarmush, Martin L

    2017-02-01

    Robotic systems have slowly entered the realm of modern medicine; however, outside the operating room, medical robotics has yet to be translated to more routine interventions such as blood sampling or intravenous fluid delivery. In this paper, we present a medical robot that safely and rapidly cannulates peripheral blood vessels-a procedure commonly known as venipuncture. The device uses near-infrared and ultrasound imaging to scan and select suitable injection sites, and a 9-DOF robot to insert the needle into the center of the vessel based on image and force guidance. We first present the system design and visual servoing scheme of the latest generation robot, and then evaluate the performance of the device through workspace simulations and free-space positioning tests. Finally, we perform a series of motion tracking experiments using stereo vision, ultrasound, and force sensing to guide the position and orientation of the needle tip. Positioning experiments indicate sub-millimeter accuracy and repeatability over the operating workspace of the system, while tracking studies demonstrate real-time needle servoing in response to moving targets. Lastly, robotic phantom cannulations demonstrate the use of multiple system states to confirm that the needle has reached the center of the vessel.

  14. Structural Parameters Calibration for Binocular Stereo Vision Sensors Using a Double-Sphere Target

    PubMed Central

    Wei, Zhenzhong; Zhao, Kai

    2016-01-01

    Structural parameter calibration for the binocular stereo vision sensor (BSVS) is an important guarantee for high-precision measurements. We propose a method to calibrate the structural parameters of BSVS based on a double-sphere target. The target, consisting of two identical spheres with a known fixed distance, is freely placed in different positions and orientations. Any three non-collinear sphere centres determine a spatial plane whose normal vector under the two camera-coordinate-frames is obtained by means of an intermediate parallel plane calculated by the image points of sphere centres and the depth-scale factors. Hence, the rotation matrix R is solved. The translation vector T is determined using a linear method derived from the epipolar geometry. Furthermore, R and T are refined by nonlinear optimization. We also provide theoretical analysis on the error propagation related to the positional deviation of the sphere image and an approach to mitigate its effect. Computer simulations are conducted to test the performance of the proposed method with respect to the image noise level, target placement times and the depth-scale factor. Experimental results on real data show that the accuracy of measurement is higher than 0.9‰, with a distance of 800 mm and a view field of 250 × 200 mm2. PMID:27420063

  15. Measuring and tracking eye movements of a behaving archer fish by real-time stereo vision.

    PubMed

    Ben-Simon, Avi; Ben-Shahar, Ohad; Segev, Ronen

    2009-11-15

    The archer fish (Toxotes chatareus) exhibits unique visual behavior: it is able to aim at insects resting on the foliage above water level, shoot them down with a squirt of water, and then feed on them. This extreme behavior requires excellent visual acuity, learning, and tight synchronization between the visual system and body motion. This behavior also raises many important questions, such as the fish's ability to compensate for air-water refraction and the neural mechanisms underlying target acquisition. While many such questions remain open, significant insights towards solving them can be obtained by tracking the eye and body movements of freely behaving fish. Unfortunately, existing tracking methods suffer from either a high level of invasiveness or low resolution. Here, we present a video-based eye tracking method for accurately and remotely measuring the eye and body movements of a freely moving, behaving fish. Based on a stereo vision system and a unique triangulation method that corrects for air-glass-water refraction, we are able to measure the full three-dimensional pose of the fish's eye and body with high temporal and spatial resolution. Our method, being generic, can be applied to studying the behavior of marine animals in general. We demonstrate how data collected by our method may be used to show that the hunting behavior of the archer fish consists of surfacing while rotating the body around the direction of the fish's fixed gaze towards the target, until the snout reaches the correct shooting position at water level.

  16. Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system.

    PubMed

    Jia, Zhenyuan; Yang, Jinghao; Liu, Wei; Wang, Fuji; Liu, Yang; Wang, Lingli; Fan, Chaonan; Zhao, Kai

    2015-06-15

    High-precision calibration of binocular vision systems plays an important role in accurate dimensional measurements. In this paper, an improved camera calibration method is proposed. First, an accurate intrinsic parameters calibration method based on active vision with perpendicularity compensation is developed. Compared to the previous work, this method eliminates the effect of non-perpendicularity of the camera motion on calibration accuracy. The principal point, scale factors, and distortion factors are calculated independently in this method, thereby allowing the strong coupling of these parameters to be eliminated. Second, an accurate global optimization method with only 5 images is presented. The results of calibration experiments show that the accuracy of the calibration method can reach 99.91%.

  17. Vision by Man and Machine.

    ERIC Educational Resources Information Center

    Poggio, Tomaso

    1984-01-01

    Studies of stereo vision guide research on how animals see and how computers might accomplish this human activity. Discusses a sequence of algorithms to first extract information from visual images and then to calculate the depths of objects in the three-dimensional world, concentrating on stereopsis (stereo vision). (JN)

  18. Stereo-vision system for finger tracking in breast self-examination

    NASA Astrophysics Data System (ADS)

    Zeng, Jianchao; Wang, Yue J.; Freedman, Matthew T.; Mun, Seong K.

    1997-05-01

    minor changes in illumination. Neighbor search is employed to ensure real-time performance, and a three-finger pattern topology is always checked for extracted features to avoid any possible false features. After detecting the features in the images, the 3D position parameters of the colored fingers are calculated using the stereo vision principle. In the experiments, a performance of 15 frames/second is obtained using an image size of 160 × 120 on an SGI Indy MIPS R4000 workstation. The system is robust and accurate, which confirms the performance and effectiveness of the proposed approach. The system can be used to quantify the search strategy of palpation and its documentation. With real-time visual feedback, it can also be used to train both patients and new physicians to improve their performance of palpation and thus improve the rate of breast tumor detection.

  19. Stereo and motion parallax cues in human 3D vision: can they vanish without a trace?

    PubMed

    Rauschecker, Andreas M; Solomon, Samuel G; Glennerster, Andrew

    2006-12-19

    In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Because of feedback, observers were able to adjust responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues.

  20. Real and virtual robot head for active vision research

    NASA Astrophysics Data System (ADS)

    Marapane, Suresh B.; Lassiter, Nils T.; Trivedi, Mohan M.

    1992-11-01

    In the emerging paradigm of animate vision, the visual processes are not thought of as being independent of cognitive or motor processing, but as an integrated system within the context of visual behavior. Intimate coupling of sensory and motor systems has been found to improve significantly the performance of behavior-based vision systems. In order to conduct research in animate vision, one requires an active image acquisition platform. This platform should possess the capability to change the geometrical and optical parameters of the sensors under the control of a computer. This has led to the development of several robotic sensory-motor systems with multiple degrees of freedom (DOF). In this paper we describe the status of ongoing work in developing a sensory-motor robotic system, R2H, with ten DOF for research in active vision. A Graphical Simulation and Animation (GSA) environment is also presented. The objective of building the GSA system is to create an environment that aids researchers in developing high-performance and reliable software and hardware in the most effective manner. The GSA includes a complete kinematic simulation of the R2H system, its sensors and its workspace. The GSA environment is not meant to be a substitute for performing real experiments but to complement them. Thus, the GSA environment will be an integral part of the total research effort. With the aid of the GSA environment, Depth from Defocus (DFD), Depth from Vergence, and Depth from Stereo modules have been implemented and tested. The power and usefulness of the GSA system as a research tool is demonstrated by acquiring and analyzing stereo images in the virtual world.

  1. Stereo vision-based depth of field rendering on a mobile device

    NASA Astrophysics Data System (ADS)

    Wang, Qiaosong; Yu, Zhan; Rasmussen, Christopher; Yu, Jingyi

    2014-03-01

    The depth of field (DoF) effect is a useful tool in photography and cinematography because of its aesthetic value. However, capturing and displaying a dynamic DoF effect was until recently a capability unique to expensive and bulky movie cameras. A computational approach to generate realistic DoF effects for mobile devices such as tablets is proposed. We first calibrate the rear-facing stereo cameras and rectify the stereo image pairs through the FCam API, then generate a low-resolution disparity map using graph cuts stereo matching and subsequently upsample it via joint bilateral upsampling. Next, we generate a synthetic light field by warping the raw color image to nearby viewpoints, according to the corresponding values in the upsampled high-resolution disparity map. Finally, we render the dynamic DoF effect on the tablet screen with light field rendering. The user can easily capture and generate desired DoF effects with arbitrary aperture sizes or focal depths using the tablet only, with no additional hardware or software required. The system has been examined in a variety of environments with satisfactory results, according to subjective evaluation tests.
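The paper renders DoF by warping a synthetic light field; as a much simpler illustration of how a disparity map can drive a DoF effect, here is a naive spatially varying blur whose kernel radius grows with distance from the focal plane. All parameters and the blur model are illustrative assumptions, not the paper's method:

```python
import numpy as np

def render_dof(image, disparity, focus_disp, max_radius=5, gain=2.0):
    """Naive synthetic depth-of-field on a grayscale image: blur each pixel
    with a box kernel whose radius grows with the pixel's disparity offset
    from the in-focus plane."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    radii = np.clip((gain * np.abs(disparity - focus_disp)).astype(int), 0, max_radius)
    for y in range(h):
        for x in range(w):
            r = radii[y, x]
            patch = image[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = patch.mean()
    return out

# Pixels on the focal plane stay sharp; off-plane pixels are spread out
img = np.zeros((9, 9)); img[4, 4] = 1.0
disp = np.full((9, 9), 10.0)                    # whole scene at disparity 10
sharp = render_dof(img, disp, focus_disp=10)    # in focus -> unchanged
blurred = render_dof(img, disp, focus_disp=13)  # out of focus -> averaged
print(np.allclose(sharp, img), blurred[4, 4] < 1.0)
```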

  2. Curveslam: Utilizing Higher Level Structure In Stereo Vision-Based Navigation

    DTIC Science & Technology

    2012-01-01

  3. Needle guidance using handheld stereo vision and projection for ultrasound-based interventions.

    PubMed

    Stolka, Philipp J; Foroughi, Pezhman; Rendina, Matthew; Weiss, Clifford R; Hager, Gregory D; Boctor, Emad M

    2014-01-01

    With real-time instrument tracking and in-situ guidance projection directly integrated in a handheld ultrasound imaging probe, needle-based interventions such as biopsies become much simpler to perform than with conventionally-navigated systems. Stereo imaging with needle detection can be made sufficiently robust and accurate to serve as primary navigation input. We describe the low-cost, easy-to-use approach used in the Clear Guide ONE generic navigation accessory for ultrasound machines, outline different available guidance methods, and provide accuracy results from phantom trials.

  4. A Skeleton-Based 3D Shape Reconstruction of Free-Form Objects with Stereo Vision

    NASA Astrophysics Data System (ADS)

    Saini, Deepika; Kumar, Sanjeev

    2015-12-01

    In this paper, an efficient approach is proposed for recovering the 3D shape of a free-form object from an arbitrary pair of its stereo images. In particular, the reconstruction problem is treated as the reconstruction of the skeleton and the external boundary of the object. The reconstructed skeleton is termed the line-like representation or curve-skeleton of the 3D object. The proposed solution for object reconstruction is based on this evolved curve-skeleton. It is used as a seed for recovering the shape of the 3D object, and the extracted boundary is used for terminating the growing process. NURBS-skeleton is used to extract the skeleton of both views. The affine-invariant property of convex hulls is used to establish the correspondence between the skeletons and boundaries in the stereo images. In the growing process, a distance field is defined for each skeleton point as the smallest distance from that point to the boundary of the object. A sphere centered at a skeleton point, of radius equal to the minimum distance to the boundary, is tangential to the boundary. Filling in the spheres centered at each skeleton point reconstructs the object. Several results are presented in order to check the applicability and validity of the proposed algorithm.
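The growing process described above — filling in spheres whose radii come from the skeleton's distance field — can be sketched on a voxel grid. This is a generic illustration of the sphere-union idea, not the authors' implementation:

```python
import numpy as np

def fill_spheres(skeleton_pts, radii, grid_shape):
    """Union of spheres centred at skeleton points: a voxel belongs to the
    object if it lies within the distance-field radius of any skeleton point."""
    zz, yy, xx = np.indices(grid_shape)
    vox = np.stack([xx, yy, zz], axis=-1).astype(float)  # (x, y, z) per voxel
    occ = np.zeros(grid_shape, dtype=bool)
    for c, r in zip(skeleton_pts, radii):
        occ |= np.linalg.norm(vox - np.asarray(c, dtype=float), axis=-1) <= r
    return occ

# A straight 'tube': skeleton along x at y = z = 5, constant radius 2
skel = [(x, 5.0, 5.0) for x in range(2, 9)]
occ = fill_spheres(skel, [2.0] * len(skel), (11, 11, 11))
print(bool(occ[5, 5, 5]), bool(occ[0, 0, 0]))  # axis voxel inside, corner outside
```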

  5. 3D localization of a labeled target by means of a stereo vision configuration with subvoxel resolution.

    PubMed

    Arias H, Néstor A; Sandoz, Patrick; Meneses, Jaime E; Suarez, Miguel A; Gharbi, Tijani

    2010-11-08

    We present a method for the visual measurement of the 3D position and orientation of a moving target. Three-dimensional sensing is based on stereo vision, while high resolution results from a pseudo-periodic pattern (PPP) fixed onto the target. The PPP is suited for optimizing image processing that is based on phase computations. We describe the experimental setup, image processing and system calibration. Resolutions reported are in the micrometer range for target position (x, y, z) and 5.3 × 10⁻⁴ rad for target orientation (θx, θy, θz). These performances have to be appreciated with respect to the vision system used: every image pixel corresponds to an actual area of 0.3 × 0.3 mm² on the target, while the PPP is made of elementary dots of 1 mm with a period of 2 mm. Target tilts as large as π/4 are allowed with respect to the Z axis of the system.

  6. Interactive stereo games to improve vision in children with amblyopia using dichoptic stimulation

    NASA Astrophysics Data System (ADS)

    Herbison, Nicola; Ash, Isabel M.; MacKeith, Daisy; Vivian, Anthony; Purdy, Jonathan H.; Fakis, Apostolos; Cobb, Sue V.; Hepburn, Trish; Eastgate, Richard M.; Gregson, Richard M.; Foss, Alexander J. E.

    2015-03-01

    Amblyopia is a common condition affecting 2% of all children, and traditional treatment consists of either wearing a patch or penalisation. We have developed a treatment using stereo technology, not to provide a 3D image but to allow dichoptic stimulation. This involves presenting an image with the same background to both eyes but with features of interest removed from the image presented to the normal eye, with the aim of preferentially stimulating visual development in the amblyopic, or lazy, eye. Our system, called I-BiT, can use either a game or a video (DVD) source as input. Pilot studies show that this treatment is effective with short treatment times, and it has proceeded to a randomised controlled clinical trial. The early indications are that the treatment has a high degree of acceptability and correspondingly good compliance.

  7. Stereo vision and laser odometry for autonomous helicopters in GPS-denied indoor environments

    NASA Astrophysics Data System (ADS)

    Achtelik, Markus; Bachrach, Abraham; He, Ruijie; Prentice, Samuel; Roy, Nicholas

    2009-05-01

    This paper presents our solution for enabling a quadrotor helicopter to autonomously navigate unstructured and unknown indoor environments. We compare two sensor suites, specifically a laser rangefinder and a stereo camera. Laser and camera sensors are both well-suited for recovering the helicopter's relative motion and velocity. Because they use different cues from the environment, each sensor has its own set of advantages and limitations that are complementary to the other sensor. Our eventual goal is to integrate both sensors on-board a single helicopter platform, leading to the development of an autonomous helicopter system that is robust to generic indoor environmental conditions. In this paper, we present results in this direction, describing the key components for autonomous navigation using either of the two sensors separately.

  8. Finger tracking for hand-held device interface using profile-matching stereo vision

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Ping; Lee, Dah-Jye; Moore, Jason; Desai, Alok; Tippetts, Beau

    2013-01-01

    Hundreds of millions of people use hand-held devices frequently and control them by touching the screen with their fingers. When this method of operation is used by people who are driving, the probability of accidents and deaths increases substantially. With a non-contact control interface, people do not need to touch the screen. As a result, drivers will not need to pay as much attention to their phones and can thus drive more safely than they would otherwise. This interface can be achieved with real-time stereo vision. A novel Intensity Profile Shape-Matching Algorithm is able to obtain 3-D information from a pair of stereo images in real time. While this algorithm does have a trade-off between accuracy and processing speed, the results prove that its accuracy is sufficient for the practical use of recognizing human poses and tracking finger movement. By choosing an interval of disparity, an object at a certain distance range can be segmented; in other words, we detect the object by its distance to the cameras. The advantage of this profile shape-matching algorithm is that detection of correspondences relies on the shape of the profile and not on intensity values, which are subject to lighting variations. Based on the resulting 3-D information, the movement of fingers in space at a specific distance can be determined. Finger location and movement can then be analyzed for non-contact control of hand-held devices.
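The distance-based segmentation step — keeping only pixels whose disparity falls within a chosen interval — is straightforward to sketch; the disparity values below are illustrative:

```python
import numpy as np

def segment_by_disparity(disparity, d_min, d_max):
    """Mask pixels whose disparity lies in [d_min, d_max], i.e. objects within
    a chosen distance band from the stereo rig (disparity scales as B*f/Z)."""
    return (disparity >= d_min) & (disparity <= d_max)

# Hand at ~40 px disparity, background at ~5 px
disp = np.full((4, 4), 5.0)
disp[1:3, 1:3] = 40.0
mask = segment_by_disparity(disp, 30, 50)
print(int(mask.sum()))  # 4 pixels belong to the near object
```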

  9. Calibration of a dual-PTZ-camera system for stereo vision based on parallel particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Wang, Huai-Ming; Lee, Shih-Tseng; Wu, Chieh-Tsai; Hsu, Ming-Hsi

    2014-02-01

    This work investigates the calibration of a stereo vision system based on two PTZ (Pan-Tilt-Zoom) cameras. As the accuracy of the system depends not only on intrinsic parameters, but also on the geometric relationships between rotation axes of the cameras, the major concern is the development of an effective and systematic way to obtain these relationships. We derived a complete geometric model of the dual-PTZ-camera system and proposed a calibration procedure for the intrinsic and external parameters of the model. The calibration method is based on Zhang's approach using an augmented checkerboard composed of eight small checkerboards, and is formulated as an optimization problem to be solved by an improved particle swarm optimization (PSO) method. Two Sony EVI-D70 PTZ cameras were used for the experiments. The root-mean-square errors (RMSE) of corner distances in the horizontal and vertical direction are 0.192 mm and 0.115 mm, respectively. The RMSE of overlapped points between the small checkerboards is 1.3958 mm.
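The abstract's improved PSO is not specified in detail; a minimal global-best PSO, applied here to a toy quadratic "calibration residual", shows the general mechanism. Coefficients and bounds are common defaults, not the authors' settings:

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer, as a stand-in for the
    parallel PSO used to refine the dual-PTZ calibration parameters."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                             # velocities
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

# Toy residual: quadratic bowl with optimum at (1, 2, 3)
target = np.array([1.0, 2.0, 3.0])
best, err = pso_minimize(lambda p: np.sum((p - target) ** 2), dim=3)
print(err < 1e-4)
```

In the paper's setting the cost would be the reprojection error of the augmented-checkerboard corners under the dual-PTZ geometric model.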

  10. Quality inspection guided laser processing of irregular shape objects by stereo vision measurement: application in badminton shuttle manufacturing

    NASA Astrophysics Data System (ADS)

    Qi, Li; Wang, Shun; Zhang, Yixin; Sun, Yingying; Zhang, Xuping

    2015-11-01

    The quality inspection process is usually carried out after first processing of the raw materials, such as cutting and milling. This is because the parts of the materials to be used are unidentified until they have been trimmed. If the quality of the material is assessed before the laser process, then the energy and effort wasted on defective materials can be saved. We propose a new production scheme that achieves quantitative quality inspection prior to primitive laser cutting by means of three-dimensional (3-D) vision measurement. First, the 3-D model of the object is reconstructed by the stereo cameras, from which the spatial cutting path is derived. Second, collaborating with another rear camera, the 3-D cutting path is reprojected to both the frontal and rear views of the object, thus generating the regions-of-interest (ROIs) for surface defect analysis. An accurate visually guided laser process and reprojection-based ROI segmentation are enabled by a global-optimization-based trinocular calibration method. The prototype system was built and tested with the processing of raw duck feathers for high-quality badminton shuttle manufacture. Incorporating a two-dimensional wavelet-decomposition-based defect analysis algorithm, both the geometrical and appearance features of the raw feathers are quantified before they are cut into small patches, resulting in fully automatic feather cutting and sorting.

  11. Real-time stereo generation for surgical vision during minimal invasive robotic surgery

    NASA Astrophysics Data System (ADS)

    Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod

    2016-03-01

    This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of in-vivo live surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection of the two interlaced images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at good speed at full HD resolution.
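Row interlacing for a passive polarized display is one common scheme consistent with the description above; the paper does not detail its interlacing pattern, so this sketch is an assumption:

```python
import numpy as np

def interlace_rows(left, right):
    """Row-interlace a stereo pair for a passive polarized 3D display:
    even rows carry the left view, odd rows the right view."""
    assert left.shape == right.shape
    out = left.copy()
    out[1::2] = right[1::2]
    return out

L = np.zeros((4, 3), dtype=np.uint8)
R = np.full((4, 3), 255, dtype=np.uint8)
frame = interlace_rows(L, R)
print(frame[:, 0].tolist())  # [0, 255, 0, 255]
```

The display's polarizing filter then routes even rows to one eye and odd rows to the other through the viewer's glasses.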

  12. Scene interpretation module for an active vision system

    NASA Astrophysics Data System (ADS)

    Remagnino, P.; Matas, J.; Illingworth, John; Kittler, Josef

    1993-08-01

    In this paper an implementation of a high level symbolic scene interpreter for an active vision system is considered. The scene interpretation module uses low level image processing and feature extraction results to achieve object recognition and to build up a 3D environment map. The module is structured to exploit spatio-temporal context provided by existing partial world interpretations and has spatial reasoning to direct gaze control and thereby achieve efficient and robust processing using spatial focus of attention. The system builds and maintains an awareness of an environment which is far larger than a single camera view. Experiments on image sequences have shown that the system can: establish its position and orientation in a partially known environment, track simple moving objects such as cups and boxes, temporally integrate recognition results to establish or forget object presence, and utilize spatial focus of attention to achieve efficient and robust object recognition. The system has been extensively tested using images from a single steerable camera viewing a simple table top scene containing box and cylinder-like objects. Work is currently progressing to further develop its competences and interface it with the Surrey active stereo vision head, GETAFIX.

  13. Stereo Vision Inside Tire

    DTIC Science & Technology

    2015-08-21

    measures the three-dimensional deformation of the inside of a tire as it rolls over the terrain. The system is designed to work in conjunction with the previously developed Vehicle Dynamics Group...tire. The complete T2-CAM system, in conjunction with the VDG wheel force transducer, is mounted on a test vehicle as indicated in Figure 6. Several

  14. Sensor fusion of phase measuring profilometry and stereo vision for three-dimensional inspection of electronic components assembled on printed circuit boards.

    PubMed

    Hong, Deokhwa; Lee, Hyunki; Kim, Min Young; Cho, Hyungsuck; Moon, Jeon Il

    2009-07-20

    Automatic optical inspection (AOI) for printed circuit board (PCB) assembly plays a very important role in modern electronics manufacturing industries. Well-developed inspection machines in each assembly process are required to ensure the manufacturing quality of electronics products. However, almost all AOI machines are based on 2D image-analysis technology. In this paper, a 3D-measurement-based AOI system is proposed, consisting of a phase shifting profilometer and a stereo vision system, for electronic components assembled on a PCB after component mounting and the reflow process. In this system, information from the two visual systems is fused to extend the shape measurement range limited by the 2π phase ambiguity of the phase shifting profilometer, and ultimately to maintain the fine measurement resolution and high accuracy of the phase shifting profilometer over the measurement range extended by the stereo vision. The main purpose is to overcome the low inspection reliability of 2D-based inspection machines by using 3D information about the components. The 3D shape measurement results on PCB-mounted electronic components are shown and compared with results from contact and noncontact 3D measuring machines. Based on a series of experiments, the usefulness of the proposed sensor system and its fusion technique are discussed and analyzed in detail.
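The fusion idea — letting a coarse stereo height estimate select the integer fringe order that the wrapped phase alone cannot disambiguate — can be sketched as follows. Fringe period and heights are illustrative values:

```python
import numpy as np

def unwrap_with_coarse(phi_wrapped, h_coarse, h_per_period):
    """Resolve the 2π fringe-order ambiguity of a phase-shifting profilometer
    using a coarse height estimate (e.g. from stereo): pick the integer fringe
    order k that brings the fine phase height closest to the coarse height."""
    h_fine_frac = phi_wrapped / (2 * np.pi) * h_per_period  # height within one period
    k = np.round((h_coarse - h_fine_frac) / h_per_period)   # integer fringe order
    return h_fine_frac + k * h_per_period

# True height 7.3 mm, fringe period 2 mm -> wrapped phase encodes only 1.3 mm;
# a coarse stereo estimate of 7.0 mm recovers the missing 3 full periods.
h_true, period = 7.3, 2.0
phi = 2 * np.pi * (h_true % period) / period
print(round(float(unwrap_with_coarse(phi, h_coarse=7.0, h_per_period=period)), 3))
```

The result keeps the profilometer's fine resolution while the stereo estimate only needs to be accurate to within half a fringe period.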

  15. Coevolution of active vision and feature selection.

    PubMed

    Floreano, Dario; Kato, Toshifumi; Marocco, Davide; Sauser, Eric

    2004-03-01

    We show that complex visual tasks, such as position- and size-invariant shape recognition and navigation in the environment, can be tackled with simple architectures generated by a coevolutionary process of active vision and feature selection. Behavioral machines equipped with primitive vision systems and direct pathways between visual and motor neurons are evolved while they freely interact with their environments. We describe the application of this methodology in three sets of experiments, namely, shape discrimination, car driving, and robot navigation. We show that these systems develop sensitivity to a number of oriented, retinotopic visual features (oriented edges, corners, height in the visual field) and a behavioral repertoire to locate, bring, and keep these features in sensitive regions of the vision system, resembling strategies observed in simple insects.

  16. Stereo and photometric image sequence interpretation for detecting negative obstacles using active gaze control and performing an autonomous jink

    NASA Astrophysics Data System (ADS)

    Hofmann, Ulrich; Siedersberger, Karl-Heinz

    2003-09-01

    Driving cross-country, the detection and state estimation of negative obstacles like ditches and creeks is mandatory for safe operation. Very often, ditches can be detected both by different photometric properties (soil vs. vegetation) and by range (disparity) discontinuities. Therefore, algorithms should make use of both the photometric and geometric properties to reliably detect obstacles. This has been achieved in UBM's EMS-Vision system (Expectation-based, Multifocal, Saccadic) for autonomous vehicles. The perception system uses Sarnoff's image processing hardware for real-time stereo vision. This sensor provides both gray-value and disparity information for each pixel at high resolution and frame rates. In order to perform an autonomous jink, the boundaries of an obstacle have to be measured accurately for calculating a safe driving trajectory. Ditches in particular are often very extended, so, given the restricted field of view of the cameras, active gaze control is necessary to explore the boundaries of an obstacle. For successful measurement of image features, the system has to satisfy conditions defined by the perception expert. It has to deal with the time constraints of the active camera platform while performing saccades and to keep the geometric conditions defined by the locomotion expert for performing a jink. Therefore, the experts have to cooperate. This cooperation is controlled by a central decision unit (CD), which has knowledge about the mission, the capabilities available in the system, and their limitations. The central decision unit reacts, depending on the result of situation assessment, by starting, parameterizing or stopping actions (instances of capabilities). The approach has been tested with the 5-ton van VaMoRs. Experimental results are shown for driving in a typical off-road scenario.

  17. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    PubMed

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the position of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerance of the joints and the parallel mechanism, so errors exist between the actual position and the calculated 3D position of the end-effector. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCD cameras, serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to
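    The stereo measurement of the end-effector position described above reduces, for an idealized rectified two-camera (CCD) setup, to standard triangulation. The pinhole model and all names below are assumptions for illustration, not details taken from the paper.

```python
def triangulate(xl, xr, y, focal, baseline):
    """Rectified two-camera triangulation (ideal pinhole model):
    disparity d = xl - xr (pixels) gives depth Z = f * b / d,
    then X and Y follow from back-projection."""
    d = xl - xr
    if d <= 0:
        raise ValueError("point must have positive disparity")
    Z = focal * baseline / d
    X = xl * Z / focal
    Y = y * Z / focal
    return X, Y, Z

# A point at (0.1, 0.2, 2.0) m, seen with f = 500 px and b = 0.3 m,
# projects to xl = 25 px, xr = -50 px, y = 50 px.
X, Y, Z = triangulate(25.0, -50.0, 50.0, focal=500.0, baseline=0.3)
```

    In the paper's scheme, a measurement like this would serve only to calibrate the kinematics-derived position, whose error budget includes joint and assembly tolerances.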

  18. Constraint-based stereo matching

    NASA Technical Reports Server (NTRS)

    Kuan, D. T.

    1987-01-01

    The major difficulty in stereo vision is the correspondence problem that requires matching features in two stereo images. Researchers describe a constraint-based stereo matching technique using local geometric constraints among edge segments to limit the search space and to resolve matching ambiguity. Edge segments are used as image features for stereo matching. Epipolar constraint and individual edge properties are used to determine possible initial matches between edge segments in a stereo image pair. Local edge geometric attributes such as continuity, junction structure, and edge neighborhood relations are used as constraints to guide the stereo matching process. The result is a locally consistent set of edge segment correspondences between stereo images. These locally consistent matches are used to generate higher-level hypotheses on extended edge segments and junctions to form more global contexts to achieve global consistency.
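    A minimal sketch of the initial-match stage described above — the epipolar constraint plus edge-attribute agreement pruning the candidate pairs — assuming rectified images and purely illustrative thresholds (none of these names or values come from the paper):

```python
def candidate_matches(left_edges, right_edges, max_epipolar_dist=1.0,
                      max_orientation_diff=0.2):
    """Initial match hypotheses between edge segments (y, x, orientation).
    A pair is admissible only if the right edge lies near the left edge's
    epipolar line (same scanline, since images are assumed rectified) and
    the edge attributes agree. Illustrative sketch, not the paper's code."""
    matches = []
    for i, (yl, xl, thl) in enumerate(left_edges):
        for j, (yr, xr, thr) in enumerate(right_edges):
            if abs(yl - yr) > max_epipolar_dist:       # epipolar constraint
                continue
            if abs(thl - thr) > max_orientation_diff:  # attribute similarity
                continue
            if xr > xl:                                # require positive disparity
                continue
            matches.append((i, j))
    return matches

left_edges = [(10.0, 50.0, 0.0)]
right_edges = [(10.0, 40.0, 0.05),  # survives all tests
               (30.0, 40.0, 0.0),   # fails epipolar constraint
               (10.0, 60.0, 0.0)]   # fails disparity sign
matches = candidate_matches(left_edges, right_edges)
```

    The paper's local geometric constraints (continuity, junction structure, neighborhood relations) would then prune and re-rank these hypotheses toward a globally consistent set.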

  19. Robust active binocular vision through intrinsically motivated learning.

    PubMed

    Lonini, Luca; Forestier, Sébastien; Teulière, Céline; Zhao, Yu; Shi, Bertram E; Triesch, Jochen

    2013-01-01

    The efficient coding hypothesis posits that sensory systems of animals strive to encode sensory signals efficiently by taking into account the redundancies in them. This principle has been very successful in explaining response properties of visual sensory neurons as adaptations to the statistics of natural images. Recently, we have begun to extend the efficient coding hypothesis to active perception through a form of intrinsically motivated learning: a sensory model learns an efficient code for the sensory signals while a reinforcement learner generates movements of the sense organs to improve the encoding of the signals. To this end, it receives an intrinsically generated reinforcement signal indicating how well the sensory model encodes the data. This approach has been tested in the context of binocular vision, leading to the autonomous development of disparity tuning and vergence control. Here we systematically investigate the robustness of the new approach in the context of a binocular vision system implemented on a robot. Robustness is an important aspect that reflects the ability of the system to deal with unmodeled disturbances or events, such as insults to the system that displace the stereo cameras. To demonstrate the robustness of our method and its ability to self-calibrate, we introduce various perturbations and test if and how the system recovers from them. We find that (1) the system can fully recover from a perturbation that can be compensated through the system's motor degrees of freedom, (2) performance degrades gracefully if the system cannot use its motor degrees of freedom to compensate for the perturbation, and (3) recovery from a perturbation is improved if both the sensory encoding and the behavior policy can adapt to the perturbation. Overall, this work demonstrates that our intrinsically motivated learning approach for efficient coding in active perception gives rise to a self-calibrating perceptual system of high robustness.

  20. Active vision in satellite scene analysis

    NASA Technical Reports Server (NTRS)

    Naillon, Martine

    1994-01-01

    In earth observation or planetary exploration it is necessary to have more and more autonomous systems, able to adapt to unpredictable situations. This imposes the use, in artificial systems, of new concepts in cognition, based on the fact that perception should not be separated from recognition and decision-making levels. This means that low-level signal processing (perception level) should interact with symbolic and high-level processing (decision level). This paper describes the new concept of active vision, implemented in Distributed Artificial Intelligence by Dassault Aviation following a 'structuralist' principle. An application to spatial image interpretation is given, oriented toward flexible robotics.

  1. Pattern recognition and active vision in chickens.

    PubMed

    Dawkins, M S; Woodington, A

    2000-02-10

    Recognition of objects or environmental landmarks is problematic because appearance can vary widely depending on illumination, viewing distance, angle of view and so on. Storing a separate image or 'template' for every possible view requires vast numbers to be stored and scanned, has a high probability of recognition error and appears not to be the solution adopted by primates. However, some invertebrate template matching systems can achieve recognition by 'active vision' in which the animal's own behaviour is used to achieve a fit between template and object, for example by repeatedly following a set path. Recognition is thus limited to views from the set path but achieved with a minimal number of templates. Here we report the first evidence of similar active vision in a bird, in the form of locomotion and individually distinct head movements that give the eyes a similar series of views on different occasions. The hens' ability to recognize objects is also found to decrease when their normal paths are altered.

  2. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    NASA Astrophysics Data System (ADS)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building the fine 3D model from outdoor to indoor is becoming a necessity for protecting the cultural tourism resources. However, the existing 3D modelling technologies mainly focus on outdoor areas. Actually, a 3D model should contain detailed descriptions of both its appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which can provide a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes of an object or a scene, as well as physical property information such as the materials and textures. On the basis of the information, 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture existing in two typical cultural tourism zones, that is, Tibetan and Qiang ethnic minority villages in Sichuan Jiuzhaigou Scenic Area and Tujia ethnic minority villages in Hubei Shennongjia Nature Reserve, providing a new method and platform for protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  3. Leisure Activity Participation of Elderly Individuals with Low Vision.

    ERIC Educational Resources Information Center

    Heinemann, Allen W.

    1988-01-01

    Studied low vision elderly clinic patients (N=63) who reported participation in six categories of leisure activities currently and at onset of vision loss. Found subjects reported significant declines in five of six activity categories. Found prior activity participation was related to current participation only for active crafts, participatory…

  4. Deep vision: an in-trawl stereo camera makes a step forward in monitoring the pelagic community.

    PubMed

    Underwood, Melanie J; Rosen, Shale; Engås, Arill; Eriksen, Elena

    2014-01-01

    Ecosystem surveys are carried out annually in the Barents Sea by Russia and Norway to monitor the spatial distribution of ecosystem components and to study population dynamics. One component of the survey is mapping the upper pelagic zone using a trawl towed at several depths. However, the current technique with a single codend does not provide the fine-scale spatial data needed to directly study species overlaps. An in-trawl camera system, Deep Vision, was mounted in front of the codend in order to acquire continuous images of all organisms passing. It was possible to identify and quantify most young-of-the-year fish (e.g. Gadus morhua, Boreogadus saida and Reinhardtius hippoglossoides) and zooplankton, including Ctenophora, which are usually damaged in the codend. The system showed potential for measuring the length of small organisms and also recorded the vertical and horizontal positions where individuals were imaged. Young-of-the-year fish were difficult to identify when passing the camera at maximum range and to quantify during high densities. In addition, a large number of fish with damaged opercula were observed passing the Deep Vision camera during heaving, suggesting individuals had become entangled in meshes farther forward in the trawl. This indicates that unknown numbers of fish are probably lost in forward sections of the trawl and that the heaving procedure may influence the number of fish entering the codend, with implications for abundance indices and understanding population dynamics. This study suggests modifications to the Deep Vision and the trawl to increase our understanding of the population dynamics.

  5. Deep Vision: An In-Trawl Stereo Camera Makes a Step Forward in Monitoring the Pelagic Community

    PubMed Central

    Underwood, Melanie J.; Rosen, Shale; Engås, Arill; Eriksen, Elena

    2014-01-01

    Ecosystem surveys are carried out annually in the Barents Sea by Russia and Norway to monitor the spatial distribution of ecosystem components and to study population dynamics. One component of the survey is mapping the upper pelagic zone using a trawl towed at several depths. However, the current technique with a single codend does not provide the fine-scale spatial data needed to directly study species overlaps. An in-trawl camera system, Deep Vision, was mounted in front of the codend in order to acquire continuous images of all organisms passing. It was possible to identify and quantify most young-of-the-year fish (e.g. Gadus morhua, Boreogadus saida and Reinhardtius hippoglossoides) and zooplankton, including Ctenophora, which are usually damaged in the codend. The system showed potential for measuring the length of small organisms and also recorded the vertical and horizontal positions where individuals were imaged. Young-of-the-year fish were difficult to identify when passing the camera at maximum range and to quantify during high densities. In addition, a large number of fish with damaged opercula were observed passing the Deep Vision camera during heaving, suggesting individuals had become entangled in meshes farther forward in the trawl. This indicates that unknown numbers of fish are probably lost in forward sections of the trawl and that the heaving procedure may influence the number of fish entering the codend, with implications for abundance indices and understanding population dynamics. This study suggests modifications to the Deep Vision and the trawl to increase our understanding of the population dynamics. PMID:25393121

  6. Topography from shading and stereo

    NASA Technical Reports Server (NTRS)

    Horn, Berthold K. P.

    1994-01-01

    Methods exploiting photometric information in images that have been developed in machine vision can be applied to planetary imagery. Integrating shape from shading, binocular stereo, and photometric stereo yields a robust system for recovering detailed surface shape and surface reflectance information. Such a system is useful in producing quantitative information from the vast volume of imagery being received, as well as in helping visualize the underlying surface.
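    Of the cues mentioned, photometric stereo has the simplest closed form: with three known light directions and a Lambertian surface, the per-pixel intensities determine albedo and surface normal directly. A toy sketch of the three-light case (the light directions and helper names are ours, for illustration only):

```python
import math

def solve3(A, b):
    """Cramer's-rule solve of a 3x3 linear system (enough for three lights)."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(A)
    xs = []
    for k in range(3):
        Ak = [row[:] for row in A]
        for r in range(3):
            Ak[r][k] = b[r]
        xs.append(det(Ak) / D)
    return xs

def photometric_stereo(lights, intensities):
    """Classic three-light photometric stereo under a Lambertian assumption:
    solve L g = I for g = albedo * normal, then split magnitude/direction."""
    g = solve3(lights, intensities)
    albedo = math.sqrt(sum(c * c for c in g))
    normal = [c / albedo for c in g]
    return albedo, normal

# Lights along the three axes; a surface of albedo 0.5 facing the z-light.
albedo, normal = photometric_stereo(
    [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
    [0.0, 0.0, 0.5])
```

    Integrating the resulting normal field yields topography, which is where the fusion with shading and binocular stereo described in the abstract comes in.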

  7. Sharpening coarse-to-fine stereo vision by perceptual learning: asymmetric transfer across the spatial frequency spectrum.

    PubMed

    Li, Roger W; Tran, Truyet T; Craven, Ashley P; Leung, Tsz-Wing; Chat, Sandy W; Levi, Dennis M

    2016-01-01

    Neurons in the early visual cortex are finely tuned to different low-level visual features, forming a multi-channel system analysing the visual image formed on the retina in a parallel manner. However, little is known about the potential 'cross-talk' among these channels. Here, we systematically investigated whether stereoacuity, over a large range of target spatial frequencies, can be enhanced by perceptual learning. Using narrow-band visual stimuli, we found that practice with coarse (low spatial frequency) targets substantially improves performance, and that the improvement spreads from coarse to fine (high spatial frequency) three-dimensional perception, generalizing broadly across untrained spatial frequencies and orientations. Notably, we observed an asymmetric transfer of learning across the spatial frequency spectrum. The bandwidth of transfer was broader when training was at a high spatial frequency than at a low spatial frequency. Stereoacuity training is most beneficial when trained with fine targets. This broad transfer of stereoacuity learning contrasts with the highly specific learning reported for other basic visual functions. We also revealed strategies to boost learning outcomes 'beyond-the-plateau'. Our investigations contribute to understanding the functional properties of the network subserving stereovision. The ability to generalize may provide a key principle for restoring impaired binocular vision in clinical situations.

  8. Sharpening coarse-to-fine stereo vision by perceptual learning: asymmetric transfer across the spatial frequency spectrum

    PubMed Central

    Tran, Truyet T.; Craven, Ashley P.; Leung, Tsz-Wing; Chat, Sandy W.; Levi, Dennis M.

    2016-01-01

    Neurons in the early visual cortex are finely tuned to different low-level visual features, forming a multi-channel system analysing the visual image formed on the retina in a parallel manner. However, little is known about the potential ‘cross-talk’ among these channels. Here, we systematically investigated whether stereoacuity, over a large range of target spatial frequencies, can be enhanced by perceptual learning. Using narrow-band visual stimuli, we found that practice with coarse (low spatial frequency) targets substantially improves performance, and that the improvement spreads from coarse to fine (high spatial frequency) three-dimensional perception, generalizing broadly across untrained spatial frequencies and orientations. Notably, we observed an asymmetric transfer of learning across the spatial frequency spectrum. The bandwidth of transfer was broader when training was at a high spatial frequency than at a low spatial frequency. Stereoacuity training is most beneficial when trained with fine targets. This broad transfer of stereoacuity learning contrasts with the highly specific learning reported for other basic visual functions. We also revealed strategies to boost learning outcomes ‘beyond-the-plateau’. Our investigations contribute to understanding the functional properties of the network subserving stereovision. The ability to generalize may provide a key principle for restoring impaired binocular vision in clinical situations. PMID:26909178

  9. Recent and episodic volcanic and glacial activity on Mars revealed by the High Resolution Stereo Camera.

    PubMed

    Neukum, G; Jaumann, R; Hoffmann, H; Hauber, E; Head, J W; Basilevsky, A T; Ivanov, B A; Werner, S C; van Gasselt, S; Murray, J B; McCord, T

    2004-12-23

    The large-area coverage at a resolution of 10-20 metres per pixel in colour and three dimensions with the High Resolution Stereo Camera Experiment on the European Space Agency Mars Express Mission has made it possible to study the time-stratigraphic relationships of volcanic and glacial structures in unprecedented detail and give insight into the geological evolution of Mars. Here we show that calderas on five major volcanoes on Mars have undergone repeated activation and resurfacing during the last 20 per cent of martian history, with phases of activity as young as two million years, suggesting that the volcanoes are potentially still active today. Glacial deposits at the base of the Olympus Mons escarpment show evidence for repeated phases of activity as recently as about four million years ago. Morphological evidence is found that snow and ice deposition on the Olympus construct at elevations of more than 7,000 metres led to episodes of glacial activity at this height. Even now, water ice protected by an insulating layer of dust may be present at high altitudes on Olympus Mons.

  10. Vision restoration after brain and retina damage: the "residual vision activation theory".

    PubMed

    Sabel, Bernhard A; Henrich-Noack, Petra; Fedorov, Anton; Gall, Carolin

    2011-01-01

    Vision loss after retinal or cerebral visual injury (CVI) was long considered to be irreversible. However, there is considerable potential for vision restoration and recovery even in adulthood. Here, we propose the "residual vision activation theory" of how visual functions can be reactivated and restored. CVI is usually not complete, but some structures are typically spared by the damage. They include (i) areas of partial damage at the visual field border, (ii) "islands" of surviving tissue inside the blind field, (iii) extrastriate pathways unaffected by the damage, and (iv) downstream, higher-level neuronal networks. However, residual structures have a triple handicap to be fully functional: (i) fewer neurons, (ii) lack of sufficient attentional resources because of the dominant intact hemisphere caused by excitation/inhibition dysbalance, and (iii) disturbance in their temporal processing. Because of this resulting activation loss, residual structures are unable to contribute much to everyday vision, and their "non-use" further impairs synaptic strength. However, residual structures can be reactivated by engaging them in repetitive stimulation by different means: (i) visual experience, (ii) visual training, or (iii) noninvasive electrical brain current stimulation. These methods lead to strengthening of synaptic transmission and synchronization of partially damaged structures (within-systems plasticity) and downstream neuronal networks (network plasticity). Just as in normal perceptual learning, synaptic plasticity can improve vision and lead to vision restoration. This can be induced at any time after the lesion, at all ages and in all types of visual field impairments after retinal or brain damage (stroke, neurotrauma, glaucoma, amblyopia, age-related macular degeneration). If and to what extent vision restoration can be achieved is a function of the amount of residual tissue and its activation state. However, sustained improvements require repetitive

  11. Field-sequential stereo television

    NASA Technical Reports Server (NTRS)

    Perry, W. E.

    1974-01-01

    System includes viewing devices that provide low interference to normal vision. It provides a stereo display observable from a broader area. Left and right video cameras are focused on the object. Output signals from the cameras are time multiplexed. The multiplexed signal, fed to a standard television monitor, displays left and right images of the object.
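    The field-sequential principle — one video channel carrying left and right views alternately in time — can be sketched as follows, with plain objects standing in for video fields (a simplification of any real sync scheme):

```python
def multiplex(left_frames, right_frames):
    """Field-sequential stereo: interleave left and right fields in time
    onto a single video channel (toy model; fields here are any objects)."""
    stream = []
    for left, right in zip(left_frames, right_frames):
        stream.extend([left, right])
    return stream

def demultiplex(stream):
    """Recover the two eye streams: even slots are left, odd slots right."""
    return stream[0::2], stream[1::2]

stream = multiplex(["L0", "L1"], ["R0", "R1"])
left, right = demultiplex(stream)
```

    Shuttered viewing devices then open each eye only during its own fields, which is why such systems interfere little with normal vision when the shutters are idle.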

  12. Geometric Variational Methods for Controlled Active Vision

    DTIC Science & Technology

    2006-08-01

    S. Haker, L. Zhu, and A. Tannenbaum, "Optimal mass transport for registration and warping," Int. Journal of Computer Vision, volume 60, 2004, pp. 225-240. ... pp. 119-142. S. Angenent, S. Haker, and A. Tannenbaum, "Minimizing flows for the Monge-Kantorovich problem," SIAM J. Math. Analysis, volume 35. "Shape analysis of structures using spherical wavelets" (with S. Haker and D. Nain), Proceedings of MICCAI, 2005. "Affine surface evolution for 3D

  13. Stereo images from space

    NASA Astrophysics Data System (ADS)

    Sabbatini, Massimo; Collon, Maximilien J.; Visentin, Gianfranco

    2008-02-01

    The Erasmus Recording Binocular (ERB1) was the first fully digital stereo camera used on the International Space Station. One year after its first utilisation, the results and feedback collected with various audiences have convinced us to continue exploiting the outreach potential of such media, with its unique capability to bring space down to earth, to share the feeling of weightlessness and confinement with the viewers on earth. The production of stereo is progressing quickly but it still poses problems for the distribution of the media. The Erasmus Centre of the European Space Agency has experienced how difficult it is to master the full production and distribution chain of a stereo system. Efforts are also under way to standardize the satellite broadcasting part of the distribution. A new stereo camera is being built, ERB2, to be launched to the International Space Station (ISS) in September 2008: it shall have 720p resolution, it shall be able to transmit its images to the ground in real-time allowing the production of live programs, and it could possibly be used also outside the ISS, in support of Extra Vehicular Activities of the astronauts. These new features are quite challenging to achieve in the reduced power and mass budget available to space projects, and we hope to inspire more designers to come up with ingenious ideas to build cameras capable of operating in the harsh Low Earth Orbit environment: radiation, temperature, power consumption and thermal design are the challenges to be met. The intent of this paper is to share with the readers the experience collected so far in all aspects of the 3D video production chain and to increase awareness of the unique content that we are collecting: nice stereo images from space can be used by all actors in the stereo arena to gain consensus on this powerful media. With respect to last year we shall present the progress made in the following areas: a) the satellite broadcasting live of stereo content to D

  14. STEREO Education and Public Outreach Efforts

    NASA Technical Reports Server (NTRS)

    Kucera, Therese

    2007-01-01

    STEREO has had a big year this year with its launch and the start of data collection. STEREO has mostly focused on informal educational venues, most notably with STEREO 3D images made available to museums through the NASA Museum Alliance. Other activities have involved making STEREO imagery available though the AMNH network and Viewspace, continued partnership with the Christa McAuliffe Planetarium, data sonification projects, preservice teacher training, and learning activity development.

  15. The Sun in STEREO

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Parallax gives depth to life. Simultaneous viewing from slightly different vantage points makes binocular humans superior to monocular cyclopes, and fixes us in the third dimension of the Universe. We've been stunned by 3-d images of Venus and Mars (along with more familiar views of earth). Now astronomers plan to give us the best view of all, 3-d images of the dynamic Sun. That's one of the prime goals of NASA's Solar Terrestrial Relations Observatories, also known as STEREO. STEREO is a pair of spacecraft observatories, one placed in orbit in front of earth, and one to be placed in an earth-trailing orbit. Simultaneous observations of the Sun with the two STEREO spacecraft will provide extraordinary 3-d views of all types of solar activity, especially the dramatic events called coronal mass ejections which send high energy particles from the outer solar atmosphere hurtling towards earth. The image above is the first image of the Sun from the two STEREO spacecraft, an extreme ultraviolet shot of the Sun's million-degree corona, taken by the Extreme Ultraviolet Imager on the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) instrument package. STEREO's first 3-d solar images should be available in April if all goes well. Put on your red and blue glasses!

  16. Stereo depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Vonsydow, Marika

    1988-01-01

    In teleoperation, a typical application of stereo vision is to view a work space located short distances (1 to 3m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors are measured on the order of 2cm. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing. The results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration which gave high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions, but cause greater depth distortions. Thus with larger intercamera distances, operators will make greater depth errors (because of the greater distortions), but will be more certain that they are not errors (because of the higher resolution).
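    The resolution side of the trade-off the abstract describes follows from stereo camera geometry: to first order, the smallest resolvable depth step at range z behaves like dz ≈ z²·Δd / (f·b), so doubling the intercamera distance b halves dz (at the cost of the greater distortion the authors measure). A sketch under that small-disparity approximation; the parameter names are ours, not the paper's:

```python
def depth_resolution(z, baseline, focal, disparity_step):
    """First-order smallest resolvable depth change at range z for a given
    disparity quantum (e.g. one pixel): dz ~ z^2 * dd / (f * b).
    Approximation for small disparity steps; names are illustrative."""
    return z * z * disparity_step / (focal * baseline)

# At the paper's 1.4 m viewing distance, f = 1000 px, one-pixel steps:
narrow = depth_resolution(1.4, baseline=0.1, focal=1000.0, disparity_step=1.0)
wide = depth_resolution(1.4, baseline=0.2, focal=1000.0, disparity_step=1.0)
```

    Here `wide` is half of `narrow`, matching the abstract's observation that larger intercamera distances allow higher depth resolution.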

  17. Global vision systems regulatory and standard setting activities

    NASA Astrophysics Data System (ADS)

    Tiana, Carlo; Münsterer, Thomas

    2016-05-01

    A number of committees globally, and the Regulatory Agencies they support, are actively delivering and updating performance standards for vision systems: Enhanced, Synthetic and Combined, as they apply to both Fixed Wing and, more recently, Rotorcraft operations in low visibility. We provide an overview of each committee's present and past work, as well as an update on recent activities and future goals.

  18. Citizens' visions on active assisted living.

    PubMed

    Gudowsky, Niklas; Sotoudeh, Mahshid

    2015-01-01

    People aged 65 years and older are the fastest growing section of the population in many countries. Great hopes are projected on technology to support solutions for many of the challenges arising from this trend, thus making our lives more independent, more efficient and safer with a higher quality of life. But, as research and innovation ventures are often closely linked to the market, their focus may lead to biased planning in research and development as well as in policy-making with severe social and economic consequences. Thus the main research question concerned desirable settings of ageing in the future from different perspectives. The participatory foresight study CIVISTI-AAL cross-linked knowledge of lay persons, experts and stakeholders to include a wide variety of perspectives and values into productive long-term planning of research and development. Results include citizens' visions for autonomous living in 2050, implicitly and explicitly containing basic needs towards technological, social and organizational development as well as recommendations for implementation. Conclusions suggest that personalized health and living environments play an important part in the lay persons' view of aging in the future, but only if technologies support social and organizational innovations and yet do not neglect the importance of social affiliation and inclusion.

  19. Using perturbations to identify the brain circuits underlying active vision.

    PubMed

    Wurtz, Robert H

    2015-09-19

    The visual and oculomotor systems in the brain have been studied extensively in the primate. Together, they can be regarded as a single brain system that underlies active vision--the normal vision that begins with visual processing in the retina and extends through the brain to the generation of eye movement by the brainstem. The system is probably one of the most thoroughly studied brain systems in the primate, and it offers an ideal opportunity to evaluate the advantages and disadvantages of the series of perturbation techniques that have been used to study it. The perturbations have been critical in moving from correlations between neuronal activity and behaviour closer to a causal relation between neuronal activity and behaviour. The same perturbation techniques have also been used to tease out neuronal circuits that are related to active vision that in turn are driving behaviour. The evolution of perturbation techniques includes ablation of both cortical and subcortical targets, punctate chemical lesions, reversible inactivations, electrical stimulation, and finally the expanding optogenetic techniques. The evolution of perturbation techniques has supported progressively stronger conclusions about what neuronal circuits in the brain underlie active vision and how the circuits themselves might be organized.

  20. Topography from shading and stereo

    NASA Technical Reports Server (NTRS)

    Horn, Berthold P.; Caplinger, Michael

    1993-01-01

    Methods exploiting photometric information in images that have been developed in machine vision can be applied to planetary imagery. Present techniques, however, focus on one visual cue, such as shading or binocular stereo, and produce results that are either not very accurate in an absolute sense or provide information only at few points on the surface. We plan to integrate shape from shading, binocular stereo and photometric stereo to yield a robust system for recovering detailed surface shape and surface reflectance information. Such a system will be useful in producing quantitative information from the vast volume of imagery being received, as well as in helping visualize the underlying surface. The work will be carried out on a popular computing platform so that it will be easily accessible to other workers.

  1. Topography from shading and stereo

    NASA Technical Reports Server (NTRS)

    Horn, Berthold P.; Caplinger, Michael

    1992-01-01

    Methods exploiting photometric information in images that have been developed in machine vision can be applied to planetary imagery. Present techniques, however, focus on one visual cue, such as shading or binocular stereo, and produce results that are either not very accurate in an absolute sense or provide information only at few points on the surface. We plan to integrate shape from shading, binocular stereo and photometric stereo to yield a robust system for recovering detailed surface shape and surface reflectance information. Such a system will be useful in producing quantitative information from the vast volume of imagery being received, as well as in helping visualize the underlying surface. The work will be carried out on a popular computing platform so that it will be easily accessible to other workers.

  2. Teacher Activism: Enacting a Vision for Social Justice

    ERIC Educational Resources Information Center

    Picower, Bree

    2012-01-01

    This qualitative study focused on educators who participated in grassroots social justice groups to explore the role teacher activism can play in the struggle for educational justice. Findings show teacher activists made three overarching commitments: to reconcile their vision for justice with the realities of injustice around them; to work within…

  3. Robot Vision Library

    NASA Technical Reports Server (NTRS)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  4. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  5. #7 Comparing STEREO, Simulated Helioseismic Images

    NASA Video Gallery

    Farside direct observations from STEREO (left) and simultaneous helioseismic reconstructions (right). Medium to large size active regions clearly appear on the helioseismic images, however the smal...

  6. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

This paper combines a prism with a single camera to put forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we derive the relationship between the prism single-camera system and a dual-camera system; then, according to the principles of binocular vision, we derive the relationship between binocular viewing and a dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular viewing, and obtain the positions of the prism, camera, and object that give the best stereo display. Finally, using active shutter stereo glasses from NVIDIA, we realize a three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. A stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.

  7. Vector disparity sensor with vergence control for active vision systems.

    PubMed

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. Control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach is the best trade-off choice for integration with the active vision system.

  8. A Robot Vision System.

    DTIC Science & Technology

    1985-12-01

This project includes the design and implementation of a vision-based goal achievement system. ... Stereo vision is useless beyond about 15 feet for the camera separation of .75 feet ... Such monocular vision and modelling, duplicated for two cameras, would give a second source of model data for resolving ambiguities.

  9. Passive stereo range imaging for semi-autonomous land navigation

    NASA Technical Reports Server (NTRS)

    Matthies, Larry

    1992-01-01

    The paper examines the use of stereo vision (SV) for obstacle detection in semiautonomous land navigation. Feature-based and field-based paradigms for SV are reviewed. The paper presents stochastic models and simple, efficient stereo matching algorithms for the field-based approach and describes a near-real-time vision system using these algorithms. Experimental results illustrate aspects of the stochastic models and lead to the first semiautonomous traversals of natural terrain to use SV for obstacle detection.

  10. Active vision in marmosets: a model system for visual neuroscience.

    PubMed

    Mitchell, Jude F; Reynolds, John H; Miller, Cory T

    2014-01-22

    The common marmoset (Callithrix jacchus), a small-bodied New World primate, offers several advantages to complement vision research in larger primates. Studies in the anesthetized marmoset have detailed the anatomy and physiology of their visual system (Rosa et al., 2009) while studies of auditory and vocal processing have established their utility for awake and behaving neurophysiological investigations (Lu et al., 2001a,b; Eliades and Wang, 2008a,b; Osmanski and Wang, 2011; Remington et al., 2012). However, a critical unknown is whether marmosets can perform visual tasks under head restraint. This has been essential for studies in macaques, enabling both accurate eye tracking and head stabilization for neurophysiology. In one set of experiments we compared the free viewing behavior of head-fixed marmosets to that of macaques, and found that their saccadic behavior is comparable across a number of saccade metrics and that saccades target similar regions of interest including faces. In a second set of experiments we applied behavioral conditioning techniques to determine whether the marmoset could control fixation for liquid reward. Two marmosets could fixate a central point and ignore peripheral flashing stimuli, as needed for receptive field mapping. Both marmosets also performed an orientation discrimination task, exhibiting a saturating psychometric function with reliable performance and shorter reaction times for easier discriminations. These data suggest that the marmoset is a viable model for studies of active vision and its underlying neural mechanisms.

  11. Stereo visualization in the ground segment tasks of the science space missions

    NASA Astrophysics Data System (ADS)

    Korneva, Natalia; Nazarov, Vladimir; Mogilevsky, Mikhail; Nazirov, Ravil

The ground segment is one of the key components of any science space mission; its functionality substantially defines the scientific effectiveness of the experiment as a whole. Its distinguishing feature, in contrast to the other information systems of scientific space projects, is the interaction between the researcher and the project information system in order to interpret the data obtained during experiments. The ability to visualize the data being processed is therefore an essential prerequisite for ground segment software, and the use of modern technological solutions and approaches in this area will increase the science return in general and provide a framework for the creation of new experiments. For the most part, 2D and 3D graphics are used to visualize the data being processed, a limitation imposed by the capabilities of traditional visualization tools. Stereo visualization methods are also actively applied to some tasks, but their usage is usually limited to areas such as virtual and augmented reality, remote sensing data processing, and the like. The low prevalence of stereo visualization methods in science ground segment tasks is primarily explained by the extremely high cost of the necessary hardware. Recently, however, low-cost hardware solutions for stereo visualization based on the page-flip method of view separation have appeared. It therefore seems promising to use stereo visualization as an instrument for investigating a wide range of problems, mainly the stereo visualization of complex physical processes as well as mathematical abstractions and models. The article describes an attempt to use this approach: the details and problems of using stereo visualization (the page-flip method based on the NVIDIA 3D Vision Kit and a GeForce graphics processor) to display datasets of magnetospheric satellite onboard measurements, and in the development of software for manual stereo matching.

  12. STEREO-IMPACT Education and Public Outreach: Sharing STEREO Science

    NASA Astrophysics Data System (ADS)

    Craig, N.; Peticolas, L. M.; Mendez, B. J.

    2005-12-01

The Solar TErrestrial RElations Observatory (STEREO) is scheduled for launch in Spring 2006. STEREO will study the Sun with two spacecraft in orbit around it, on either side of Earth. The primary science goal is to understand the nature and consequences of Coronal Mass Ejections (CMEs). Despite their importance, scientists don't fully understand the origin and evolution of CMEs, nor their structure or extent in interplanetary space. STEREO's unique 3-D images of the structure of CMEs will enable scientists to determine their fundamental nature and origin. We will discuss the Education and Public Outreach (E/PO) program for the In-situ Measurement of Particles And CME Transients (IMPACT) suite of instruments aboard the two spacecraft and give examples of upcoming activities, including NASA's Sun-Earth Day events, which are scheduled to coincide with a total solar eclipse in March. This event offers a good opportunity to engage the public in STEREO science, because an eclipse allows one to see the solar corona, from where CMEs erupt. STEREO's connection to space weather lends itself to close partnerships with the Sun-Earth Connection Education Forum (SECEF), The Exploratorium, and UC Berkeley's Center for New Music and Audio Technologies to develop informal science programs for science centers, museum visitors, and the public in general. We will also discuss our teacher workshops, held locally in California and at annual conferences such as those of the National Science Teachers Association. Such workshops often focus on magnetism and its connection to CMEs and Earth's magnetic field, leading to the questions STEREO scientists hope to answer. The importance of partnerships and coordination in an instrument E/PO program that is part of a bigger NASA mission with many instrument suites and many PIs will be emphasized. The Education and Public Outreach program is funded by NASA's SMD.

  13. Range gated active night vision system for automobiles.

    PubMed

    David, Ofer; Kopeika, Norman S; Weizer, Boaz

    2006-10-01

Night vision is an emerging safety feature being introduced for automobiles. We develop what we believe is an innovative night vision system using gated imaging principles. The concept of gated imaging is described along with its basic advantages, including the backscatter reduction mechanism for improved vision through fog, rain, and snow. Performance is evaluated by analyzing bar-pattern modulation and comparing against Johnson chart predictions.
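As a rough illustration of the range-gating timing that the abstract describes (a sketch under my own assumptions; the function name and range values are hypothetical, not from the paper): the sensor's shutter is opened only after light has had time to make the round trip to the nearest range of interest, so near-field backscatter from fog or rain never reaches the detector.

```python
C = 299_792_458.0  # speed of light, m/s

def gate_timing(r_min_m, r_max_m):
    """Shutter open/close delays (seconds after the laser pulse) for a
    range gate that accepts returns only from the slice [r_min_m, r_max_m]."""
    return 2.0 * r_min_m / C, 2.0 * r_max_m / C

# Gate a slice of road from 50 m to 200 m ahead of the vehicle.
t_open, t_close = gate_timing(50.0, 200.0)  # roughly 0.33 us and 1.33 us
```

Backscatter from droplets closer than 50 m returns before `t_open` and is simply never integrated, which is the backscatter-reduction mechanism the abstract credits for improved vision in bad weather.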

  14. Adaptive machine vision. Annual report

    SciTech Connect

    Stoner, W.W.; Brill, M.H.; Bergeron, D.W.

    1988-03-08

    The mission of the Strategic Defense Initiative is to develop defenses against threatening ballistic missiles. There are four distinct phases to the SDI defense; boost, post-boost, midcourse and terminal. In each of these phases, one or more machine-vision functions are required, such as pattern recognition, stereo image fusion, clutter rejection and discrimination. The SDI missions of coarse track, stereo track and discrimination are examined here from the point of view of a machine-vision system.

  15. Object detection with single camera stereo

    NASA Astrophysics Data System (ADS)

    McBride, J.; Snorrason, M.; Eaton, R.; Checka, N.; Reiter, A.; Foil, G.; Stevens, M. R.

    2006-05-01

    Many fielded mobile robot systems have demonstrated the importance of directly estimating the 3D shape of objects in the robot's vicinity. The most mature solutions available today use active laser scanning or stereo camera pairs, but both approaches require specialized and expensive sensors. In prior publications, we have demonstrated the generation of stereo images from a single very low-cost camera using structure from motion (SFM) techniques. In this paper we demonstrate the practical usage of single-camera stereo in real-world mobile robot applications. Stereo imagery tends to produce incomplete 3D shape reconstructions of man-made objects because of smooth/glary regions that defeat stereo matching algorithms. We demonstrate robust object detection despite such incompleteness through matching of simple parameterized geometric models. Results are presented where parked cars are detected, and then recognized via license plate recognition, all in real time by a robot traveling through a parking lot.

  16. Vision-Based Autonomous Sensor-Tasking in Uncertain Adversarial Environments

    DTIC Science & Technology

    2015-01-02

registration technique with an application to stereo vision. Proceedings of Imaging Understanding Workshop, pages 121-130, 1981. [17] S. P. Meyn, A... forecast activities, and analyze complex scenes with multiple interacting entities. Specific applications include autonomous aerial surveillance systems that cover broad areas of military operations, camera security systems that cover

  17. NASA's STEREO Mission

    NASA Technical Reports Server (NTRS)

    Kucera, T. A.

    2011-01-01

NASA's STEREO (Solar TErrestrial RElations Observatory) mission consists of two nearly identical spacecraft hosting an array of in situ and imaging instruments for studying the Sun and heliosphere. Launched in 2006 and in orbit about the Sun near 1 AU, the spacecraft are now swinging towards the far side of the Sun. I will provide the latest information with regard to STEREO space weather data and recent STEREO research.

  18. Attenuating Stereo Pixel-Locking via Affine Window Adaptation

    NASA Technical Reports Server (NTRS)

    Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.

    2006-01-01

For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking,' which produces artificially peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it can correct much larger initial disparity errors than previous approaches and is more general in that it applies beyond the ground plane.
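The standard baseline that the paper improves on (not its proposed Lucas-Kanade-style method) can be sketched as follows: fit a parabola through the aggregated matching cost at the integer minimum and its two neighbours, and take the vertex as the sub-pixel disparity. It is this interpolation step that introduces the pixel-locking bias. A minimal sketch, with hypothetical cost values:

```python
import numpy as np

def subpixel_disparity(cost, d0):
    """Refine an integer disparity d0 by fitting a parabola through the
    matching costs at d0 - 1, d0, d0 + 1 and returning its vertex."""
    c_m, c_0, c_p = cost[d0 - 1], cost[d0], cost[d0 + 1]
    denom = c_m - 2.0 * c_0 + c_p
    if denom == 0.0:          # degenerate (flat) cost curve
        return float(d0)
    return d0 + 0.5 * (c_m - c_p) / denom

# Hypothetical aggregated costs; the integer minimum is at disparity 5.
cost = np.array([9.0, 7.0, 5.0, 3.0, 1.5, 0.9, 1.1, 3.0, 6.0])
d_sub = subpixel_disparity(cost, int(np.argmin(cost)))  # -> 5.25
```

Because the vertex offset is a smooth function of three cost samples, estimates cluster around integer disparities for many real cost surfaces, producing the peaked sub-pixel histograms the paper calls pixel-locking.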

  19. Active vision system integrating fast and slow processes

    NASA Astrophysics Data System (ADS)

    Castrillon-Santana, Modesto; Guerra-Artal, C.; Hernandez-Sosa, J.; Dominguez-Brito, A.; Isern-Gonzalez, J.; Cabrera-Gamez, Jorge; Hernandez-Tejera, F. M.

    1998-10-01

This paper describes an Active Vision System whose design assumes a distinction between fast (reactive) and slow (background) processes. Fast processes must operate in cycles with critical timeouts that may affect system stability, while slow processes, though necessary, do not compromise system stability if their execution is delayed. Based on this simple taxonomy, a control architecture has been proposed and a prototype implemented that is able to track people in real time with a robotic head while trying to identify the target. In this system, tracking the moving target is considered the reactive part of the system, while person identification is considered a background task. This demonstrator has been developed using a new-generation DSP (TMS320C80) as a specialized coprocessor for the fast processes, and a commercial robotic head with a dedicated DSP-based motor controller. These subsystems are hosted by a standard Pentium Pro PC running Windows NT, where the slow processes are executed. The flexibility achieved in the design phase and the preliminary results obtained so far seem to validate the approach followed to integrate time-critical and slow tasks on a heterogeneous hardware platform.

  20. Acceleration of Stereo Correlation in Verilog

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos

    2006-01-01

    To speed up vision processing in low speed, low power devices, embedding FPGA hardware is becoming an effective way to add processing capability. FPGAs offer the ability to flexibly add parallel and/or deeply pipelined computation to embedded processors without adding significantly to the mass and power requirements of an embedded system. This paper will discuss the JPL stereo vision system, and describe how a portion of that system was accelerated by using custom FPGA hardware to process the computationally intensive portions of JPL stereo. The architecture described takes full advantage of the ability of an FPGA to use many small computation elements in parallel. This resulted in a 16 times speedup in real hardware over using a simple linear processor to compute image correlation and disparity.
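For reference, the window-correlation kernel that this kind of FPGA design parallelizes can be written in software as a brute-force sum-of-absolute-differences (SAD) search. This is an illustrative serial CPU sketch under my own assumptions, not the JPL implementation:

```python
import numpy as np

def sad_disparity(left, right, max_d, w=1):
    """Winner-take-all SAD correlation: for each left-image pixel, slide a
    (2w+1)x(2w+1) window across candidate disparities 0..max_d in the
    right image and keep the disparity with the lowest cost."""
    h, W = left.shape
    disp = np.zeros((h, W), dtype=np.int32)
    for y in range(w, h - w):
        for x in range(w + max_d, W - w):
            patch = left[y - w:y + w + 1, x - w:x + w + 1].astype(np.int32)
            best_cost, best_d = np.inf, 0
            for d in range(max_d + 1):
                cand = right[y - w:y + w + 1,
                             x - d - w:x - d + w + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right image is the left shifted by a disparity of 2.
left = np.add.outer(3 * np.arange(12), 7 * np.arange(24)).astype(np.uint8)
right = np.zeros_like(left)
right[:, :-2] = left[:, 2:]
disp = sad_disparity(left, right, max_d=4)  # interior pixels -> 2
```

An FPGA evaluates many of these window sums per clock cycle in parallel pipelines; the serial sketch only shows what is being computed, which is why hardware acceleration yields the kind of speedup reported above.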

  1. Improved stereo matching applied to digitization of greenhouse plants

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Xu, Lihong; Li, Dawei; Gu, Xiaomeng

    2015-03-01

The digitization of greenhouse plants is an important aspect of digital agriculture. Its ultimate aim is to reconstruct a visible and interoperable virtual plant model on the computer by using state-of-the-art image processing and computer graphics technologies. The most prominent difficulties in the digitization of greenhouse plants are how to acquire the three-dimensional shape data of greenhouse plants and how to carry out a realistic stereo reconstruction. Concerning these issues, this paper proposes an effective method for the digitization of greenhouse plants using a binocular stereo vision system. Stereo vision is a technique that infers depth information from two or more cameras; it consists of four parts: calibration of the cameras, stereo rectification, search for stereo correspondences, and triangulation. Through the final triangulation procedure, the 3D point cloud of the plant can be obtained. The proposed stereo vision system can facilitate further segmentation of plant organs such as stems and leaves; moreover, it can provide reliable digital samples for the visualization of greenhouse tomato plants.
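For a calibrated and rectified rig, the final triangulation step in the four-part pipeline above reduces to the familiar depth-from-disparity relation Z = f·B/d. A minimal sketch (the focal length, baseline, and pixel coordinates are made-up values, not from the paper):

```python
import numpy as np

def triangulate(u_left, u_right, v, f, baseline, cx, cy):
    """Back-project one rectified stereo correspondence to a 3D point.
    f is the focal length in pixels; baseline is in metres."""
    d = u_left - u_right          # horizontal disparity in pixels
    Z = f * baseline / d          # depth along the optical axis
    X = (u_left - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

# Assumed rig: f = 800 px, 10 cm baseline, principal point at (320, 240).
point = triangulate(u_left=360.0, u_right=340.0, v=240.0,
                    f=800.0, baseline=0.1, cx=320.0, cy=240.0)
# disparity of 20 px -> depth of 4.0 m
```

Applying this to every matched pixel yields the plant's 3D point cloud that the abstract describes.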

  2. Solar Eclipse, STEREO Style

    NASA Technical Reports Server (NTRS)

    2007-01-01

There was a transit of the Moon across the face of the Sun, but it could not be seen from Earth. This sight was visible only from the STEREO-B spacecraft in its orbit about the Sun, trailing behind the Earth. NASA's STEREO mission consists of two spacecraft launched in October 2006 to study solar storms. The transit started at 1:56 am EST and continued for 12 hours until 1:57 pm EST. STEREO-B is currently about 1 million miles from the Earth, 4.4 times farther from the Moon than we are on Earth. As a result, the Moon appears 4.4 times smaller than what we are used to. This is still, however, much larger than, say, the planet Venus appeared when it transited the Sun as seen from Earth in 2004. This alignment of STEREO-B and the Moon is not just due to luck; it was arranged with a small tweak to STEREO-B's orbit last December. The transit is quite useful to STEREO scientists for measuring the focus and the amount of scattered light in the STEREO imagers and for determining the pointing of the STEREO coronagraphs. The Sun as it appears in these images, and in each frame of the movie, is a composite of nearly simultaneous images in four different wavelengths of extreme ultraviolet light that were separated into color channels and then recombined with some level of transparency for each.

  3. Stereo Painting Display Devices

    NASA Astrophysics Data System (ADS)

    Shafer, David

    1982-06-01

The Spanish Surrealist artist Salvador Dali has recently perfected the art of producing two paintings which are stereo pairs. Each painting is separately quite remarkable, presenting a subject with the vivid realism and clarity for which Dali is famous. Due to the surrealistic themes of Dali's art, however, the subjects presented with such naturalism exist only in his imagination. Despite this considerable obstacle to producing stereo art, Dali has managed to paint stereo pairs that display subtle differences of coloring and lighting, in addition to the essential perspective differences. These stereo paintings require a display method that will allow the viewer to experience stereo fusion, but which will not degrade the high quality of the art work. This paper gives a review of several display methods that seem promising in terms of economy, size, adjustability, and image quality.

  4. ARPA Unmanned Ground Vehicle Stereo Vision Program

    DTIC Science & Technology

    1994-03-01

...off-the-shelf components and makes extensive use of field programmable gate arrays (FPGAs) to achieve high performance while maximizing flexibility in... Stanley J. Rosenschein, Hans Thomas, Matthew Turk, Monnett Soldo, Teleos Research, 576 Middlefield Road, Palo Alto, CA 94301... In the following subsection, we present an approach to thinking about perceptual measurement which makes the claim that simpler is better. The idea

  5. JAVA Stereo Display Toolkit

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2008-01-01

This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware using the same API (application-program interface), and has the ability to simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that accomplishes simply the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, a 3D cursor, or overlays, all of which can be built using this toolkit.
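The color-anaglyph trick described above (the red band of the left image combined with the green/blue bands of the right) is straightforward to express on image arrays. A sketch in NumPy rather than the toolkit's Java, with made-up pixel values:

```python
import numpy as np

def color_anaglyph(left_rgb, right_rgb):
    """Combine the red channel of the left view with the green and blue
    channels of the right view, simulating color stereo in anaglyph mode."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # channel 0 = red, taken from the left eye
    return out

# Tiny synthetic views (H x W x RGB).
left = np.zeros((2, 2, 3), dtype=np.uint8)
left[..., 0] = 200                   # red information from the left image
right = np.zeros((2, 2, 3), dtype=np.uint8)
right[..., 1], right[..., 2] = 150, 50
anaglyph = color_anaglyph(left, right)
```

Viewed through red/blue glasses, each eye then receives its own view while the fused image retains approximate color, which is the effect the toolkit simulates on standard displays.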

  6. A computational model for dynamic vision

    NASA Technical Reports Server (NTRS)

    Moezzi, Saied; Weymouth, Terry E.

    1990-01-01

    This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore the paradigm is ideal for an active vision system where camera vergence changes dynamically. Its basis is the retinotopically indexed object-centered encoding of the early visual information. Specifically, the relative distances of objects to a set of referents is encoded in image registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.

  7. Active vision task and postural control in healthy, young adults: Synergy and probably not duality.

    PubMed

    Bonnet, Cédrick T; Baudry, Stéphane

    2016-07-01

In upright stance, individuals sway continuously, and the sway pattern in dual tasks (e.g., a cognitive task performed in upright stance) differs significantly from that observed during the control quiet-stance task. The cognitive approach has generated models (limited attentional resources, U-shaped nonlinear interaction) to explain such patterns based on competitive sharing of attentional resources. The objective of the current manuscript was to review these cognitive models in the specific context of visual tasks involving gaze shifts toward precise targets (here called active vision tasks). The selection excluded the effects of early and late stages of life or disease, external perturbations, active vision tasks requiring head and body motions, and the combination of two tasks performed together (e.g., a visual task in addition to computation in one's head). The selection included studies performed with healthy, young adults using control and active (difficult) vision tasks. Of the 174 studies found in the Pubmed and Mendeley databases, nine were selected. In these studies, young adults exhibited significantly lower amplitude of body displacement (center of pressure and/or body marker) under active vision tasks than under the control task. Furthermore, the more difficult the active vision tasks were, the better the postural control was. This underscores that postural control during active vision tasks may rely on synergistic relations between the postural and visual systems rather than on competitive or dual relations. In contrast, in the control task, there would not be any synergistic or competitive relations.

  8. STEREO Sun360 Teaser

    NASA Video Gallery

    For the past 4 years, the two STEREO spacecraft have been moving away from Earth and gaining a more complete picture of the sun. On Feb. 6, 2011, NASA will reveal the first ever images of the entir...

  9. Stereo Measurements from Satellites

    NASA Technical Reports Server (NTRS)

    Adler, R.

    1982-01-01

    The papers in this presentation include: 1) 'Stereographic Observations from Geosynchronous Satellites: An Important New Tool for the Atmospheric Sciences'; 2) 'Thunderstorm Cloud Top Ascent Rates Determined from Stereoscopic Satellite Observations'; 3) 'Artificial Stereo Presentation of Meteorological Data Fields'.

  10. Holographic optogenetic stimulation of patterned neuronal activity for vision restoration.

    PubMed

    Reutsky-Gefen, Inna; Golan, Lior; Farah, Nairouz; Schejter, Adi; Tsur, Limor; Brosh, Inbar; Shoham, Shy

    2013-01-01

    When natural photoreception is disrupted, as in outer-retinal degenerative diseases, artificial stimulation of surviving nerve cells offers a potential strategy for bypassing compromised neural circuits. Recently, light-sensitive proteins that photosensitize quiescent neurons have generated unprecedented opportunities for optogenetic neuronal control, inspiring early development of optical retinal prostheses. Selectively exciting large neural populations is essential for eliciting meaningful perceptions in the brain. Here we provide the first demonstration of holographic photo-stimulation strategies for bionic vision restoration. In blind retinas, we demonstrate reliable holographically patterned optogenetic stimulation of retinal ganglion cells with millisecond temporal precision and cellular resolution. Holographic excitation strategies could enable flexible control over distributed neuronal circuits, potentially paving the way towards high-acuity vision restoration devices and additional medical and scientific neuro-photonics applications.

  11. Accuracy and robustness evaluation in stereo matching

    NASA Astrophysics Data System (ADS)

    Nguyen, Duc M.; Hanca, Jan; Lu, Shao-Ping; Schelkens, Peter; Munteanu, Adrian

    2016-09-01

    Stereo matching has received a lot of attention from the computer vision community, thanks to its wide range of applications. Despite the large variety of algorithms that have been proposed so far, it is not trivial to select suitable algorithms for the construction of practical systems. One of the main problems is that many algorithms lack sufficient robustness when employed in various operational conditions. This is because most of the methods proposed in the literature are tested and tuned to perform well on one specific dataset. To alleviate this problem, an extensive evaluation of state-of-the-art stereo matching algorithms in terms of accuracy and robustness is presented. Three datasets (Middlebury, KITTI, and MPEG FTV) representing different operational conditions are employed. Based on the analysis, improvements over existing algorithms have been proposed. The experimental results show that our improved versions of the cross-based and cost volume filtering algorithms outperform the original versions by large margins on the Middlebury and KITTI datasets. In addition, the latter of the two proposed algorithms ranks among the best local stereo matching approaches on the KITTI benchmark. Under evaluations using settings specific to depth-image-based-rendering applications, our improved belief propagation algorithm is less complex than MPEG's FTV depth estimation reference software (DERS), while yielding similar depth estimation performance. Finally, several conclusions on stereo matching algorithms are also presented.
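
    For context on the local methods evaluated in records like this one, a minimal local stereo matcher reduces to a sum-of-absolute-differences (SAD) cost volume followed by winner-takes-all selection. The sketch below is an illustrative baseline under simplifying assumptions (grayscale rectified images, integer disparities; function names are ours), not any of the paper's algorithms:

```python
import numpy as np

def box_filter(img, r):
    """Sum of img over a (2r+1) x (2r+1) window at every pixel, via an integral image."""
    pad = np.pad(img, ((r + 1, r), (r + 1, r)), mode='edge')
    ii = pad.cumsum(axis=0).cumsum(axis=1)
    k = 2 * r + 1
    return ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]

def disparity_wta(left, right, max_disp, r=2):
    """SAD cost volume aggregated over windows, then winner-takes-all disparity."""
    h, w = left.shape
    cost = np.empty((max_disp + 1, h, w))
    for d in range(max_disp + 1):
        shifted = np.empty_like(right)
        shifted[:, d:] = right[:, :w - d]   # right pixel x-d matches left pixel x
        shifted[:, :d] = right[:, :1]       # replicate border for small x
        cost[d] = box_filter(np.abs(left - shifted), r)
    return cost.argmin(axis=0)
```

    On a synthetic pair where the right image is the left shifted by a constant disparity, the interior of the recovered map equals that disparity.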

  12. Hybrid-Based Dense Stereo Matching

    NASA Astrophysics Data System (ADS)

    Chuang, T. Y.; Ting, H. W.; Jaw, J. J.

    2016-06-01

    Stereo matching that generates accurate and dense disparity maps is an indispensable technique for 3D exploitation of imagery in the fields of computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes still lead to problematic issues and await better treatment. In this paper, a hybrid method based on semi-global matching (SGM) is presented to tackle the challenges of dense stereo matching. To reduce the sensitivity of SGM cost aggregation to its penalty parameters, a formal way of providing proper penalty estimates is proposed. To this end, the study employs shape-adaptive cross-based matching with an edge constraint to generate an initial disparity map for penalty estimation. Image edges, indicating the potential locations of occlusions as well as disparity discontinuities, are detected by the edge drawing algorithm to ensure that local support regions do not cover significant disparity changes. Besides, an additional penalty parameter P_e is imposed on the energy function of the SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting values derived from both the SGM cost aggregation and U-SURF matching, providing more reliable estimates in disparity discontinuity areas. Evaluations on the Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potential of the hybrid dense stereo matching method.
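
    The penalty scheme discussed above extends standard SGM cost aggregation. For reference, the usual single-path recursion (Hirschmuller's, with small-change penalty P1 and large-jump penalty P2; the paper's additional edge penalty P_e is not reproduced here) can be sketched as:

```python
import numpy as np

def sgm_aggregate_lr(cost, P1=1.0, P2=8.0):
    """One SGM path (left to right): L(p,d) = C(p,d)
    + min(L(q,d), L(q,d-1)+P1, L(q,d+1)+P1, min_k L(q,k)+P2) - min_k L(q,k),
    where q is the previous pixel on the path and cost has shape (H, W, D)."""
    h, w, D = cost.shape
    L = np.empty_like(cost)
    L[:, 0] = cost[:, 0]
    for x in range(1, w):
        prev = L[:, x - 1]                                    # (H, D)
        prev_min = prev.min(axis=1, keepdims=True)
        dm1 = np.pad(prev, ((0, 0), (1, 0)), constant_values=np.inf)[:, :D] + P1
        dp1 = np.pad(prev, ((0, 0), (0, 1)), constant_values=np.inf)[:, 1:] + P1
        best = np.minimum(np.minimum(prev, dm1), np.minimum(dp1, prev_min + P2))
        L[:, x] = cost[:, x] + best - prev_min
    return L
```

    In a full SGM implementation this recursion is run along several path directions and the aggregated costs are summed before the disparity is selected.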

  13. Vision models for 3D surfaces

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda

    1992-11-01

    Different approaches to computational stereo to represent human stereo vision have been developed over the past two decades. The Marr-Poggio theory is probably the most widely accepted model of human stereo vision. However, recently developed motion stereo models, which use a sequence of images taken by either a moving camera or of a moving object, provide an alternative method of achieving multi-resolution matching without the use of Laplacian of Gaussian (LOG) operators. When using image sequences, the baseline between two camera positions for an image pair is changed for the subsequent image pair so as to achieve a different resolution for each image pair. Having different baselines also avoids the occlusion problem inherent in stereo vision models. The advantage of using multi-resolution images acquired by cameras positioned at different baselines over those obtained with LOG operators is that one does not encounter the spurious edges often created by zero-crossings in the LOG-operated images. Therefore, in designing a computer vision system, a motion stereo model is more appropriate than a stereo vision model. However, in some applications where only a stereo pair of images is available, recovery of 3D surfaces of natural scenes is possible in a computationally efficient manner by using cepstrum matching and regularization techniques. Section 2 of this paper describes a motion stereo model using multi-scale cepstrum matching for the detection of disparity between image pairs in a sequence of images and the subsequent recovery of 3D surfaces from a depth map obtained by a non-convergent triangulation technique. Section 3 presents a 3D surface recovery technique from a stereo pair using cepstrum matching for disparity detection and cubic B-splines for surface smoothing. Section 4 contains the results of 3D surface recovery using both of the techniques mentioned above. Section 5 discusses the merit of 2D cepstrum matching and cubic B
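
    The cepstrum matching mentioned above can be illustrated in one dimension: the power cepstrum of the sum of a signal and its shifted copy peaks at the shift, which serves as the disparity estimate. This is a toy sketch of the idea (names and the epsilon guard are ours), not the paper's multi-scale implementation:

```python
import numpy as np

def cepstrum_disparity(left_row, right_row, max_disp):
    """The power cepstrum of s(x) + s(x - d) peaks at quefrency d, so the
    argmax over candidate quefrencies 1..max_disp estimates the disparity."""
    composite = left_row + right_row
    power = np.abs(np.fft.rfft(composite)) ** 2
    cep = np.fft.irfft(np.log(power + 1e-12))   # guard against log(0)
    return int(np.argmax(cep[1:max_disp + 1])) + 1
```

    The quefrency-0 bin (overall log power) is excluded; harmonics of the true shift appear at smaller amplitude and do not dominate the argmax.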

  14. Hardware-Efficient Bilateral Filtering for Stereo Matching.

    PubMed

    Yang, Qingxiong

    2014-05-01

    This paper presents a new bilateral filtering method specially designed for practical stereo vision systems. Parallel algorithms are preferred in these systems due to the real-time performance requirement. Edge-preserving filters like the bilateral filter have been demonstrated to be very effective for high-quality local stereo matching. A hardware-efficient bilateral filter is thus proposed in this paper. Implemented on an NVIDIA GeForce GTX 580 GPU, it can process a one-megapixel color image at around 417 frames per second. This filter can be directly used for the cost aggregation required in any local stereo matching algorithm. Quantitative evaluation shows that it outperforms all other local stereo methods in terms of both accuracy and speed on the Middlebury benchmark. It ranks 12th out of over 120 methods on the Middlebury data sets, and the average runtime (including matching cost computation, occlusion handling, and post-processing) is only 15 milliseconds (67 frames per second).
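
    For illustration, brute-force bilateral aggregation of one disparity slice of a cost volume might look as follows; the paper's contribution is a hardware-efficient approximation of exactly this operation, which the sketch below (with our own names and parameter defaults) does not reproduce:

```python
import numpy as np

def bilateral_aggregate(cost_slice, guide, sigma_s=3.0, sigma_r=0.1, r=4):
    """Aggregate one disparity slice with bilateral weights from a grayscale
    guidance image: spatial Gaussian x range (intensity) Gaussian."""
    h, w = guide.shape
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    gpad = np.pad(guide, r, mode='edge')
    cpad = np.pad(cost_slice, r, mode='edge')
    out = np.empty_like(cost_slice)
    for y in range(h):
        for x in range(w):
            g = gpad[y:y + 2 * r + 1, x:x + 2 * r + 1]
            wgt = spatial * np.exp(-(g - guide[y, x]) ** 2 / (2 * sigma_r ** 2))
            out[y, x] = (wgt * cpad[y:y + 2 * r + 1, x:x + 2 * r + 1]).sum() / wgt.sum()
    return out
```

    The range term is what makes the filter edge-preserving: costs are not mixed across intensity edges of the guide image, which is why disparity boundaries survive aggregation.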

  15. Digital stereoscopic photography using StereoData Maker

    NASA Astrophysics Data System (ADS)

    Toeppen, John; Sykes, David

    2009-02-01

    Stereoscopic digital photography has become much more practical with the use of USB wired connections between a pair of Canon cameras using StereoData Maker software for precise synchronization. StereoPhoto Maker software is now used to automatically combine and align right and left image files to produce a stereo pair. Side by side images are saved as pairs and may be viewed using software that converts the images into the preferred viewing format at the time of display. Stereo images may be shared on the internet, displayed on computer monitors, autostereo displays, viewed on high definition 3D TVs, or projected for a group. Stereo photographers are now free to control composition using point and shoot settings, or are able to control shutter speed, aperture, focus, ISO, and zoom. The quality of the output depends on the developed skills of the photographer as well as their understanding of the software, human vision and the geometry they choose for their cameras and subjects. Observers of digital stereo images can zoom in for greater detail and scroll across large panoramic fields with a few keystrokes. The art, science, and methods of taking, creating and viewing digital stereo photos are presented in a historic and developmental context in this paper.

  16. Stereo and IMU-Assisted Visual Odometry for Small Robots

    NASA Technical Reports Server (NTRS)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320 × 240) resolution, or 8 fps at VGA (Video Graphics Array, 640 × 480) resolution, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.

  17. Application of Stereo-Imaging Technology to Medical Field

    PubMed Central

    Nam, Kyoung Won; Park, Jeongyun; Kim, In Young

    2012-01-01

    Objectives There has been continuous development in the area of stereoscopic medical imaging devices, and many stereoscopic imaging devices have been realized and applied in the medical field. In this article, we review past and current trends pertaining to the application of stereo-imaging technologies in the medical field. Methods We describe the basic principles of stereo vision and visual issues related to it, including visual discomfort, binocular disparities, vergence-accommodation mismatch, and visual fatigue. We also present a brief history of medical applications of stereo-imaging techniques, examples of recently developed stereoscopic medical devices, and patent application trends as they pertain to stereo-imaging medical devices. Results Three-dimensional (3D) stereo-imaging technology can provide more realistic depth perception to the viewer than conventional two-dimensional imaging technology. Therefore, it allows for a more accurate understanding and analysis of the morphology of an object. Based on these advantages, the significance of stereoscopic imaging in the medical field increases in accordance with the increasing number of laparoscopic surgeries, and stereo-imaging technology plays a key role in the diagnosis of the detailed morphologies of small biological specimens. Conclusions The application of 3D stereo-imaging technology to the medical field will help improve surgical accuracy, reduce operation times, and enhance patient safety. Therefore, it is important to develop more enhanced stereoscopic medical devices. PMID:23115737

  18. Exploring techniques for vision based human activity recognition: methods, systems, and evaluation.

    PubMed

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-25

    With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activity, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation towards the performance of human activity recognition.

  19. Stereo Orthogonal Axonometric Perspective for the Teaching of Descriptive Geometry

    ERIC Educational Resources Information Center

    Méxas, José Geraldo Franco; Guedes, Karla Bastos; Tavares, Ronaldo da Silva

    2015-01-01

    Purpose: The purpose of this paper is to present the development of a software for stereo visualization of geometric solids, applied to the teaching/learning of Descriptive Geometry. Design/methodology/approach: The paper presents the traditional method commonly used in computer graphic stereoscopic vision (implemented in C language) and the…

  20. A stereo-compound hybrid microscope for combined intracellular and optical recording of invertebrate neural network activity

    PubMed Central

    Frost, William N.; Wang, Jean; Brandon, Christopher J.

    2007-01-01

    Optical recording studies of invertebrate neural networks with voltage-sensitive dyes seldom employ conventional intracellular electrodes. This may in part be due to the traditional reliance on compound microscopes for such work. While such microscopes have high light-gathering power, they do not provide depth of field, making working with sharp electrodes difficult. Here we describe a hybrid microscope design, with switchable compound and stereo objectives, that eases the use of conventional intracellular electrodes in optical recording experiments. We use it, in combination with a voltage-sensitive dye and photodiode array, to identify neurons participating in the swim motor program of the marine mollusk Tritonia. This microscope design should be applicable to optical recording studies in many preparations. PMID:17306887

  1. Applications of artificial intelligence 1993: Machine vision and robotics; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    SciTech Connect

    Boyer, K.L.; Stark, L.

    1993-01-01

    Various levels of machine vision and robotics are addressed, including object recognition, image feature extraction, active vision, stereo and matching, range image acquisition and analysis, sensor models, motion and path planning, and software environments. Papers are presented on integration of geometric and nongeometric attributes for fast object recognition, a four-degree-of-freedom robot head for active computer vision, shape reconstruction from shading with perspective projection, fast extraction of planar surfaces from range images, and real-time reconstruction and rendering of three-dimensional occupancy maps.

  2. Categorisation through evidence accumulation in an active vision system

    NASA Astrophysics Data System (ADS)

    Mirolli, Marco; Ferrauto, Tomassino; Nolfi, Stefano

    2010-12-01

    In this paper, we present an artificial vision system that is trained with a genetic algorithm for categorising five different kinds of images (letters) of different sizes. The system, which has a limited field of view, can move its eye so as to explore the images visually. The analysis of the system at the end of the training process indicates that correct categorisation is achieved by (1) exploiting sensory-motor coordination so as to experience stimuli that facilitate discrimination, and (2) integrating perceptual and/or motor information over time through a process of accumulation of partially conflicting evidence. We discuss our results with respect to the possible different strategies for categorisation and to the possible roles that action can play in perception.

  3. STEREO Mission Design Implementation

    NASA Technical Reports Server (NTRS)

    Guzman, Jose J.; Dunham, David W.; Sharer, Peter J.; Hunt, Jack W.; Ray, J. Courtney; Shapiro, Hongxing S.; Ossing, Daniel A.; Eichstedt, John E.

    2007-01-01

    STEREO (Solar-TErrestrial RElations Observatory) is the third mission in the Solar Terrestrial Probes (STP) program of the National Aeronautics and Space Administration (NASA) Science Mission Directorate Sun-Earth Connection theme. This paper describes the successful implementation (lunar swingby targeting) of the mission, from the first phasing orbit through deployment into the heliocentric mission orbits following the two lunar swingbys. The STEREO Project had to make some interesting trajectory decisions in order to exploit opportunities to image a bright comet and an unusual lunar transit across the Sun.

  4. The solar stereo mission

    NASA Astrophysics Data System (ADS)

    Rust, D. M.

    The principal scientific objective of the Solar-Terrestrial Relations Observatory (STEREO) is to understand the origin and consequences of coronal mass ejections (CMEs). CMEs are the most energetic eruptions on the Sun. They are responsible for essentially all of the largest solar energetic particle events and are the primary cause of major geomagnetic storms. They may be a critical element in the solar dynamo because they remove the dynamo-generated magnetic flux from the Sun. Two spacecraft at 1 AU from the Sun, one drifting ahead of Earth and one behind, will image CMEs. They will also map the distribution of magnetic fields and plasmas in the heliosphere and accomplish a variety of science goals described in the 1997 report of the NASA Science Definition Team for the STEREO Mission. Current plans call for the two STEREO launches in early 2003. Simultaneous image pairs will be obtained by the STEREO telescopes at gradually increasing spacecraft separations in the course of the mission. Additionally, in-situ measurements will provide accurate information about the state of the ambient solar wind and energetic particle populations ahead of and behind CMEs. These measurements will allow definitive tests of CME and interplanetary shock models. The mission will include a "beacon mode" to warn of either coronal or interplanetary conditions indicative of impending disturbances at Earth.

  5. Stereo matching with Mumford-Shah regularization and occlusion handling.

    PubMed

    Ben-Ari, Rami; Sochen, Nir

    2010-11-01

    This paper addresses the problem of correspondence establishment in binocular stereo vision. We suggest a novel spatially continuous approach for stereo matching based on the variational framework. The proposed method suggests a unique regularization term based on Mumford-Shah functional for discontinuity preserving, combined with a new energy functional for occlusion handling. The evaluation process is based on concurrent minimization of two coupled energy functionals, one for domain segmentation (occluded versus visible) and the other for disparity evaluation. In addition to a dense disparity map, our method also provides an estimation for the half-occlusion domain and a discontinuity function allocating the disparity/depth boundaries. Two new constraints are introduced improving the revealed discontinuity map. The experimental tests include a wide range of real data sets from the Middlebury stereo database. The results demonstrate the capability of our method in calculating an accurate disparity function with sharp discontinuities and occlusion map recovery. Significant improvements are shown compared to a recently published variational stereo approach. A comparison on the Middlebury stereo benchmark with subpixel accuracies shows that our method is currently among the top-ranked stereo matching algorithms.

  6. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)
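
    A red/green anaglyph of the kind described can be composed by routing one view's luminance to the red channel and the other's to green. A minimal sketch (the channel assignment and BT.601 luma weights are our assumptions, not details from the paper):

```python
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Red/green anaglyph: left-eye luminance drives the red channel,
    right-eye luminance the green channel; blue is left at zero."""
    weights = np.array([0.299, 0.587, 0.114])   # BT.601 luma coefficients
    out = np.zeros_like(left_rgb, dtype=float)
    out[..., 0] = left_rgb @ weights
    out[..., 1] = right_rgb @ weights
    return out
```

    Viewed with red over the left eye and green over the right, each eye sees only its own view, and the horizontal offsets between the two views are perceived as depth.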

  7. The influence of active vision on the exoskeleton of intelligent agents

    NASA Astrophysics Data System (ADS)

    Smith, Patrice; Terry, Theodore B.

    2016-04-01

    Chameleonization occurs when a self-learning autonomous mobile system's (SLAMR) active vision scans the surface on which it is perched, causing the exoskeleton to change colors and exhibit a chameleon effect. An intelligent agent's ability to adapt to its environment and exhibit key survivability characteristics would be due in large part to the use of active vision. Active vision would allow the intelligent agent to scan its environment and adapt as needed in order to avoid detection. The SLAMR system would have an exoskeleton that would change based on the surface it was perched on; this is known as the "chameleon effect," not in the common sense of the term but in the techno-bio-inspired meaning addressed in our previous paper. Active vision, utilizing stereoscopic color-sensing functionality, would enable the intelligent agent to scan an object within its close proximity, determine the color scheme, and match it, allowing the agent to blend with its environment. Through the use of its optical capabilities, the SLAMR system would be able to further determine its position, taking into account spatial and temporal correlation and the spatial frequency content of neighboring structures, further ensuring successful background blending. The complex visual tasks of identifying objects, using edge detection, image filtering, and feature extraction are essential for an intelligent agent to gain additional knowledge about its environmental surroundings.

  8. Poor Vision, Functioning, and Depressive Symptoms: A Test of the Activity Restriction Model

    ERIC Educational Resources Information Center

    Bookwala, Jamila; Lawson, Brendan

    2011-01-01

    Purpose: This study tested the applicability of the activity restriction model of depressed affect to the context of poor vision in late life. This model hypothesizes that late-life stressors contribute to poorer mental health not only directly but also indirectly by restricting routine everyday functioning. Method: We used data from a national…

  9. Purposeful gazing in active vision through phase-based disparity and dynamic vergence control

    NASA Astrophysics Data System (ADS)

    Wu, Liwei; Marefat, Michael M.

    1994-10-01

    In this research we propose solutions to the problems involved in gaze stabilization of a binocular active vision system, i.e., vergence error extraction and vergence servo control. Gazing is realized by decreasing the disparity, which represents the vergence error. A Fourier-transform-based approach that robustly and efficiently estimates vergence disparity is developed for holding gaze on a selected visual target. It is shown that this method has certain advantages over existing approaches. Our work also points out that a vision-sensor-based vergence control system is a dual-sampling-rate system. Feedback information prediction and a dynamic vision-based self-tuning control strategy are investigated to implement vergence control. Experiments on gaze stabilization using the techniques developed in this paper are performed.
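
    Fourier-based disparity (vergence error) estimation can be illustrated with 1-D phase correlation: the normalized cross-power spectrum inverse-transforms to a peak at the shift between the two signals. This is a generic sketch assuming integer shifts, not the paper's estimator:

```python
import numpy as np

def phase_shift_1d(a, b):
    """Integer shift of signal a relative to b via phase correlation:
    normalize the cross-power spectrum, inverse transform, locate the peak."""
    A, B = np.fft.fft(a), np.fft.fft(b)
    cross = A * np.conj(B)
    corr = np.fft.ifft(cross / (np.abs(cross) + 1e-12)).real
    shift = int(np.argmax(corr))
    n = len(a)
    return shift if shift <= n // 2 else shift - n   # wrap to a signed shift
```

    Because only the phase of the spectrum is used, the estimate is insensitive to overall brightness and contrast differences between the two views, which is one reason phase-based disparity measures are attractive for vergence control.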

  10. The Effect of Gender and Level of Vision on the Physical Activity Level of Children and Adolescents with Visual Impairment

    ERIC Educational Resources Information Center

    Aslan, Ummuhan Bas; Calik, Bilge Basakci; Kitis, Ali

    2012-01-01

    This study was planned in order to determine physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level in visually impaired children and adolescents. A total of 30 visually impaired children and adolescents (16 low vision and 14 blind) aged between…

  11. 3D panorama stereo visual perception centering on the observers

    NASA Astrophysics Data System (ADS)

    Tang, YiPing; Zhou, Jingkai; Xu, Haitao; Xiang, Yun

    2015-09-01

    For existing three-dimensional (3D) laser scanners, acquiring geometry and color information of the objects simultaneously is difficult. Moreover, the current techniques cannot store, modify, and model the point clouds efficiently. In this work, we have developed a novel sensor system, which is called active stereo omni-directional vision sensor (ASODVS), to address those problems. ASODVS is an integrated system composed of a single-view omni-directional vision sensor and a mobile planar green laser generator platform. Driven by a stepper motor, the laser platform can move vertically along the axis of the ASODVS. During the scanning of the laser generators, the panoramic images of the environment are captured and the characteristics and space location information of the laser points are calculated accordingly. Based on the image information of the laser points, the 3D space can be reconstructed. Experimental results demonstrate that the proposed ASODVS system can measure and reconstruct the 3D space in real-time and with high quality.

  12. Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline

    NASA Technical Reports Server (NTRS)

    Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor

    2010-01-01

    A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit to improve both the accuracy and robustness of the estimate. The stepwise regression method is applied to estimate the relaxed weight of each observation.

  13. High Resolution Stereo Camera (HRSC) on Mars Express - a decade of PR/EO activities at Freie Universität Berlin

    NASA Astrophysics Data System (ADS)

    Balthasar, Heike; Dumke, Alexander; van Gasselt, Stephan; Gross, Christoph; Michael, Gregory; Musiol, Stefanie; Neu, Dominik; Platz, Thomas; Rosenberg, Heike; Schreiner, Björn; Walter, Sebastian

    2014-05-01

    Since 2003 the High Resolution Stereo Camera (HRSC) experiment on the Mars Express mission has been in orbit around Mars. First images were sent to Earth on January 14th, 2004. The goal-oriented HRSC data dissemination and the transparent representation of the associated work and results are the main aspects that contributed to the success in the public perception of the experiment. The Planetary Sciences and Remote Sensing Group at Freie Universität Berlin (FUB) offers both an interactive web-based data access and browse/download options for HRSC press products [www.fu-berlin.de/planets]. Close collaborations with exhibitors as well as print and digital media representatives allow for regular and directed dissemination of, e.g., conventional imagery, orbital/synthetic surface epipolar images, video footage, and high-resolution displays. On a monthly basis we prepare press releases in close collaboration with the European Space Agency (ESA) and the German Aerospace Center (DLR) [http://www.geo.fu-berlin.de/en/geol/fachrichtungen/planet/press/index.html]. A release comprises panchromatic, colour, anaglyph, and perspective views of a scene taken from an HRSC image of the Martian surface. In addition, a context map and descriptive texts in English and German are provided. More sophisticated press releases include elaborate animations and simulated flights over the Martian surface, perspective views of stereo data combined with colour and high resolution, mosaics, and perspective views of data mosaics. Altogether 970 high-quality PR products and 15 movies were created at FUB during the last decade and published via FUB/DLR/ESA platforms. We support educational outreach events, as well as permanent and special exhibitions. Examples of this are the yearly "Science Fair", where special programs for kids are offered, and the exhibition "Mars Mission and Vision" which is on tour until 2015 through 20 German towns, showing 3-D movies, surface models, and images of the HRSC

  14. Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)

    1999-01-01

    A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in a world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo matched to obtain three-dimensional locations of the particles as a function of time so that velocities can be measured therefrom. The stereo imaging velocimetry technique of the present invention provides a full-field, quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.
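
    The perpendicular-camera geometry described above can be sketched as follows: one camera observes (x, y) centroid coordinates, the other (z, y), and the shared y coordinate lets matched centroids be fused into 3-D positions from which velocities follow by finite differences. This toy version (our own names; calibration, matching, and cluster decomposition omitted) only illustrates the fusion step:

```python
import numpy as np

def positions_3d(front_xy, side_zy):
    """Fuse matched centroids from two perpendicular views: the front camera
    observes (x, y), the side camera (z, y); the shared y is averaged."""
    xy = np.asarray(front_xy, dtype=float)
    zy = np.asarray(side_zy, dtype=float)
    return np.column_stack([xy[:, 0], (xy[:, 1] + zy[:, 1]) / 2.0, zy[:, 0]])

def velocities(p0, p1, dt):
    """Finite-difference velocity for particles tracked between two frames."""
    return (p1 - p0) / dt
```

    In a real system the two y measurements also provide a consistency check for the stereo match before fusion.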

  15. Northern Sinus Meridiani Stereo

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-341, 25 April 2003

    This is a stereo (3-d anaglyph) composite of Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle images of northern Sinus Meridiani near 2°N, 0°W. The light-toned materials at the south (bottom) end of the picture are considered to be thick (100-200 meters; 300-600 ft) exposures of sedimentary rock. Several ancient meteor impact craters are being exhumed from within these layered materials. To view in stereo, use '3-d' glasses with red over the left eye, and blue over the right. The picture covers an area approximately 113 km (70 mi) wide; north is up.

  16. The STEREO Science Center

    NASA Astrophysics Data System (ADS)

    Kaiser, M. L.; Thompson, W. T.; Kucera, T. A.

    2007-05-01

    The STEREO Science Center (SSC), at the NASA Goddard Space Flight Center, is the "one-stop shopping" location for STEREO data, observation plans, analysis software, and links to other mission resources. Along with the other data products, a special "Space Weather Beacon" telemetry stream, relayed through an array of antenna partners coordinated by NOAA, provides near-real-time images, and will soon also provide near-real-time radio and in-situ data. Through interaction with the Solar Software library, the SSC also acts as a focal point for software coordination. The SSC is closely integrated with the Virtual Solar Observatory, making data easily accessible to users. Details on access to the SSC will be given and examples of the various types of data available at the SSC will be shown.

  17. Asynchronous event-based binocular stereo matching.

    PubMed

    Rogister, Paul; Benosman, Ryad; Ieng, Sio-Hoi; Lichtsteiner, Patrick; Delbruck, Tobi

    2012-02-01

    We present a novel event-based stereo matching algorithm that exploits the asynchronous visual events from a pair of silicon retinas. Unlike conventional frame-based cameras, recent artificial retinas transmit their outputs as a continuous stream of asynchronous temporal events, in a manner similar to the output cells of the biological retina. Our algorithm uses the timing information carried by this representation in addressing the stereo-matching problem on moving objects. Using the high temporal resolution of the acquired data stream for the dynamic vision sensor, we show that matching on the timing of the visual events provides a new solution to the real-time computation of 3-D objects when combined with geometric constraints using the distance to the epipolar lines. The proposed algorithm is able to filter out incorrect matches and to accurately reconstruct the depth of moving objects despite the low spatial resolution of the sensor. This brief sets up the principles for further event-based vision processing and demonstrates the importance of dynamic information and spike timing in processing asynchronous streams of visual events.
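    The core matching rule, pairing events by spike timing subject to an epipolar constraint, can be sketched as follows. This is a toy illustration under the simplifying assumption of rectified cameras (so the epipolar line of a left event is the same image row on the right); the function name and thresholds are hypothetical, not the paper's.

    ```python
    def match_events(left, right, dt_max=1e-3, dy_max=1):
        """left/right: lists of (t, x, y) events. For each left event,
        keep the right event closest in time within the coincidence
        window that also satisfies the (rectified) epipolar constraint."""
        matches = []
        for (tl, xl, yl) in left:
            best, best_dt = None, dt_max
            for (tr, xr, yr) in right:
                dt = abs(tl - tr)
                if dt <= best_dt and abs(yl - yr) <= dy_max:
                    best, best_dt = (tr, xr, yr), dt
            if best is not None:
                matches.append(((tl, xl, yl), best))
        return matches

    left_events  = [(0.0100, 40, 12)]            # one spike on the left retina
    right_events = [(0.0101, 33, 12),            # coincident, same epipolar row
                    (0.0300, 70, 12)]            # too late: rejected
    pairs = match_events(left_events, right_events)
    disparity = pairs[0][0][1] - pairs[0][1][1]  # x_left - x_right == 7
    ```

    Depth then follows from the recovered disparity by ordinary triangulation.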

  18. Usability of car stereo.

    PubMed

    Razza, Bruno Montanari; Paschoarelli, Luis Carlos

    2012-01-01

    Automotive sound systems vary widely in functions and modes of use across brands and models, which can cause difficulty and a lack of consistency for the user. This study aimed to analyze the usability of car stereos commonly found in the market. Four products were analyzed through task analysis and after-use reports, and the results indicate serious usability issues with respect to mode of operation, organization, clarity and quality of information, and visibility and readability, among others.

  19. Reduction of computational complexity in the image/video understanding systems with active vision

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-10-01

    The vision system evolved not only as a recognition system, but also as a sensory system for reaching, grasping, and other motor activities. In advanced creatures, it became a component of the prediction function, allowing the creation of environmental models and activity planning. Fast information processing and decision making is vital for any living creature and requires a reduction of informational and computational complexity. The brain achieves this goal using symbolic coding, hierarchical compression, and selective processing of visual information. A Network-Symbolic representation, in which systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible basis for such models. It converts visual information into relational Network-Symbolic structures instead of precisely computing a 3-dimensional model. Narrow foveal vision provides separation of figure from ground, object identification, semantic analysis, and precise control of actions. Rough, wide peripheral vision identifies and tracks salient motion, guiding the foveal system to salient objects; it also provides scene context. Objects with rigid bodies and other stable systems have coherent relational structures. Hierarchical compression and Network-Symbolic transformations derive more abstract structures that allow a particular structure to be invariantly recognized as an exemplar of a class. Robotic systems equipped with such smart vision will be able to navigate effectively in any environment, understand situations, and act accordingly.

  20. Visions of the Future. Social Science Activities Text. Teacher's Edition.

    ERIC Educational Resources Information Center

    Melnick, Rob; Ronan, Bernard

    Intended to put both national and global issues into perspective and help students make decisions about their futures, this teacher's edition provides instructional objectives, ideas for discussion and inquiries, test blanks for each section, and answer keys for the 22 activities provided in the accompanying student text. Designed to provide high…

  1. Stereo Matching by Filtering-Based Disparity Propagation.

    PubMed

    Wang, Xingzheng; Tian, Yushi; Wang, Haoqian; Zhang, Yongbing

    2016-01-01

    Stereo matching is essential and fundamental in computer vision tasks. In this paper, a novel stereo matching algorithm based on disparity propagation using edge-aware filtering is proposed. By extracting disparity subsets for reliable points and customizing the cost volume, the initial disparity map is refined through filtering-based disparity propagation. An edge-aware filter with low computational complexity is then adopted to formulate the cost volume, which makes the proposed method independent of the local window size. Experimental results demonstrate the effectiveness of the proposed scheme: bad pixels in the output disparity map are considerably decreased, and the proposed method greatly outperforms the adaptive support-weight approach and other conventional window-based local stereo matching algorithms.
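    A toy scanline version of filtering-based cost aggregation illustrates the idea. Here a plain box filter stands in for the paper's edge-aware filter, and all names and values are illustrative:

    ```python
    def disparity_scanline(left, right, max_d, radius=1):
        """Absolute-difference cost per disparity hypothesis, box-filtered
        along the scanline, then winner-take-all over disparities."""
        n = len(left)
        # cost[d][x]: matching cost of pixel x at disparity d
        # (border pixels replicate the first right-image sample)
        cost = [[abs(left[x] - right[max(0, x - d)]) for x in range(n)]
                for d in range(max_d + 1)]
        # aggregate each disparity slice with a mean (box) filter
        agg = [[sum(row[max(0, x - radius):x + radius + 1]) /
                len(row[max(0, x - radius):x + radius + 1])
                for x in range(n)] for row in cost]
        return [min(range(max_d + 1), key=lambda d: agg[d][x])
                for x in range(n)]

    left  = [10, 10, 50, 50, 10, 10, 10]
    right = [10, 50, 50, 10, 10, 10, 10]   # left pattern shifted by one pixel
    disp = disparity_scanline(left, right, max_d=2)
    ```

    Swapping the box filter for an edge-aware (e.g. guided) filter preserves depth discontinuities while keeping the method free of an explicit support-window parameter, which is the paper's point.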

  2. Stereo Imaging Miniature Endoscope

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; Manohara, Harish; White, Victor; Shcheglov, Kirill V.; Shahinian, Hrayr

    2011-01-01

    Stereo imaging requires two different perspectives of the same object; traditionally, a pair of side-by-side cameras would be used, but that is not feasible for something as tiny as a less-than-4-mm-diameter endoscope that could be used for minimally invasive surgeries or geoexploration through tiny fissures or bores. The solution proposed here is to employ a single lens and a pair of conjugated multiple-bandpass filters (CMBFs) to separate the stereo images. When a CMBF is placed in front of each of the stereo channels, only those wavelengths of the visible spectrum that fall within the passbands of that CMBF are transmitted at a time when illuminated. Because the passbands are conjugated, only one of the two channels will see a particular wavelength. These time-multiplexed images are then mixed and reconstructed for display as stereo images. The basic principle involves illuminating the object at specific wavelengths, with the range of illumination wavelengths time multiplexed. The light reflected from the object selectively passes through one of the two CMBFs integrated with two pupils separated by a baseline distance, and is focused onto the imaging plane through an objective lens. The passband ranges of the CMBFs and the illumination wavelengths are synchronized such that each CMBF transmits only the alternate illumination wavelength bands, and the transmission bands of the two CMBFs are complementary to each other, so that when one transmits, the other blocks. This can be clearly understood if the wavelength bands are divided broadly into red, green, and blue: the illumination wavelengths then contain two bands in red (R1, R2), two in green (G1, G2), and two in blue (B1, B2). Therefore, when the object is illuminated by R1, the reflected light enters through only the left CMBF, as the R1 band corresponds to the transmission window of the left CMBF at the left pupil; it is blocked by the right CMBF.
The
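    The time multiplexing described above amounts to de-interleaving the captured frame stream. A minimal sketch of the pairing logic only (not the optics), under the assumption that illumination alternates between left-passband and right-passband bands in the order R1, R2, G1, G2, B1, B2:

    ```python
    def demux_stereo(frames):
        """frames: time-ordered captures under bands R1,R2,G1,G2,B1,B2,...
        Even-indexed frames passed the left CMBF, odd-indexed the right."""
        return frames[0::2], frames[1::2]

    frames = ["R1", "R2", "G1", "G2", "B1", "B2"]
    left, right = demux_stereo(frames)   # left: R1,G1,B1; right: R2,G2,B2
    ```

    Fusing each channel's three band images then yields one full-colour view per pupil.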

  3. STEREO - The Sun from Two Points of View

    NASA Technical Reports Server (NTRS)

    Kucera, Therese A.

    2010-01-01

    NASA's STEREO (Solar TErrestrial RElations Observatory) mission continues its investigations into the three-dimensional structure of the sun and heliosphere. With the recent increase in solar activity, STEREO is yielding new results obtained with the mission's full array of imaging and in-situ instrumentation, and in February 2011 the two spacecraft will be 180 degrees apart, allowing us to directly image the entire solar disk for the first time. We will discuss the latest results from STEREO and how they change our view of solar activity and its effects on our solar system.

  4. Segment-Based Stereo Matching

    DTIC Science & Technology

    1983-06-01

    SEGMENT-BASED STEREO MATCHING, by Gerard G. Medioni and Ramakant Nevatia, Intelligent Systems Group…industrial robotics. Stereo analysis provides a more direct quantitative depth evaluation than techniques such as shape from shading, and its being…surveillance [Henderson79] and industrial robotics. Proposed solutions for the stereo problem follow a paradigm involving the following steps

  5. Owls see in stereo much like humans do.

    PubMed

    van der Willigen, Robert F

    2011-06-10

    While 3D experiences through binocular disparity sensitivity have acquired special status in the understanding of human stereo vision, much remains to be learned about how binocularity is put to use in animals. The owl provides an exceptional model to study stereo vision as it displays one of the highest degrees of binocular specialization throughout the animal kingdom. In a series of six behavioral experiments, equivalent to hallmark human psychophysical studies, I compiled an extensive body of stereo performance data from two trained owls. Computer-generated, binocular random-dot patterns were used to ensure pure stereo performance measurements. In all cases, I found that owls perform much like humans do, viz.: (1) disparity alone can evoke figure-ground segmentation; (2) selective use of "relative" rather than "absolute" disparity; (3) hyperacute sensitivity; (4) disparity processing allows for the avoidance of monocular feature detection prior to object recognition; (5) large binocular disparities are not tolerated; (6) disparity guides the perceptual organization of 2D shape. The robustness and very nature of these binocular disparity-based perceptual phenomena bear out that owls, like humans, exploit the third dimension to facilitate early figure-ground segmentation of tangible objects.

  6. Vision based assistive technology for people with dementia performing activities of daily living (ADLs): an overview

    NASA Astrophysics Data System (ADS)

    As'ari, M. A.; Sheikh, U. U.

    2012-04-01

    The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia performing activities of daily living (ADLs) promises a reduction of care costs, especially in training and hiring human caregivers. The main problem, however, is that the kinds of sensing agents used in such systems vary and depend on the intent (types of ADLs) and the environment where the activity is performed. In this paper we present an overview of the potential of computer vision based sensing agents in assistive systems and of how they can be generalized and made invariant to various kinds of ADLs and environments. We find that a gap exists between existing vision based human action recognition methods and the design of such systems, due to the cognitive and physical impairments of people with dementia.

  7. NCRP Vision for the Future and Program Area Committee Activities.

    PubMed

    Boice, John D

    2017-02-01

    The National Council on Radiation Protection and Measurements (NCRP) believes that the most critical need for the nation in radiation protection is to train, engage, and retain radiation professionals for the future. Not only is the pipeline shrinking, but for some areas there is no longer a pipe! When the call comes to respond, there may be no one to answer the phone! The NCRP "Where are the Radiation Professionals?" initiative, Council Committee (CC) 2, and this year's annual meeting are to focus our efforts on finding solutions, not just reiterating the problems. Our next major initiative is CC 1, where the NCRP is making recommendations for the United States on all things dealing with radiation protection. Our last publication was NCRP Report No. 116, Limitation of Exposure to Ionizing Radiation, in 1993; it is time for an update. NCRP has seven active Program Area Committees on biology and epidemiology, operational concerns, emergency response and preparedness, medicine, environmental issues and waste management, dosimetry, and communications. A major scientific research initiative is the Million Person Study of Low Dose Radiation Health Effects. It includes workers from the Manhattan Project, nuclear weapons test participants (atomic veterans), industrial radiographers, and early medical workers such as radiologists and technologists. This research will answer the one major gap in radiation risk evaluation: what are the health effects when the exposure occurs gradually over time? Other cutting-edge initiatives include a re-evaluation of the science behind recommendations for lens-of-the-eye dose limits, recommendations for emergency responders on dosimetry after a major radiological incident, guidance to the National Aeronautics and Space Administration with regard to possible central nervous system effects from galactic cosmic rays (the high energy, high mass particles bounding through space), re-evaluating the population exposure to medical radiation (NCRP Report No

  8. A Computational Model of Active Vision for Visual Search in Human-Computer Interaction

    DTIC Science & Technology

    2010-08-01

    Answering the four questions of active vision: when do the eyes move? Modeling fixation…from two experiments: a mixed density search task and a CVC (consonant-vowel-consonant) search task. The mixed density experiment (Halverson & Hornof, 2004b) investigated the effects of varying the visual density of elements in a structured layout. The CVC search experiment (Hornof, 2004

  9. Hearing symptoms personal stereos

    PubMed Central

    da Luz, Tiara Santos; Borja, Ana Lúcia Vieira de Freitas

    2012-01-01

    Summary. Introduction: Practical and portable, personal stereos have become almost indispensable everyday accessories. Studies indicate that portable music players can cause long-term hearing damage in those who listen to music at high volume for prolonged periods. Objective: To determine the prevalence of auditory symptoms in users of personal stereos and to characterize their habits of use. Method: Prospective observational cross-sectional study carried out in three educational institutions in the city of Salvador, BA, two from the public system and one private. 400 students of both sexes, aged between 14 and 30 years, who reported the habit of using personal stereos answered the questionnaire. Results: The most prevalent symptoms were hyperacusis (43.5%), aural fullness (30.5%), and tinnitus (27.5%), tinnitus being the symptom most present in the youngest participants. Regarding daily habits: 62.3% reported frequent use, 57% listening at high intensities, and 34% for prolonged periods. An inverse relationship was found between exposure time and age group (p = 0.000), along with a direct relationship with the prevalence of tinnitus. Conclusion: Although they admit knowing the damage that exposure to high-intensity sound can cause to hearing, the daily habits of these young people show inadequate use of portable stereos, characterized by long periods of exposure, high intensities, frequent use, and a preference for insert earphones. The high prevalence of symptoms after use suggests an increased risk to the hearing of these young people. PMID:25991931

  10. Stereo Reconstruction Study

    DTIC Science & Technology

    1983-06-01

    hardware and architecture for computer vision, and the psychology and neurophysiology of vision. …Chapter seven highlights some of the evocative and influential results from neurophysiological and psychological studies of human perception, categorizing

  11. Person and gesture tracking with smart stereo cameras

    NASA Astrophysics Data System (ADS)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting, and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap, and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control including: video gaming, location based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. 
In this paper, we provide some background on the TYZX smart stereo cameras platform, describe the person tracking and gesture tracking systems

  12. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access and global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer megapixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to one, constant contrast sensitivity, and constant colors, important features in professional, technical, and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.
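    The dynamic-range arithmetic behind the log-pixel claim can be checked with a quick back-of-envelope calculation (illustrative numbers only: a 12-bit code space spread over a 10^6:1 scene range):

    ```python
    import math

    steps = 4096          # 12-bit output code space
    range_ratio = 1e6     # ~million-to-one scene range for log pixels

    # A linear 12-bit pixel covers only 4096:1. A log pixel spreads the
    # million-to-one range evenly in contrast: one fixed intensity ratio
    # per code step, i.e. constant contrast sensitivity.
    contrast_per_step = range_ratio ** (1.0 / steps)
    percent_contrast = (contrast_per_step - 1.0) * 100   # ~0.34% per step
    ```

    The constant per-step contrast is exactly the "eye-like" property the text refers to: equal code increments correspond to equal relative, not absolute, intensity changes.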

  13. Viking orbiter stereo imaging catalog

    NASA Technical Reports Server (NTRS)

    Blasius, K. R.; Vertrone, A. V.; Lewis, B. H.; Martin, M. D.

    1982-01-01

    The extremely long mission of the two Viking Orbiter spacecraft produced a wealth of photos of surface features. Many of these photos can be used to form stereo images, allowing the student of Mars to examine a subject in three dimensions. This catalog is a technical guide to the use of stereo coverage within the complex Viking imaging data set.

  14. Intelligent robots and computer vision IX: Algorithms and techniques; Proceedings of the Meeting, Boston, MA, Nov. 5-7, 1990

    SciTech Connect

    Casasent, D.P.

    1991-01-01

    The newest research results, trends, and developments in intelligent robots and computer vision are considered, including topics in pattern recognition for computer vision, image processing, intelligent material handling and vision, novel preprocessing algorithms and hardware, technology for the support of intelligent robots and automated systems, fuzzy logic in intelligent systems and computer vision, and segmentation techniques. Attention is given to production quality control problems, recognition in face space, automatic vehicle model identification, active stereo inspection using computer solids models, use of coordinate mapping as a method for image data reduction, integration of a computer vision system with an IBM 7535 robot, fuzzy logic controller structures, supervised pixel classification using a feature space derived from an artificial visual system, and multiresolution segmentation of forward-looking IR and SAR imagery using neural networks.

  15. Design and control of active vision based mechanisms for intelligent robots

    NASA Technical Reports Server (NTRS)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

    In this paper, we propose a design for an active vision system for intelligent robot applications. The system has the degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets, using an evaluation function representing human visual behavior toward outside stimuli, is suggested. We also characterize the different visual tasks of the two cameras for vergence control purposes, and a phase-based method operating on binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence is discussed.
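    One simple stand-in for the disparity-extraction step is a shift search over binarized scanlines. The paper's method is phase-based; this correlation-style sketch with hypothetical names only illustrates what "vergence disparity" measures:

    ```python
    def shift_disparity(a, b, max_d=3):
        """Shift of binarized row b relative to row a that maximizes
        pixelwise agreement; the sign tells the controller which way
        the cameras must verge."""
        def score(d):
            pairs = [(a[i], b[i + d]) for i in range(len(a))
                     if 0 <= i + d < len(b)]
            return sum(x == y for x, y in pairs) / len(pairs)
        return max(range(-max_d, max_d + 1), key=score)

    row_a = [0, 0, 1, 1, 0, 1, 0, 0]   # binarized left scanline
    row_b = [0, 1, 1, 0, 1, 0, 0, 0]   # same pattern, shifted by one pixel
    ```

    Driving the vergence angle until the recovered shift reaches zero stabilizes binocular gaze on the fixation point.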

  16. Low Vision

    MedlinePlus

    Low Vision Defined: Low Vision is defined as the best-… 2010 U.S. Age-Specific Prevalence Rates for Low Vision by Age and Race/Ethnicity. Table for 2010…

  17. CauStereo: structure from underwater flickering illumination

    NASA Astrophysics Data System (ADS)

    Swirski, Yohay; Schechner, Yoav Y.

    2012-10-01

    Underwater, in littoral zones, natural illumination typically varies strongly temporally and spatially. Waves on the water surface refract light into the water in spatiotemporally varying intensity. The resulting underwater illumination field forms a caustic network and is known as flicker. Studies in underwater computer vision typically consider flicker to be an undesired effect. In contrast, recent studies [1-3] show that the spatiotemporally varying caustic network can be useful for stereoscopic vision, naturally leading to range mapping of the scene. In this paper, we survey these studies. Range triangulation by stereoscopic vision requires the determination of correspondence between image points in different viewpoints. This is typically a difficult problem. However, the spatiotemporal caustic pattern effectively encodes stereo correspondences. Thus, the use of this effect is termed CauStereo [2]. The temporal radiance variations due to flicker are unique to each object point. Thus, correspondence of image points per object point becomes unambiguous. A variational optimization formulation is used in practice to find the dense stereo correspondence field. This formulation helps overcome uncertain regions (e.g., due to shadows) and shortens the acquisition time. Limitations of the approach are revealed by ray-tracing simulations. The method was demonstrated by underwater field experiments [2].
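    The CauStereo correspondence idea can be sketched by matching per-pixel temporal profiles with normalized cross-correlation, rather than the paper's variational formulation; `ncc`, `best_match`, and the toy signals below are illustrative assumptions.

    ```python
    def ncc(a, b):
        """Normalized cross-correlation of two equal-length time series."""
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db) if da and db else 0.0

    def best_match(profile, candidates):
        """candidates: {x position on the epipolar line: temporal profile}.
        The flicker signature picks out the corresponding pixel."""
        return max(candidates, key=lambda x: ncc(profile, candidates[x]))

    left_pixel = [10, 80, 30, 60, 20]        # flicker over 5 frames
    epipolar = {4: [12, 78, 33, 58, 22],     # same caustic pattern: match
                9: [50, 50, 52, 49, 51]}     # different scene point
    ```

    Because each scene point's flicker signature is unique, the match is unambiguous without any spatial texture.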

  18. Stereo transparency and the disparity gradient limit

    NASA Technical Reports Server (NTRS)

    McKee, Suzanne P.; Verghese, Preeti

    2002-01-01

    Several studies (Vision Research 15 (1975) 583; Perception 9 (1980) 671) have shown that binocular fusion is limited by the disparity gradient (disparity/distance) separating image points, rather than by their absolute disparity values. Points separated by a gradient >1 appear diplopic. These results are sometimes interpreted as a constraint on human stereo matching, rather than a constraint on fusion. Here we have used psychophysical measurements on stereo transparency to show that human stereo matching is not constrained by a gradient of 1. We created transparent surfaces composed of many pairs of dots, in which each member of a pair was assigned a disparity equal and opposite to the disparity of the other member. For example, each pair could be composed of one dot with a crossed disparity of 6' and the other with uncrossed disparity of 6', vertically separated by a parametrically varied distance. When the vertical separation between the paired dots was small, the disparity gradient for each pair was very steep. Nevertheless, these opponent-disparity dot pairs produced a striking appearance of two transparent surfaces for disparity gradients ranging between 0.5 and 3. The apparent depth separating the two transparent planes was correctly matched to an equivalent disparity defined by two opaque surfaces. A test target presented between the two transparent planes was easily detected, indicating robust segregation of the disparities associated with the paired dots into two transparent surfaces with few mismatches in the target plane. Our simulations using the Tsai-Victor model show that the response profiles produced by scaled disparity-energy mechanisms can account for many of our results on the transparency generated by steep gradients.
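    The stimulus geometry is easy to restate numerically: for a dot pair with disparities of +6' and -6', the disparity gradient is the disparity difference divided by the dots' separation, so steepness is controlled purely by how close together the dots are.

    ```python
    def disparity_gradient(d1, d2, separation):
        """Disparity difference over dot separation (all in arcmin)."""
        return abs(d1 - d2) / separation

    # 6' crossed paired with 6' uncrossed disparity:
    assert disparity_gradient(6, -6, 24) == 0.5  # shallow gradient
    assert disparity_gradient(6, -6, 4) == 3.0   # steep, yet transparency
                                                 # is still perceived
    ```

    The paper's finding is that matching succeeds across this whole 0.5-3 range, so the gradient-of-1 limit constrains fusion, not matching.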

  19. A tactile vision substitution system for the study of active sensing.

    PubMed

    Hsu, Brian; Hsieh, Cheng-Han; Yu, Sung-Nien; Ahissar, Ehud; Arieli, Amos; Zilbershtain-Kra, Yael

    2013-01-01

    This paper presents a tactile vision substitution system (TVSS) for the study of active sensing. Two algorithms, namely image processing and trajectory tracking, were developed to enhance the capability of conventional TVSS. Image processing techniques were applied to reduce artifacts and extract important features from the active camera, effectively converting the information into tactile stimuli of much lower resolution. A fixed camera was used to record the movement of the active camera, and a trajectory tracking algorithm was developed to analyze the active sensing strategies that TVSS users apply to explore the environment. The image processing subsystem showed advantageous improvement in extracting objects' features for superior recognition. The trajectory tracking subsystem, on the other hand, enabled accurately locating the portion of the scene pointed at by the active camera, providing rich information for the study of the active sensing strategies applied by TVSS users.

  20. Retinal Structures and Visual Cortex Activity are Impaired Prior to Clinical Vision Loss in Glaucoma

    PubMed Central

    Murphy, Matthew C.; Conner, Ian P.; Teng, Cindy Y.; Lawrence, Jesse D.; Safiullah, Zaid; Wang, Bo; Bilonick, Richard A.; Kim, Seong-Gi; Wollstein, Gadi; Schuman, Joel S.; Chan, Kevin C.

    2016-01-01

    Glaucoma is the second leading cause of blindness worldwide and its pathogenesis remains unclear. In this study, we measured the structure, metabolism and function of the visual system by optical coherence tomography and multi-modal magnetic resonance imaging in healthy subjects and glaucoma patients with different degrees of vision loss. We found that inner retinal layer thinning, optic nerve cupping and reduced visual cortex activity occurred before patients showed visual field impairment. The primary visual cortex also exhibited more severe functional deficits than higher-order visual brain areas in glaucoma. Within the visual cortex, choline metabolism was perturbed along with increasing disease severity in the eye, optic radiation and visual field. In summary, this study showed evidence that glaucoma deterioration is already present in the eye and the brain before substantial vision loss can be detected clinically using current testing methods. In addition, cortical cholinergic abnormalities are involved during trans-neuronal degeneration and can be detected non-invasively in glaucoma. The current results can be of impact for identifying early glaucoma mechanisms, detecting and monitoring pathophysiological events and eye-brain-behavior relationships, and guiding vision preservation strategies in the visual system, which may help reduce the burden of this irreversible but preventable neurodegenerative disease. PMID:27510406

  1. Disparity channels in early vision.

    PubMed

    Roe, Anna W; Parker, Andrew J; Born, Richard T; DeAngelis, Gregory C

    2007-10-31

    The past decade has seen a dramatic increase in our knowledge of the neural basis of stereopsis. New cortical areas have been found to represent binocular disparities, new representations of disparity information (e.g., relative disparity signals) have been uncovered, the first topographic maps of disparity have been measured, and the first causal links between neural activity and depth perception have been established. Equally exciting is the finding that training and experience affects how signals are channeled through different brain areas, a flexibility that may be crucial for learning, plasticity, and recovery of function. The collective efforts of several laboratories have established stereo vision as one of the most productive model systems for elucidating the neural basis of perception. Much remains to be learned about how the disparity signals that are initially encoded in primary visual cortex are routed to and processed by extrastriate areas to mediate the diverse capacities of three-dimensional vision that enhance our daily experience of the world.

  2. What is stereoscopic vision good for?

    NASA Astrophysics Data System (ADS)

    Read, Jenny C. A.

    2015-03-01

    Stereo vision is a resource-intensive process. Nevertheless, it has evolved in many animals including mammals, birds, amphibians and insects. It must therefore convey significant fitness benefits. It is often assumed that the main benefit is improved accuracy of depth judgments, but camouflage breaking may be as important, particularly in predatory animals. In humans, for the last 150 years, stereo vision has been turned to a new use: helping us reproduce visual reality for artistic purposes. By recreating the different views of a scene seen by the two eyes, stereo achieves unprecedented levels of realism. However, it also has some unexpected effects on viewer experience. The disruption of established mechanisms for interpreting pictures may be one reason why some viewers find stereoscopic content disturbing. Stereo vision also has uses in ophthalmology. Clinical stereoacuity tests are used in the management of conditions such as strabismus and amblyopia as well as vision screening. Stereoacuity can reveal the effectiveness of therapy and even predict long-term outcomes post surgery. Yet current clinical stereo tests fall far short of the accuracy and precision achievable in the lab. At Newcastle University, we are exploiting the recent availability of autostereo 3D tablet computers to design a clinical stereotest app in the form of a game suitable for young children. Our goal is to enable quick, accurate and precise stereoacuity measures which will enable clinicians to obtain better outcomes for children with visual disorders.

  3. Deblocking of mobile stereo video

    NASA Astrophysics Data System (ADS)

    Azzari, Lucio; Gotchev, Atanas; Egiazarian, Karen

    2012-02-01

Most candidate methods for compression of mobile stereo video apply block-transform compression based on the H.264 standard, with quantization of transform coefficients driven by a quantization parameter (QP). The compression ratio and the resulting bit rate are directly determined by the QP level, and high compression is achieved at the price of visually noticeable blocking artifacts. Previous studies on the perceived quality of mobile stereo video have revealed that blocking artifacts are the most annoying and most influential in the acceptance or rejection of mobile stereo video and can even completely cancel the 3D effect and the corresponding added value in quality. In this work, we address the problem of deblocking mobile stereo video. We modify a powerful non-local transform-domain collaborative filtering method originally developed for denoising images and video. The method groups similar block patches residing in the spatial and temporal vicinity of a reference block and filters them collaboratively in a suitable transform domain. We study the most suitable way of finding similar patches in both channels of stereo video and suggest a hybrid four-dimensional transform to process the collected, synchronized (stereo) volumes of grouped blocks. The results benefit from the additional correlation available between the left and right channels of the stereo video. Furthermore, additional sharpening is applied through embedded alpha-rooting in the transform domain, which improves the visual appearance of the deblocked frames.
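The grouping-and-filtering idea behind such collaborative methods can be sketched in a few lines. This is a toy stand-in, not the paper's hybrid 4D-transform filter: blocks similar to a reference block are collected by SSD matching and filtered jointly, here by plain averaging.

```python
import numpy as np

def group_similar(frame, ref_yx, bsize=4, top_k=4):
    """Grouping step: collect the patches most similar (by SSD) to the
    reference block, stacked for joint (collaborative) filtering."""
    y0, x0 = ref_yx
    ref = frame[y0:y0 + bsize, x0:x0 + bsize]
    scored = []
    h, w = frame.shape
    for y in range(h - bsize + 1):
        for x in range(w - bsize + 1):
            patch = frame[y:y + bsize, x:x + bsize]
            scored.append((float(np.sum((patch - ref) ** 2)), y, x))
    scored.sort(key=lambda t: t[0])
    return np.stack([frame[y:y + bsize, x:x + bsize]
                     for _, y, x in scored[:top_k]])

# A self-similar "frame": the same 4x4 tile repeated, plus noise standing in
# for coding artifacts. Averaging the matched group suppresses the noise.
rng = np.random.default_rng(0)
clean = np.tile(rng.normal(size=(4, 4)), (2, 2))
noisy = clean + 0.3 * rng.normal(size=clean.shape)
group = group_similar(noisy, (0, 0))
filtered_block = group.mean(axis=0)
err_before = np.abs(noisy[:4, :4] - clean[:4, :4]).mean()
err_after = np.abs(filtered_block - clean[:4, :4]).mean()
print(err_after < err_before)
```

The real method replaces the plain average with shrinkage in a 4D transform domain, which is what lets it exploit cross-channel stereo correlation as well.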

  4. Forward-looking activities: incorporating citizens' visions: A critical analysis of the CIVISTI method.

    PubMed

    Gudowsky, Niklas; Peissl, Walter; Sotoudeh, Mahshid; Bechtold, Ulrike

    2012-11-01

    Looking back on the many prophets who tried to predict the future as if it were predetermined, at first sight any forward-looking activity is reminiscent of making predictions with a crystal ball. In contrast to fortune tellers, today's exercises do not predict, but try to show different paths that an open future could take. A key motivation to undertake forward-looking activities is broadening the information basis for decision-makers to help them actively shape the future in a desired way. Experts, laypeople, or stakeholders may have different sets of values and priorities with regard to pending decisions on any issue related to the future. Therefore, considering and incorporating their views can, in the best case scenario, lead to more robust decisions and strategies. However, transferring this plurality into a form that decision-makers can consider is a challenge in terms of both design and facilitation of participatory processes. In this paper, we will introduce and critically assess a new qualitative method for forward-looking activities, namely CIVISTI (Citizen Visions on Science, Technology and Innovation; www.civisti.org), which was developed during an EU project of the same name. Focussing strongly on participation, with clear roles for citizens and experts, the method combines expert, stakeholder and lay knowledge to elaborate recommendations for decision-making in issues related to today's and tomorrow's science, technology and innovation. Consisting of three steps, the process starts with citizens' visions of a future 30-40 years from now. Experts then translate these visions into practical recommendations which the same citizens then validate and prioritise to produce a final product. The following paper will highlight the added value as well as limits of the CIVISTI method and will illustrate potential for the improvement of future processes.

  5. Stereoscopic depth perception for robot vision: algorithms and architectures

    SciTech Connect

    Safranek, R.J.; Kak, A.C.

    1983-01-01

    The implementation of depth perception algorithms for computer vision is considered. In automated manufacturing, depth information is vital for tasks such as path planning and 3-d scene analysis. The presentation begins with a survey of computer algorithms for stereoscopic depth perception. The emphasis is on the Marr-Poggio paradigm of human stereo vision and its computer implementation. In addition, a stereo matching algorithm based on the relaxation labelling technique is examined. A computer architecture designed to efficiently implement stereo matching algorithms, an MIMD array interfaced to a global memory, is presented. 9 references.

  6. Versatile transformations of hydrocarbons in anaerobic bacteria: substrate ranges and regio- and stereo-chemistry of activation reactions†

    PubMed Central

    Jarling, René; Kühner, Simon; Basílio Janke, Eline; Gruner, Andrea; Drozdowska, Marta; Golding, Bernard T.; Rabus, Ralf; Wilkes, Heinz

    2015-01-01

Anaerobic metabolism of hydrocarbons proceeds either via addition to fumarate or by hydroxylation in various microorganisms, e.g., sulfate-reducing or denitrifying bacteria, which are specialized in utilizing n-alkanes or alkylbenzenes as growth substrates. General pathways for carbon assimilation and energy gain have been elucidated for a limited number of possible substrates. In this work the metabolic activity of 11 bacterial strains during anaerobic growth with crude oil was investigated and compared with the metabolite patterns appearing during anaerobic growth with more than 40 different hydrocarbons supplied as binary mixtures. We show that the range of co-metabolically formed alkyl- and arylalkyl-succinates is much broader in n-alkane than in alkylbenzene utilizers. The structures and stereochemistry of these products are resolved. Furthermore, we demonstrate that anaerobic hydroxylation of alkylbenzenes occurs not only in denitrifiers but also in sulfate reducers. We propose that these processes play a role in detoxification under conditions of solvent stress. The thermophilic sulfate-reducing strain TD3 is shown to produce n-alkylsuccinates, which are suggested not to derive from terminal activation of n-alkanes, but rather to represent intermediates of a metabolic pathway short-cutting fumarate regeneration by reverse action of succinate synthase. The outcomes of this study provide a basis for geochemically tracing such processes in natural habitats and contribute to an improved understanding of microbial activity in hydrocarbon-rich anoxic environments. PMID:26441848

  7. Recent STEREO Observations of Coronal Mass Ejections

    NASA Technical Reports Server (NTRS)

    SaintCyr, Chris Orville; Xie, Hong; Mays, Mona Leila; Davila, Joseph M.; Gilbert, Holly R.; Jones, Shaela I.; Pesnell, William Dean; Gopalswamy, Nat; Gurman, Joseph B.; Yashiro, Seiji; Wuelser, Jean-Pierre; Howard, Russell A.; Thompson, Barbara J.; Thompson, William T.

    2008-01-01

Over 400 CMEs have been observed by STEREO SECCHI COR1 during the mission's three-year duration (2006-2009). Many of the solar activity indicators have been at minimal values over this period, and the Carrington-rotation-averaged CME rate has been comparable to that measured during the minima between Cycles 21-22 (SMM C/P) and Cycles 22-23 (SOHO LASCO), about 0.5 CMEs per day. During the current solar minimum (leading to Cycle 24), there have been entire Carrington rotations in which no sunspots were detected and the daily values of the 2800 MHz solar flux remained below 70 sfu. CMEs continued to be detected during these exceptionally quiet periods, indicating that active regions are not necessary for the generation of at least a portion of the CME population. In the past, researchers were limited to a single view of the Sun and could conclude that activity on the unseen portion of the disk might be associated with CMEs. As the STEREO mission has progressed, however, we have been able to observe an increasing fraction of the Sun's corona with STEREO SECCHI EUVI and were able to eliminate this possibility. Here we report on the nature of CMEs detected during these exceptionally quiet periods, and we speculate on how the corona remains dynamic during such conditions.

  8. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  9. #3 STEREO - Approaching 360 Degrees

    NASA Video Gallery

    As the STEREO spacecraft have moved out on either side of Earth they have imaged more and more of the Sun's surface. This video shows how our coverage of the Sun has increased. The Sun is shown as ...

  10. Defining filled and empty space: reassessing the filled space illusion for active touch and vision.

    PubMed

    Collier, Elizabeth S; Lawson, Rebecca

    2016-09-01

    In the filled space illusion, an extent filled with gratings is estimated as longer than an equivalent extent that is apparently empty. However, researchers do not seem to have carefully considered the terms filled and empty when describing this illusion. Specifically, for active touch, smooth, solid surfaces have typically been used to represent empty space. Thus, it is not known whether comparing gratings to truly empty space (air) during active exploration by touch elicits the same illusionary effect. In Experiments 1 and 2, gratings were estimated as longer if they were compared to smooth, solid surfaces rather than being compared to truly empty space. Consistent with this, Experiment 3 showed that empty space was perceived as longer than solid surfaces when the two were compared directly. Together these results are consistent with the hypothesis that, for touch, the standard filled space illusion only occurs if gratings are compared to smooth, solid surfaces and that it may reverse if gratings are compared to empty space. Finally, Experiment 4 showed that gratings were estimated as longer than both solid and empty extents in vision, so the direction of the filled space illusion in vision was not affected by the nature of the comparator. These results are discussed in relation to the dual nature of active touch.

  11. An Active Vision Approach to Understanding and Improving Visual Training in the Geosciences

    NASA Astrophysics Data System (ADS)

    Voronov, J.; Tarduno, J. A.; Jacobs, R. A.; Pelz, J. B.; Rosen, M. R.

    2009-12-01

Experience in the field is a fundamental aspect of geologic training, and its effectiveness is largely unchallenged because of anecdotal evidence of its success among expert geologists. However, there have been only a few quantitative studies based on large data-collection efforts to investigate how Earth scientists learn in the field. In a recent collaboration between Earth scientists, cognitive scientists, and imaging-science experts at the University of Rochester and the Rochester Institute of Technology, we are conducting such a study. Within cognitive science, one school of thought, referred to as the Active Vision approach, emphasizes that visual perception is an active process requiring us to move our eyes to acquire new information about our environment. The Active Vision approach indicates the perceptual skills which experts possess and which novices need to acquire to achieve expert performance. We describe data-collection efforts using portable eye-trackers to assess how novice and expert geologists acquire visual knowledge in the field. We also discuss our efforts to collect images for use in a semi-immersive classroom environment, useful for further testing of novices and experts using eye-tracking technologies.

  12. The research on binocular stereo video imaging and display system based on low-light CMOS

    NASA Astrophysics Data System (ADS)

    Xie, Ruobing; Li, Li; Jin, Weiqi; Guo, Hong

    2015-10-01

Low-light night-vision helmets commonly equip a binocular viewer with image intensifiers. Such equipment not only provides night-vision capability but also a sense of stereo vision, enabling better perception and understanding of the visual field. However, since the image intensifier is designed for direct observation, it is difficult to apply modern image processing technology to it. As a result, developing digital video technology for night vision is of great significance. In this paper, we design a low-light night-vision helmet with a digital imaging device. It consists of three parts: a set of two low-illumination CMOS cameras, a binocular OLED micro-display, and an image-processing PCB. Stereopsis is achieved through the binocular OLED micro-display. We choose the Speeded-Up Robust Features (SURF) algorithm for image registration. Based on the image-matching information and the cameras' calibration parameters, disparity can be calculated in real time. We then derive the constraints of binocular stereo display in detail. The sense of stereo vision is obtained by dynamically adjusting the content of the binocular OLED micro-display. There is sufficient space for function extensions in our system: the performance of this low-light night-vision helmet can be further enhanced in combination with HDR technology, image fusion technology, etc.
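The paper derives disparity from SURF matches; as a simpler, self-contained illustration of recovering disparity from a rectified stereo pair, a brute-force sum-of-absolute-differences (SAD) block matcher can be sketched as follows (the window size, search range, and synthetic image pair are all invented for the example):

```python
import numpy as np

def disparity_sad(left, right, max_disp=4, win=1):
    """Brute-force SAD block matching on a rectified pair: for each left
    pixel, pick the horizontal shift whose right-image window matches best.
    win is the half-width of the square matching window."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            best_cost, best_d = None, 0
            for d in range(max_disp + 1):
                cost = np.abs(
                    left[y - win:y + win + 1, x - win:x + win + 1] -
                    right[y - win:y + win + 1, x - d - win:x - d + win + 1]
                ).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: a random texture shifted 3 px between the two views.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(8, 16)).astype(float)
right = np.roll(left, -3, axis=1)          # true disparity = 3
d = disparity_sad(left, right, max_disp=5)
print(d[4, 8])   # → 3 in the interior, away from the wrap-around border
```

Given disparity, depth follows from the usual triangulation relation depth = focal_length * baseline / disparity, using the calibration parameters the abstract mentions.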

  13. Human machine interface by using stereo-based depth extraction

    NASA Astrophysics Data System (ADS)

    Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan

    2014-03-01

The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibilities of a convincing 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight camera (ToF), can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.
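One common texture-weighted upscaling scheme, joint bilateral upsampling, can serve as an illustrative stand-in for the edge-weighted energy-minimization routine described above (the filter below and its parameters are not the authors' method): each high-resolution depth value is a weighted mean of nearby low-resolution samples, with weights combining spatial distance and guide-image similarity so that depth edges snap to image edges.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, factor, sigma_s=1.0, sigma_r=10.0):
    """Upsample a low-res depth map using a high-res guide (video) image.
    Assumes low-res sample (ly, lx) is aligned with guide pixel
    (ly*factor, lx*factor)."""
    h, w = guide_hi.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-1, 2):
                for dx in range(-1, 2):
                    ly, lx = y // factor + dy, x // factor + dx
                    if 0 <= ly < depth_lo.shape[0] and 0 <= lx < depth_lo.shape[1]:
                        gy, gx = ly * factor, lx * factor
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        wr = np.exp(-((guide_hi[y, x] - guide_hi[gy, gx]) ** 2)
                                    / (2 * sigma_r ** 2))
                        num += ws * wr * depth_lo[ly, lx]
                        den += ws * wr
            out[y, x] = num / den
    return out

# Guide: a sharp vertical intensity edge; depth: a coarse map of the same scene.
guide = np.zeros((8, 8)); guide[:, 4:] = 100.0
depth_lo = np.array([[1.0, 1.0, 9.0, 9.0]] * 4)    # 4x4 low-res depth
up = joint_bilateral_upsample(depth_lo, guide, factor=2)
print(up[0, 3], up[0, 4])   # depth edge lands exactly on the guide edge
```

The temporal-consistency constraint the paper adds would extend these weights over neighboring frames as well, penalizing depth changes where the video is static.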

  14. Stereo matching based on census transformation of image gradients

    NASA Astrophysics Data System (ADS)

    Stentoumis, C.; Grammatikopoulos, L.; Kalisperakis, I.; Karras, G.; Petsa, E.

    2015-05-01

Although multiple-view matching provides certain significant advantages regarding accuracy, occlusion handling and radiometric fidelity, stereo-matching remains indispensable for a variety of applications; these involve cases where image acquisition requires fixed geometry, a limited number of images, or speed. Such instances include robotics, autonomous navigation, reconstruction from a limited number of aerial/satellite images, industrial inspection, and augmented reality on smartphones. As a consequence, stereo-matching is a continuously evolving research field with a growing variety of applicable scenarios. In this work a novel multi-purpose cost for stereo-matching is proposed, based on the census transformation of image gradients and evaluated within a local matching scheme. It is demonstrated that when the census transformation is applied to gradients, the invariance of the cost function to (non-linear) changes in illumination is significantly strengthened. The calculated cost values are aggregated over adaptive support regions, based both on cross-skeletons and on basic rectangular windows. The matching algorithm's parameters are tuned for each case. The described matching cost has been evaluated on the Middlebury 2006 stereo-vision datasets, which include changes in illumination and exposure. The tests verify that the census transformation of image gradients indeed results in a more robust cost function, regardless of aggregation strategy.
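A minimal sketch of this kind of cost, assuming a 3x3 census window and simple central-difference gradients (both assumptions of the example, not taken from the paper): the census transform records, per pixel, which neighbors exceed the center, and the matching cost is the Hamming distance between signatures. Because a gain/offset illumination change preserves the ordering of gradient values, the cost is unaffected by it.

```python
import numpy as np

def grad_x(img):
    """Horizontal gradient by central differences (zeros at the borders)."""
    g = np.zeros_like(img, dtype=float)
    g[:, 1:-1] = img[:, 2:] - img[:, :-2]
    return g

def census3(img, y, x):
    """3x3 census signature at (y, x): bit = 1 where a neighbor exceeds the center."""
    win = img[y - 1:y + 2, x - 1:x + 2]
    return (win > img[y, x]).ravel()

def census_cost(gl, gr, y, xl, xr):
    """Hamming distance between census signatures of the two gradient images."""
    return int(np.count_nonzero(census3(gl, y, xl) != census3(gr, y, xr)))

# Synthetic rectified pair: the right view is the left shifted by 2 px,
# with a gain/offset illumination change that census matching ignores.
row = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5], dtype=float)
left = np.tile(row, (5, 1))
right = np.roll(left, -2, axis=1) * 2.0 + 30.0     # true disparity = 2

gl, gr = grad_x(left), grad_x(right)
y, xl = 2, 5
costs = {d: census_cost(gl, gr, y, xl, xl - d) for d in range(4)}
best = min(costs, key=costs.get)
print(best)   # → 2: the true shift has the lowest census cost
```

In a full matcher these per-pixel costs would be aggregated over the paper's adaptive support regions before the winning disparity is selected.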

  15. The zone of comfort: Predicting visual discomfort with stereo displays.

    PubMed

    Shibata, Takashi; Kim, Joohwan; Hoffman, David M; Banks, Martin S

    2011-07-21

    Recent increased usage of stereo displays has been accompanied by public concern about potential adverse effects associated with prolonged viewing of stereo imagery. There are numerous potential sources of adverse effects, but we focused on how vergence-accommodation conflicts in stereo displays affect visual discomfort and fatigue. In one experiment, we examined the effect of viewing distance on discomfort and fatigue. We found that conflicts of a given dioptric value were slightly less comfortable at far than at near distance. In a second experiment, we examined the effect of the sign of the vergence-accommodation conflict on discomfort and fatigue. We found that negative conflicts (stereo content behind the screen) are less comfortable at far distances and that positive conflicts (content in front of screen) are less comfortable at near distances. In a third experiment, we measured phoria and the zone of clear single binocular vision, which are clinical measurements commonly associated with correcting refractive error. Those measurements predicted susceptibility to discomfort in the first two experiments. We discuss the relevance of these findings for a wide variety of situations including the viewing of mobile devices, desktop displays, television, and cinema.
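The vergence-accommodation conflict studied here is naturally expressed in diopters, the difference between the reciprocals of the simulated content distance and the physical screen distance. A small sketch (the sign convention is assumed for the example, chosen to match the abstract's usage of negative/positive conflicts):

```python
def va_conflict(screen_m, content_m):
    """Signed vergence-accommodation conflict in diopters (1/m).
    Accommodation is driven by the physical screen distance, vergence by the
    simulated (disparity-specified) content distance. Assumed convention:
    positive = content in front of the screen, negative = behind it."""
    return 1.0 / content_m - 1.0 / screen_m

print(va_conflict(0.5, 0.25))   # near screen, content in front: +2.0 D
print(va_conflict(10.0, 20.0))  # far screen, content behind:    -0.05 D
```

Note how the same metric offset behind a far screen produces a much smaller dioptric conflict, which is why comfort zones are usually specified in diopters rather than meters.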

  16. The zone of comfort: Predicting visual discomfort with stereo displays

    PubMed Central

    Shibata, Takashi; Kim, Joohwan; Hoffman, David M.; Banks, Martin S.

    2012-01-01

    Recent increased usage of stereo displays has been accompanied by public concern about potential adverse effects associated with prolonged viewing of stereo imagery. There are numerous potential sources of adverse effects, but we focused on how vergence–accommodation conflicts in stereo displays affect visual discomfort and fatigue. In one experiment, we examined the effect of viewing distance on discomfort and fatigue. We found that conflicts of a given dioptric value were slightly less comfortable at far than at near distance. In a second experiment, we examined the effect of the sign of the vergence–accommodation conflict on discomfort and fatigue. We found that negative conflicts (stereo content behind the screen) are less comfortable at far distances and that positive conflicts (content in front of screen) are less comfortable at near distances. In a third experiment, we measured phoria and the zone of clear single binocular vision, which are clinical measurements commonly associated with correcting refractive error. Those measurements predicted susceptibility to discomfort in the first two experiments. We discuss the relevance of these findings for a wide variety of situations including the viewing of mobile devices, desktop displays, television, and cinema. PMID:21778252

  17. Altered Vision-Related Resting-State Activity in Pituitary Adenoma Patients with Visual Damage

    PubMed Central

    Qian, Haiyan; Wang, Xingchao; Wang, Zhongyan; Wang, Zhenmin; Liu, Pinan

    2016-01-01

Objective To investigate changes of vision-related resting-state activity in pituitary adenoma (PA) patients with visual damage through comparison to healthy controls (HCs). Methods 25 PA patients with visual damage and 25 age- and sex-matched corrected-to-normal-vision HCs underwent a complete neuro-ophthalmologic evaluation, including automated perimetry, fundus examinations, and a magnetic resonance imaging (MRI) protocol, including structural and resting-state fMRI (RS-fMRI) sequences. The regional homogeneity (ReHo) of the vision-related cortex and the functional connectivity (FC) of 6 seeds within the visual cortex (the primary visual cortex (V1), the secondary visual cortex (V2), and the middle temporal visual cortex (MT+)) were evaluated. Two-sample t-tests were conducted to identify the differences between the two groups. Results Compared with the HCs, the PA group exhibited reduced ReHo in the bilateral V1, V2, V3, fusiform, MT+, BA37, thalamus, postcentral gyrus and left precentral gyrus and increased ReHo in the precuneus, prefrontal cortex, posterior cingulate cortex (PCC), anterior cingulate cortex (ACC), insula, supramarginal gyrus (SMG), and putamen. Compared with the HCs, V1, V2, and MT+ in the PAs exhibited decreased FC with the V1, V2, MT+, fusiform, BA37, and increased FC primarily in the bilateral temporal lobe (especially BA20,21,22), prefrontal cortex, PCC, insular, angular gyrus, ACC, pre-SMA, SMG, hippocampal formation, caudate and putamen. It is worth mentioning that compared with HCs, V1 in PAs exhibited decreased or similar FC with the thalamus, whereas V2 and MT+ exhibited increased FCs with the thalamus, especially pulvinar. Conclusions In our study, we identified significant neural reorganization in the vision-related cortex of PA patients with visual damage compared with HCs. Most subareas within the visual cortex exhibited remarkable neural dysfunction. Some subareas, including the MT+ and V2, exhibited enhanced FC with the thalamic

  18. Recognition of Activities of Daily Living with Egocentric Vision: A Review

    PubMed Central

    Nguyen, Thi-Hoa-Cuc; Nebel, Jean-Christophe; Florez-Revuelta, Francisco

    2016-01-01

    Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people. However, current systems based on cameras located in the environment present a number of problems, such as occlusions and a limited field of view. Recently, wearable cameras have begun to be exploited. This paper presents a review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame. The current egocentric vision literature suggests that ADLs recognition is mainly driven by the objects present in the scene, especially those associated with specific tasks. However, although object-based approaches have proven popular, object recognition remains a challenge due to the intra-class variations found in unconstrained scenarios. As a consequence, the performance of current systems is far from satisfactory. PMID:26751452

  19. Evolution of activity patterns and chromatic vision in primates: morphometrics, genetics and cladistics.

    PubMed

    Heesy, C P; Ross, C F

    2001-02-01

Hypotheses for the adaptive origin of primates have reconstructed nocturnality as the primitive activity pattern for the entire order based on functional/adaptive interpretations of the relative size and orientation of the orbits, body size and dietary reconstruction. Based on comparative data from extant taxa this reconstruction implies that basal primates were also solitary, faunivorous, and arboreal. Recently, primates have been hypothesized to be primitively diurnal, based in part on the distribution of color-sensitive photoreceptor opsin genes and active trichromatic color vision in several extant strepsirrhines, as well as anthropoid primates (Tan & Li, 1999, Nature 402, 36; Li, 2000, Am. J. Phys. Anthropol. Suppl. 30, 318). If diurnality is primitive for all primates then the functional and adaptive significance of aspects of strepsirrhine retinal morphology and other adaptations of the primate visual system such as high acuity stereopsis, have been misinterpreted for decades. This hypothesis also implies that nocturnality evolved numerous times in primates. However, the hypothesis that primates are primitively diurnal has not been analyzed in a phylogenetic context, nor have the activity patterns of several fossil primates been considered. This study investigated the evolution of activity patterns and trichromacy in primates using a new method for reconstructing activity patterns in fragmentary fossils and by reconstructing visual system character evolution at key ancestral nodes of primate higher taxa. Results support previous studies that reconstruct omomyiform primates as nocturnal. The larger body sizes of adapiform primates confound inferences regarding activity pattern evolution in this group. The hypothesis of diurnality and trichromacy as primitive for primates is not supported by the phylogenetic data. On the contrary, nocturnality and dichromatic vision are not only primitive for all primates, but also for extant strepsirrhines. Diurnality, and

  20. Analysis and design of stereoscopic display in stereo television endoscope system

    NASA Astrophysics Data System (ADS)

    Feng, Dawei

    2008-12-01

Many 3D displays have been proposed for medical use. When designing and evaluating a new system, surgeons make three demands: the first priority is precision; second, displayed images should be easy to understand; in addition, because surgery lasts for hours, the display must not be fatiguing. The stereo television endoscope investigated in this paper images the celiac viscera onto the photosensitive surfaces of the left and right CCDs, imitating the human binocular stereo vision effect by means of a dual optical path. The left and right video signals are processed by frequency multiplication and displayed on the monitor, and the observer perceives a stereo image with depth by using a polarized LCD screen and a pair of polarized glasses. Clinical experiments show that the stereo TV endoscope can make minimally invasive surgery safer and more reliable, shorten operation time, and improve operation accuracy.

  1. Robust Photo-Topography by Fusing Shape-from-Shading and Stereo

    DTIC Science & Technology

    1993-02-01

... and shape-from-shading) and would be applicable to images with similar lighting. Another related paper is the shape-from-shading and stereo fusion ... visual images. 1.2.2 Relation of this thesis to vision fusion schemes: The computer vision literature is roughly segmented along module boundaries.

  2. Vision-based localization in urban environments

    NASA Astrophysics Data System (ADS)

    McHenry, Michael; Cheng, Yang; Matthies, Larry

    2005-05-01

As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory has developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system consist of: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location and 3) the video streams produced by the stereo pair. At each step during the traverse the system: captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we have derived the a priori map manually using non-orthorectified overhead imagery, although this process could be automated. The software system consists of three primary components. The first is a stereo-based visual odometry system that calculates the 6-degree-of-freedom camera motion between sequential frames. The second component uses a set of heuristics to identify straight-line segments that are likely to be part of a building exterior. Ranging to these straight-line features is computed using binocular or wide-baseline stereo. The resulting features and the associated range measurements are fed to the third software component, a particle-filter based localization system. This system uses the map and the most recent results from the first two to update the estimate of the robot's location. This report summarizes the design of both the hardware and software and describes the results of applying the system to the global localization of a camera system over an approximately half-kilometer traverse across JPL
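The particle-filter update at the core of such a localization system can be sketched in one dimension. This is a toy model, not JPL's system: a single known landmark stands in for the building map, odometry for the visual-odometry motion estimate, and the range reading is noiseless.

```python
import math
import random

def particle_filter_step(particles, control, z, beacon, noise=0.5):
    """One predict-weight-resample cycle of a 1-D particle filter.
    particles: candidate robot positions; control: odometry-style motion
    estimate; z: measured range to a known map landmark (beacon)."""
    # Predict: propagate each particle through the motion model plus noise.
    moved = [p + control + random.gauss(0.0, noise) for p in particles]
    # Weight: score each particle by how well it explains the range reading.
    weights = [math.exp(-((abs(beacon - p) - z) ** 2) / (2 * noise ** 2))
               for p in moved]
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(1)
beacon = 10.0                      # known landmark position from the map
true_pos = 2.0
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(5):                 # the robot drives +1 per step
    true_pos += 1.0
    z = abs(beacon - true_pos)     # noiseless range reading, for the sketch
    particles = particle_filter_step(particles, 1.0, z, beacon)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))          # converges near the true position, 7.0
```

Because the particle set is a full distribution rather than a single estimate, it can represent the multiple simultaneous location hypotheses the abstract describes until measurements disambiguate them.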

  3. Phobos in Stereo

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter took two images of the larger of Mars' two moons, Phobos, within 10 minutes of each other on March 23, 2008. This view combines the two images. Because the two were taken at slightly different viewing angles, this provides a three-dimensional effect when seen through red-blue glasses (red on left eye).

    The illuminated part of Phobos seen here is about 21 kilometers (13 miles) across. The most prominent feature is the large crater Stickney at the bottom of the image. With a diameter of 9 kilometers (5.6 miles), it is the largest feature on Phobos. A series of troughs and crater chains is obvious on other parts of the moon. Although many appear radial to Stickney in this image, recent studies from the European Space Agency's Mars Express orbiter indicate that they are not related to Stickney. Instead, they may have formed when material ejected from impacts on Mars later collided with Phobos. The lineated textures on the walls of Stickney and other large craters are landslides formed from materials falling into the crater interiors in the weak Phobos gravity (less than one one-thousandth of the gravity on Earth).

    This stereo view combines images in the HiRISE catalog as PSP_007769_9010 (in red here) and PSP_007769_9015 (in blue here).

    NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The High Resolution Imaging Science Experiment is operated by the University of Arizona, Tucson, and the instrument was built by Ball Aerospace & Technologies Corp., Boulder, Colo.

  4. A binocular stereo approach to AR/C at the Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Smith, Alan T.

    1991-01-01

    Automated Rendezvous and Capture requires the determination of the 6 DOF relating two free bodies. Sensor systems that can provide such information have varying sizes, weights, power requirements, complexities, and accuracies. One type of sensor system that can provide several key advantages is a binocular stereo vision system.

  5. Screening and sampling in studies of binocular vision.

    PubMed

    Heron, Suzanne; Lages, Martin

    2012-06-01

    Binocular deficits are relatively common within a typical sample of observers. This has implications for research on binocular vision, as a variety of stereo deficits can affect performance. Despite this, there is no agreed standard for testing stereo capabilities in observers, and many studies do not report visual abilities at all. Within the stereo literature, failure to report screening and sampling has the potential to undermine the results of otherwise strictly controlled research. We reviewed research articles on binocular vision published in three journals between 2000 and 2008 to illustrate how screening for binocular deficits and sampling of participants is approached. Our results reveal that 44% of the studies do not mention screening for stereo deficits and 91% do not report selection of participants. Studies that report stereo screening excluded 3.9% of participants, compared with 0.7% for studies that do not report stereo screening. These low numbers contrast with the exclusion of 17.6% of participants in studies that report both screening for binocular deficits and selection of participants. We discuss various options for stereo testing and the need for stereo-motion testing with reference to recent research on binocular perception.

  6. Photometric invariant stereo matching method.

    PubMed

    Gu, Feifei; Zhao, Hong; Zhou, Xiang; Li, Jinjun; Bu, Penghui; Zhao, Zixin

    2015-12-14

    A robust stereo matching method based on a comprehensive mathematical model of the color formation process is proposed to estimate the disparity map of stereo images with noise and photometric variations. A band-pass filter with a DoP kernel is first used to filter out the noise component of the stereo images. Then a log-chromaticity normalization process is applied to eliminate the influence of lighting geometry. All other factors that may influence the color formation process are removed through the disparity estimation process with a specific matching cost. Performance of the developed method is evaluated by comparison with several state-of-the-art algorithms. Experimental results are presented to demonstrate the robustness and accuracy of the method.
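
    The log-chromaticity normalization step mentioned above can be sketched in a generic form: dividing each channel by the geometric mean of the three channels cancels the shading/intensity factor common to all of them. This is an illustration of the general idea, not the paper's exact formulation.

```python
import numpy as np

def log_chromaticity(rgb, eps=1e-6):
    """Log-chromaticity normalization: dividing each channel by the
    geometric mean of the channels cancels the shading/intensity factor
    they share, leaving an illumination-geometry-invariant quantity."""
    rgb = rgb.astype(np.float64) + eps
    geo_mean = np.cbrt(rgb[..., 0] * rgb[..., 1] * rgb[..., 2])
    return np.log(rgb / geo_mean[..., None])

# A pixel and the same pixel under 3x brighter shading map to (nearly)
# the same log-chromaticity vector.
p = np.array([[[60.0, 120.0, 30.0]]])
q = 3.0 * p
print(np.allclose(log_chromaticity(p), log_chromaticity(q), atol=1e-4))  # -> True
```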

  7. How to Read a NASA STEREO Image

    NASA Video Gallery

    NASA’s STEREO mission observed a coronal mass ejection on July 23, 2012 – one of the fastest CMEs on record. The video uses STEREO imagery from this rare event to describe features to pay attention...

  8. #1 Stereo Orbit - Launch to Feb 2011

    NASA Video Gallery

    The STEREO mission consists of two spacecraft orbiting the Sun, one moving a bit faster than Earth and the other a bit slower. In the time since the STEREO spacecraft entered these orbits near the ...

  9. A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1993-01-01

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the

  10. Optical stereo video signal processor

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected on respective liquid crystal light valves. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
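
    The optical correlator's principle can be imitated digitally: slide a left-image patch along the right image and take the shift with the highest normalized cross-correlation as the disparity. The sketch below is a toy illustration of that idea, not the patented optical apparatus; all data and names are made up.

```python
import numpy as np

def disparity_by_correlation(left_patch, right_row, max_shift):
    """Digital analogue of the optical correlator: slide the left patch
    along the right image band and return the shift with the highest
    normalized cross-correlation; that shift is the disparity."""
    l = (left_patch - left_patch.mean()) / (left_patch.std() + 1e-9)
    h, w = left_patch.shape
    best_shift, best_score = 0, -np.inf
    for s in range(max_shift + 1):
        r = right_row[:, s:s + w]
        if r.shape[1] < w:
            break
        rn = (r - r.mean()) / (r.std() + 1e-9)
        score = (l * rn).mean()
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

rng = np.random.default_rng(0)
scene = rng.random((8, 40))
true_d = 5
left = scene[:, 10:20]           # patch seen by the left camera
right = scene[:, 10 - true_d:]   # right view shifted by the disparity
print(disparity_by_correlation(left, right, max_shift=8))  # -> 5
```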

  11. Easy Projection of Stereo Movies.

    DTIC Science & Technology

    1986-05-01

    stereo films. This apparatus is easily portable and has been tested over the past few years with a large variety of commercial movie projectors. ... transparency of even black frames of film in the infrared, the unit remains synchronized throughout the movie. ... The voltage required for the PLZT wafer is ... [Easy Projection of Stereo Movies (U), Univ. of California San Diego, La Jolla, Dept. of Chemistry, N. Bartlett et al., 1 May 1986]

  12. Viking orbiter stereo imaging catalog

    NASA Technical Reports Server (NTRS)

    Blasius, K. R.; Vetrone, A. V.; Martin, M. D.

    1980-01-01

    The extremely long missions of the two Viking Orbiter spacecraft produced a wealth of photos of surface features, many of which can be used to form stereo images allowing the earth-bound student of Mars to examine the subject in 3-D. This catalog is a technical guide to the use of stereo coverage within the complex Viking imaging data set. Since that data set is still growing (as of January 1980, about 3 1/2 years after the mission began), a second edition of this catalog is planned, with completion expected about November 1980.

  13. Real time swallowing measurement system by using photometric stereo

    NASA Astrophysics Data System (ADS)

    Fujino, Masahiro; Kato, Kunihito; Mura, Emi; Nagai, Hajime

    2015-04-01

    In this paper, we propose a measurement system to evaluate swallowing by estimating the movement of the thyroid cartilage. We developed the system around a vision sensor in order to achieve noncontact, non-invasive measurement. The movement of the subject's thyroid cartilage is tracked through the three-dimensional shape of the skin surface measured by photometric stereo. We constructed a camera system that uses near-IR light sources and three camera sensors. We confirmed the effectiveness of the proposed system through experiments.

  14. Key characteristics of specular stereo.

    PubMed

    Muryy, Alexander A; Fleming, Roland W; Welchman, Andrew E

    2014-12-24

    Because specular reflection is view-dependent, shiny surfaces behave radically differently from matte, textured surfaces when viewed with two eyes. As a result, specular reflections pose substantial problems for binocular stereopsis. Here we use a combination of computer graphics and geometrical analysis to characterize the key respects in which specular stereo differs from standard stereo, and to identify how and why the human visual system fails to reconstruct depths correctly from specular reflections. We describe rendering of stereoscopic images of specular surfaces in which the disparity information can be varied parametrically and independently of monocular appearance. Using the generated surfaces and images, we explain how stereo correspondence can be established with known and unknown surface geometry. We show that even with known geometry, stereo matching for specular surfaces is nontrivial because points in one eye may have zero, one, or multiple matches in the other eye. Matching features typically yield skew (nonintersecting) rays, leading to substantial ortho-epipolar components to the disparities, which makes deriving depth values from matches nontrivial. We suggest that the human visual system may base its depth estimates solely on the epipolar components of disparities while treating the ortho-epipolar components as a measure of the underlying reliability of the disparity signals. Reconstructing virtual surfaces according to these principles reveals that they are piecewise smooth with very large discontinuities close to inflection points on the physical surface. Together, these distinctive characteristics lead to cues that the visual system could use to diagnose specular reflections from binocular information.
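
    The decomposition the abstract proposes (epipolar component for depth, ortho-epipolar component as a reliability signal) is plain vector algebra. A minimal sketch, assuming the epipolar direction is known:

```python
import numpy as np

def split_disparity(disparity, epipolar_dir):
    """Split a 2D disparity vector into its component along the epipolar
    line (used for depth) and the perpendicular ortho-epipolar residual
    (treated, following the abstract, as a reliability signal)."""
    e = np.asarray(epipolar_dir, float)
    e = e / np.linalg.norm(e)
    d = np.asarray(disparity, float)
    epi = float(d @ e)      # signed magnitude along the epipolar direction
    ortho = d - epi * e     # residual perpendicular to it
    return epi, float(np.linalg.norm(ortho))

# Horizontal epipolar lines: a (3, 4) disparity has epipolar part 3
# and ortho-epipolar magnitude 4.
epi, ortho = split_disparity((3.0, 4.0), (1.0, 0.0))
print(epi, ortho)  # -> 3.0 4.0
```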

  15. Evaluating the performance of close-range 3D active vision systems for industrial design applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Gaiani, Marco

    2004-12-01

    In recent years, active three-dimensional (3D) vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but these technologies still pose a number of questions to new users: what are they, how good are they, and how do they compare? The need to understand, test and integrate these range cameras with other technologies, e.g. photogrammetry and CAD, is driven by the quest for optimal resolution, accuracy, speed and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. An understanding of the basic theory and best practices associated with these cameras is therefore fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or more of them to fulfill the needs of industrial design applications. In particular, the effects of object material and surface features, calibration, and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.

  16. Evaluating the performance of close-range 3D active vision systems for industrial design applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Gaiani, Marco

    2005-01-01

    In recent years, active three-dimensional (3D) vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but these technologies still pose a number of questions to new users: what are they, how good are they, and how do they compare? The need to understand, test and integrate these range cameras with other technologies, e.g. photogrammetry and CAD, is driven by the quest for optimal resolution, accuracy, speed and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. An understanding of the basic theory and best practices associated with these cameras is therefore fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or more of them to fulfill the needs of industrial design applications. In particular, the effects of object material and surface features, calibration, and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.

  17. Solving the interface problem for Windows stereo applications

    NASA Astrophysics Data System (ADS)

    Halnon, Jeff; Milici, Dave

    1998-04-01

    The most common type of electronic stereoscopic viewing device available is LC (liquid crystal) shutter glasses, such as CrystalEyes made by StereoGraphics Corp. This type of stereo glasses works by alternating each eye's shutter in sync with a left or right display field. In order to support this technology on PCs, StereoGraphics has been actively working with hardware display vendors, software developers, and VESA (Video Electronics Standards Association) to establish standard stereoscopic display interfaces. With Microsoft licensing OpenGL for Windows NT systems and developing their own DirectX software architecture for Windows 9x, a variety of 3D accelerator boards are now available with 3D rendering capabilities that were previously only available on proprietary graphics workstations. Some of these graphics controllers contain stereoscopic display support for automatic page-flipping of left/right images. The paper describes the low-level stereoscopic display support included in VESA BIOS Extension Version 3 (VBE 3.0), the VESA standard stereoscopic interface connector, the GL_STEREO quad-buffer model specified in OpenGL v1.1, and a proposal for a FlipStereo() API extension to the Microsoft DirectX specification.

  18. A fuzzy structural matching scheme for space robotics vision

    NASA Technical Reports Server (NTRS)

    Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka

    1994-01-01

    In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of regions of images and effectively reduces the computational burden in the following low level matching process. Three dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.

  19. Associations between platelet monoamine oxidase-B activity and acquired colour vision loss in a fish-eating population.

    PubMed

    Stamler, Christopher John; Mergler, Donna; Abdelouahab, Nadia; Vanier, Claire; Chan, Hing Man

    2006-01-01

    Platelet monoamine oxidase-B (MAO-B) has been considered a surrogate biochemical marker of neurotoxicity, as it may reflect changes in the monoaminergic system in the brain. Colour vision discrimination, in part a dopamine dependent process, has been used to identify early neurological effects of some environmental and industrial neurotoxicants. The objective of this cross-sectional study was to explore the relationship between platelet MAO-B activity and acquired colour discrimination capacity in fish-consumers from the St. Lawrence River region of Canada. Assessment of acquired dyschromatopsia was determined using the Lanthony D-15 desaturated panel test. Participants classified with dyschromatopsia (n=81) had significantly lower MAO-B activity when compared to those with normal colour vision (n=32) (26.5+/-9.6 versus 31.0+/-9.9 nmol/min/20 microg, P=0.030). Similarly, Bowman's Colour Confusion Index (CCI) was inversely correlated with MAO-B activity when the vision test was performed with the worst eye only (r=-0.245, P=0.009), the best eye only (r=-0.188, P=0.048) and with both eyes together (r=-0.309, P=0.001). Associations remained significant after adjustment for age and gender when both eyes (P=0.003) and the worst eye (P=0.045) were tested. Adjustment for heavy smoking weakened the association between MAO-B and CCI in the worst eye (P=0.140), but did not alter this association for both eyes (P=0.006). Adjustment for blood-mercury concentrations did not change the association. This study suggests a relationship between reduced MAO-B activity and acquired colour vision loss, both of which are associated with tobacco smoking. Therefore, results show that platelet MAO-B may be used as a surrogate biochemical marker of acquired colour vision loss.

  20. Calibration of stereo rigs based on the backward projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin

    2016-08-01

    High-accuracy 3D measurement based on binocular vision system is heavily dependent on the accurate calibration of two rigidly-fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee the minimal 2D pixel errors, but not the minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera by planar constraints provided by the planar pattern target. Then combined with pre-defined spatial points, intrinsic and extrinsic parameters of the stereo-rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study for the method in the presence of image noise and lens distortions is implemented. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.
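
    The contrast between minimizing 2D pixel error and minimizing 3D reconstruction error can be illustrated with a toy objective in the spirit of the backward-projection idea: triangulate each observed point pair and sum the 3D distances to the known target points. The DLT triangulation and pinhole cameras below are generic textbook constructions, not the paper's implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two 3x4 camera matrices."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def total_3d_error(P1, P2, pixels1, pixels2, targets):
    """BPP-style objective: total 3D distance between triangulated points
    and known target points (rather than 2D reprojection error)."""
    return sum(np.linalg.norm(triangulate(P1, P2, x1, x2) - X)
               for x1, x2, X in zip(pixels1, pixels2, targets))

# Toy rig: two ideal pinhole cameras with a 1-unit baseline.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.2, -0.1, 5.0])
x1 = P1 @ np.append(X, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X, 1.0); x2 = x2[:2] / x2[2]
print(round(total_3d_error(P1, P2, [x1], [x2], [X]), 6))  # -> 0.0
```

    In a full calibration, the intrinsic and extrinsic parameters would be the variables of an optimizer driving this objective toward zero.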

  1. Epipolar geometry of opti-acoustic stereo imaging.

    PubMed

    Negahdaripour, Shahriar

    2007-10-01

    Optical and acoustic cameras are suitable imaging systems to inspect underwater structures, both in regular maintenance and security operations. Despite high resolution, optical systems have limited visibility range when deployed in turbid waters. In contrast, the new generation of high-frequency (MHz) acoustic cameras can provide images with enhanced target details in highly turbid waters, though their range is reduced by one to two orders of magnitude compared to traditional low-/midfrequency (tens to hundreds of kHz) sonar systems. It is conceivable that an effective inspection strategy is the deployment of both optical and acoustic cameras on a submersible platform, to enable target imaging in a range of turbidity conditions. Under this scenario and where visibility allows, registration of the images from both cameras arranged in binocular stereo configuration provides valuable scene information that cannot be readily recovered from each sensor alone. We explore and derive the constraint equations for the epipolar geometry and stereo triangulation in utilizing these two sensing modalities with different projection models. Theoretical results supported by computer simulations show that an opti-acoustic stereo imaging system outperforms traditional binocular vision with optical cameras, particularly for increasing target distance and/or turbidity.
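
    What makes opti-acoustic triangulation different from two-ray intersection is the complementary projection models: the optical camera constrains bearing, while the acoustic camera constrains range. A hedged 2D toy sketch of that intersection (not the paper's derivation; geometry and names are illustrative):

```python
import math

def opti_acoustic_fix(bearing_rad, rng_m, baseline_m):
    """Toy 2D opti-acoustic triangulation: the optical camera (at the
    origin) gives only a bearing; the acoustic camera (at x = baseline)
    gives only a range. Intersect the optical ray with the acoustic
    range circle, taking the farther of the two roots."""
    ux, uy = math.cos(bearing_rad), math.sin(bearing_rad)
    b, r = baseline_m, rng_m
    # |t*u - (b, 0)|^2 = r^2  ->  t^2 - 2*b*ux*t + (b^2 - r^2) = 0
    disc = (b * ux) ** 2 - (b * b - r * r)
    t = b * ux + math.sqrt(disc)
    return t * ux, t * uy

# Target at (3, 4): bearing from the optical camera is atan2(4, 3);
# range from the acoustic camera at (1, 0) is sqrt(2^2 + 4^2).
x, y = opti_acoustic_fix(math.atan2(4, 3), math.sqrt(20), 1.0)
print(round(x, 6), round(y, 6))  # -> 3.0 4.0
```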

  2. Computing 3-D structure of rigid objects using stereo and motion

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1987-01-01

    Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two clues considered in this system. The system basically consists of three processes: the low level process to extract image features, the middle level process to establish the correspondence in the stereo (spatial) and motion (temporal) modalities, and the high level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.

  3. Image/video understanding systems based on network-symbolic models and active vision

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-07-01

    Vision is the part of an information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. It is hard to split the entire system apart, and vision mechanisms cannot be completely understood separately from the informational processes related to knowledge and intelligence. The brain reduces informational and computational complexity using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Vision is a component of situation awareness, motion and planning systems. Foveal vision provides semantic analysis, recognizing objects in the scene. Peripheral vision guides the fovea to salient objects and provides scene context. The biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding precise artificial computations of 3-D models. Network-Symbolic transformations derive more abstract structures that allow for invariant recognition of an object as an exemplar of a class and for reliable identification even if the object is occluded. Systems with such smart vision will be able to navigate in real environments and understand real-world situations.

  4. Solid state active/passive night vision imager using continuous-wave laser diodes and silicon focal plane arrays

    NASA Astrophysics Data System (ADS)

    Vollmerhausen, Richard H.

    2013-04-01

    Passive imaging offers covertness and low power, while active imaging provides longer range target acquisition without the need for natural or external illumination. This paper describes a focal plane array (FPA) concept that has the low noise needed for state-of-the-art passive imaging and the high-speed gating needed for active imaging. The FPA is used with highly efficient but low-peak-power laser diodes to create a night vision imager that has the size, weight, and power attributes suitable for man-portable applications. Video output is provided in both the active and passive modes. In addition, the active mode is Class 1 eye safe and is not visible to the naked eye or to night vision goggles.

  5. Statistical Building Roof Reconstruction from WORLDVIEW-2 Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Partovi, T.; Huang, H.; Krauß, T.; Mayer, H.; Reinartz, P.

    2015-03-01

    3D building reconstruction from point clouds is an active research topic in remote sensing, photogrammetry and computer vision. Most prior research has been done on 3D building reconstruction from LiDAR data, i.e., high-resolution, dense data. The interest of this work is 3D building reconstruction from Digital Surface Models (DSM) derived by stereo image matching of spaceborne satellite data, which cover larger areas than LiDAR datasets in one acquisition step and can also be used for remote regions. The challenging problem is the noise in this data caused by low resolution and matching errors. In this paper, a combined top-down and bottom-up method is developed to find building roof models that exhibit the optimum fit to the point clouds of the DSM. In the bottom-up step of this hybrid method, the building mask and roof components such as ridge lines are extracted. In addition, to reduce the computational complexity and search space, roofs are classified into pitched and flat roofs. Ridge lines are utilized to estimate roof primitive parameters from a building library, such as width, length, position and orientation. Thereafter, a top-down approach based on Markov Chain Monte Carlo and simulated annealing is applied to optimize the roof parameters in an iterative manner by stochastic sampling, minimizing the average Euclidean distance between the point cloud and the model surface as the fitness function. Experiments are performed on two areas of the city of Munich which include three roof types (hipped, gable and flat roofs). The results show the efficiency of this method even for this type of noisy dataset.
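
    The optimization step can be sketched with a minimal simulated-annealing loop driven by the paper's fitness measure (average point-to-surface distance). The flat-roof toy model and every parameter value below are illustrative assumptions, not the authors' configuration.

```python
import math
import random

def simulated_annealing(fitness, x0, step=0.5, t0=1.0, cooling=0.95,
                        iters=300, seed=1):
    """Minimal simulated annealing: perturb the parameter, always accept
    improvements, accept worse moves with probability exp(-delta / T),
    and cool T geometrically."""
    rng = random.Random(seed)
    x, fx, t = x0, fitness(x0), t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = fitness(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
        t *= cooling
    return x

# Toy "roof": DSM points scattered around a flat roof at height 10.
# Fitness is the average distance from points to the model surface,
# as in the paper's objective.
points = [10.0 + 0.3 * ((i % 7) - 3) / 3 for i in range(50)]
fit = lambda h: sum(abs(z - h) for z in points) / len(points)
h = simulated_annealing(fit, x0=5.0)
print(h)
```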

  6. Viewing The Entire Sun With STEREO And SDO

    NASA Astrophysics Data System (ADS)

    Thompson, William T.; Gurman, J. B.; Kucera, T. A.; Howard, R. A.; Vourlidas, A.; Wuelser, J.; Pesnell, D.

    2011-05-01

    On 6 February 2011, the two Solar Terrestrial Relations Observatory (STEREO) spacecraft were at 180 degrees separation. This allowed the first-ever simultaneous view of the entire Sun. Combining the STEREO data with corresponding images from the Solar Dynamics Observatory (SDO) allows this full-Sun view to continue for the next eight years. We show how the data from the three viewpoints are combined into a single heliographic map. Processing of the STEREO beacon telemetry allows these full-Sun views to be created in near-real-time, allowing tracking of solar activity even on the far side of the Sun. This is a valuable space-weather tool, not only for anticipating activity before it rotates onto the Earth-view, but also for deep space missions in other parts of the solar system. Scientific use of the data includes the ability to continuously track the entire lifecycle of active regions, filaments, coronal holes, and other solar features. There is also a significant public outreach component to this activity. The STEREO Science Center produces products from the three viewpoints used in iPhone/iPad and Android applications, as well as time sequences for spherical projection systems used in museums, such as Science-on-a-Sphere and Magic Planet.

  7. Orbit Determination and Navigation of the Solar Terrestrial Relations Observatory (STEREO)

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael; Robertson, Mika; Ottenstein, Neil; Nicholson, Ann; Nicholson, Mark; Ward, Douglas T.; Cosgrove, Jennifer; German, Darla; Hendry, Stephen; Shaw, James

    2007-01-01

    This paper provides an overview of the required upgrades necessary for navigation of NASA's twin heliocentric science missions, Solar TErrestrial RElations Observatory (STEREO) Ahead and Behind. The orbit determination of the STEREO spacecraft was provided by the NASA Goddard Space Flight Center's (GSFC) Flight Dynamics Facility (FDF) in support of the mission operations activities performed by the Johns Hopkins University Applied Physics Laboratory (APL). The changes to FDF's orbit determination software included modeling upgrades as well as modifications required to process the Deep Space Network X-band tracking data used for STEREO. Orbit results as well as comparisons to independently computed solutions are also included. The successful orbit determination support aided in maneuvering the STEREO spacecraft, launched on October 26, 2006 (00:52 Z), to target the lunar gravity assists required to place the spacecraft into their final heliocentric drift-away orbits where they are providing stereo imaging of the Sun.

  8. Orbit Determination and Navigation of the Solar Terrestrial Relations Observatory (STEREO)

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael A.; Robertson, Mika; Ottenstein, Neil; Nicholson, Ann; Nicholson, Mark; Ward, Douglas T.; Cosgrove, Jennifer; German, Darla; Hendry, Stephen; Shaw, James

    2007-01-01

    This paper provides an overview of the required upgrades necessary for navigation of NASA's twin heliocentric science missions, Solar TErrestrial RElations Observatory (STEREO) Ahead and Behind. The orbit determination of the STEREO spacecraft was provided by the NASA Goddard Space Flight Center's (GSFC) Flight Dynamics Facility (FDF) in support of the mission operations activities performed by the Johns Hopkins University Applied Physics Laboratory (APL). The changes to FDF's orbit determination software included modeling upgrades as well as modifications required to process the Deep Space Network X-band tracking data used for STEREO. Orbit results as well as comparisons to independently computed solutions are also included. The successful orbit determination support aided in maneuvering the STEREO spacecraft, launched on October 26, 2006 (00:52 Z), to target the lunar gravity assists required to place the spacecraft into their final heliocentric drift-away orbits where they are providing stereo imaging of the Sun.

  9. Motion vision for mobile robots

    NASA Astrophysics Data System (ADS)

    Herrb, Matthieu

    This work addresses the problem of using computer vision on mobile robots. Two processing architectures are studied: specialized Datacube cards and a parallel machine built on a transputer network. The tracking and localization of a three-dimensional object in a sequence of images is examined, using first-order prediction of the motion in the image plane and verification by a maximal-clique search in the graph of mutually compatible matchings. A dynamic environment-modeling module, using numerical fusion between trinocular stereovision and tracking of stereo-matched primitives, is presented. The integration of this perception system into the control architecture of a mobile robot is examined to achieve various functions, such as vision-servoed motion and environment modeling. The functional units implementing vision tasks and the data exchanged with other units are outlined. Experiments with the mobile robot Hilare 1.5 validated the proposed algorithms and concepts.

  10. FPGA implementation of glass-free stereo vision

    NASA Astrophysics Data System (ADS)

    Tang, Weidong; Yan, Xiaolin

    2016-04-01

    This paper presents a real-time, efficient glass-free 3D system based on an FPGA. The system converts a two-view input, a 60 frames per second (fps) 1080p stream, into a multi-view video at 30 fps and 4K resolution. In order to provide a smooth and comfortable viewing experience, glass-free 3D systems must display multi-view videos. Generating a multi-view video from a two-view input involves three steps: first, compute disparity maps from the two input views; second, synthesize a set of new views from the computed disparity maps and input views; last, produce the output video from the new views according to the specifications of the lens installed on the TV set.
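
    The second step (synthesizing new views from disparity) is commonly done by disparity-scaled pixel shifting, a simple form of depth-image-based rendering. A naive CPU sketch of that idea, not the FPGA implementation described in the paper:

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Naive depth-image-based rendering: shift each left-image pixel by
    alpha * disparity (alpha=0 reproduces the left view, alpha=1 the
    right view). Disoccluded pixels are left as -1 holes, which a real
    system would fill by inpainting."""
    h, w = left.shape
    out = -np.ones_like(left)
    for y in range(h):
        for x in range(w):
            nx = x - int(round(alpha * disparity[y, x]))
            if 0 <= nx < w:
                out[y, nx] = left[y, x]
    return out

# Constant disparity of 4: the half-way view (alpha=0.5) is the left
# view shifted by 2 pixels.
left = np.tile(np.arange(16.0), (4, 1))
disp = np.full((4, 16), 4.0)
mid = synthesize_view(left, disp, 0.5)
print(mid[0, 0], mid[0, 5])  # -> 2.0 7.0
```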

  11. Why Stereo Vision is Not Always about 3D Reconstruction

    DTIC Science & Technology

    1993-07-01

    ... matching features, then using trigonometry to convert ... it has been assumed that measuring the disparity is trivial, and that solving for the distance ... a simple version of a fixation mechanism, in which the trigger feature is foveated and ... explored in the literature, primarily for obstacle avoidance ...

  12. The analysis on optical property for stereo vision measurement system

    NASA Astrophysics Data System (ADS)

    Liu, Zong-ming; Ye, Dong; Zhang, Yu; Lu, Shan; Cao, Shu-qing

    2016-01-01

    In relative measurement of a space non-cooperative target, analysis of the target's optical properties is one of the premises of sensor design. This article targets GEO satellites. From the perspective of photometry, and based on the blackbody radiation law, we analyze the visible-light energy of the Sun outside the atmosphere, consider the impact of the satellite's multilayer thermal-control insulation, and model the luminosity features as functions of the solar incidence angle and the sensor observation angle. Finally, we obtain the equivalent visual magnitude of the target satellite at the pupil of the camera. Our research can effectively guide the design and development of visible-light relative-measurement sensors.

  13. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
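
    The two processing stages described, isolating the laser spot by differencing images captured before and after illumination, then converting horizontal disparity to range, might look like this in outline (a hedged sketch assuming rectified cameras; the function names and threshold are illustrative, not from the patent):

```python
import numpy as np

def spot_centroid(before, after, thresh=50.0):
    """Isolate the laser spot by eliminating pixels common to the frames
    captured before and after laser illumination, then return the spot
    centroid (x, y), or None if no spot is found."""
    diff = np.abs(after.astype(float) - before.astype(float))
    ys, xs = np.nonzero(diff > thresh)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

def stereo_range(x_left, x_right, focal_px, baseline_m):
    """Stereometric range from the horizontal disparity of the spot in
    rectified left/right images: Z = f * B / d."""
    d = x_left - x_right
    return focal_px * baseline_m / d
```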

  14. Lambda Vision

    NASA Astrophysics Data System (ADS)

    Czajkowski, Michael

    2014-06-01

    There is an explosion in the quantity and quality of IMINT data being captured in Intelligence Surveillance and Reconnaissance (ISR) today. While automated exploitation techniques involving computer vision are arriving, only a few architectures can manage both the storage and bandwidth of large volumes of IMINT data and also present results to analysts quickly. Lockheed Martin Advanced Technology Laboratories (ATL) has been actively researching the application of Big Data cloud-computing techniques to computer vision. This paper presents the results of this work in adopting a Lambda Architecture to process and disseminate IMINT data using computer vision algorithms. The approach embodies an end-to-end solution by processing IMINT data from sensors to serving information products quickly to analysts, independent of the size of the data. The solution lies in dividing the architecture into a speed layer for low-latency processing and a batch layer for higher-quality answers at the expense of time, but in a robust and fault-tolerant way. This approach was evaluated using a large corpus of IMINT data collected by a C-130 Shadow Harvest sensor over Afghanistan from 2010 through 2012. The evaluation data corpus included full motion video from both narrow and wide area fields of view. The evaluation was done on a scaled-out cloud infrastructure that is similar in composition to those found in the Intelligence Community. The paper shows experimental results to prove the scalability of the architecture and the precision of its results using a computer vision algorithm designed to identify man-made objects in sparse data terrain.

  15. Wide baseline stereo matching based on double topological relationship consistency

    NASA Astrophysics Data System (ADS)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

    Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo matching based on a novel scheme called double topological relationship consistency (DCTR). The double topological configuration combines the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only establishes a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, overcoming many problems of traditional methods that depend on powerful invariance to changes in scale, rotation, or illumination across large view changes and even occlusions. Experimental examples are shown where the two cameras are located in very different orientations. Epipolar geometry can also be recovered using RANSAC, by far the most widely adopted method. With this method, we obtain correspondences with high precision on wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on image pairs.

  16. Parallel vision algorithms. Annual technical report No. 2, 1 October 1987-28 December 1988

    SciTech Connect

    Ibrahim, H.A.; Kender, J.R.; Brown, L.G.

    1989-01-01

    This Second Annual Technical Report covers the project activities during the period from October 1, 1987 through December 31, 1988. The objective of this project is to develop and implement, on highly parallel computers, vision algorithms that combine stereo, texture, and multi-resolution techniques for determining local surface orientation and depth. Such algorithms can serve as front-end components of autonomous land-vehicle vision systems. During the second year of the project, efforts concentrated on the following: first, implementing and testing on the Connection Machine the parallel programming environment that will be used to develop, implement and test our parallel vision algorithms; second, implementing and testing primitives for the multi-resolution stereo and texture algorithms, in this environment. Also, efforts were continued to refine techniques used in the texture algorithms, and to develop a system that integrates information from several shape-from-texture methods. This report describes the status and progress of these efforts. The authors describe first the programming environment implementation, and how to use it. They summarize the results for multi-resolution based depth-interpolation algorithms on parallel architectures. Then, they present algorithms and test results for the texture algorithms. Finally, the results of the efforts of integrating information from various shape-from-texture algorithms are presented.

  17. The STEREO/IMPACT Magnetic Field Experiment

    NASA Astrophysics Data System (ADS)

    Acuña, M. H.; Curtis, D.; Scheifele, J. L.; Russell, C. T.; Schroeder, P.; Szabo, A.; Luhmann, J. G.

    2008-04-01

    The magnetometer on the STEREO mission is one of the sensors in the IMPACT instrument suite. A single triaxial, wide-range, low-power, low-noise fluxgate magnetometer of traditional design and reduced volume configuration has been implemented in each spacecraft. The sensors are mounted on the IMPACT telescoping booms at a distance of ˜3 m from the spacecraft body to reduce magnetic contamination. The electronics have been designed as an integral part of the IMPACT Data Processing Unit (IDPU), sharing a common power converter and data/command interfaces. The instruments cover the range ±65,536 nT in two intervals controlled by the IDPU (±512 nT; ±65,536 nT). This very wide range allows operation of the instruments during all phases of the mission, including Earth flybys as well as spacecraft test and integration in the geomagnetic field. The primary STEREO/IMPACT science objectives addressed by the magnetometer are the study of the interplanetary magnetic field (IMF), its response to solar activity, and its relationship to solar wind structure. The instruments were powered on and the booms deployed on November 1, 2006, seven days after the spacecraft were launched, and are operating nominally. A magnetic cleanliness program was implemented to minimize variable spacecraft fields and to ensure that the static spacecraft-generated magnetic field does not interfere with the measurements.

  18. Optics, illumination, and image sensing for machine vision III; Proceedings of the Meeting, Cambridge, MA, Nov. 8, 9, 1988

    SciTech Connect

    Svetkoff, D.J.

    1989-01-01

    Various papers on optics, illumination, and image sensing for machine vision are presented. Topics discussed include: illumination and imaging of moving objects, strobe illumination systems for machine vision, an optical collision timer, a new electrooptical coordinate measurement system, a flexible and piezoresistive touch sensing array, selection of cameras for machine vision, custom fixed-focal-length versus zoom lenses, performance of optimal phase-only filters, minimum variance SDF design using adaptive algorithms, Ho-Kashyap associative processors, component spaces for invariant pattern recognition, grid labeling using a marked grid, an illumination-based model of stochastic textures, color-encoded moire contouring, noise measurement and suppression in active 3-D laser-based imaging systems, structural stereo matching of Laplacian-of-Gaussian contour segments for 3D perception, earth surface recovery from remotely sensed images, and shape from Lambertian photometric flow fields.

  19. How to assess vision.

    PubMed

    Marsden, Janet

    2016-09-21

    Rationale and key points An objective assessment of the patient's vision is important to assess variation from 'normal' vision in acute and community settings, to establish a baseline before examination and treatment in the emergency department, and to assess any changes during ophthalmic outpatient appointments. » Vision is one of the essential senses that permits people to make sense of the world. » Visual assessment does not only involve measuring central visual acuity, it also involves assessing the consequences of reduced vision. » Assessment of vision in children is crucial to identify issues that might affect vision and visual development, and to optimise lifelong vision. » Untreatable loss of vision is not an inevitable consequence of ageing. » Timely and repeated assessment of vision over life can reduce the incidence of falls, prevent injury and optimise independence. Reflective activity 'How to' articles can help update your practice and ensure it remains evidence-based. Apply this article to your practice. Reflect on and write a short account of: 1. How this article might change your practice when assessing people holistically. 2. How you could use this article to educate your colleagues in the assessment of vision.

  20. A realization of semi-global matching stereo algorithm on GPU for real-time application

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Chen, He-ping

    2011-11-01

    Real-time stereo vision systems have many applications, such as automotive and robotics. According to the Middlebury Stereo Database, Semi-Global Matching (SGM) is commonly regarded as the most efficient algorithm among the top-performing stereo algorithms. Until recently, the most effective real-time implementations of this algorithm were based on reconfigurable hardware (FPGA). However, with the development of General-Purpose computation on Graphics Processing Units, an effective real-time implementation on general-purpose PCs can be expected. In this paper, a real-time SGM realization on a Graphics Processing Unit (GPU) is introduced. CUDA, a general-purpose parallel computing architecture introduced by NVIDIA in November 2006, has been used to realize the algorithm. Some important optimizations for CUDA and Fermi (the latest architecture of NVIDIA GPUs at the time of writing) are also introduced in this paper.

  1. Orthographic Stereo Correlator on the Terrain Model for Apollo Metric Images

    NASA Technical Reports Server (NTRS)

    Kim, Taemin; Husmann, Kyle; Moratto, Zachary; Nefian, Ara V.

    2011-01-01

    A stereo correlation method in the object domain is proposed to generate accurate and dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce high-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. Given the camera parameters of an image pair from bundle adjustment in ASP, a correlation window is defined on the terrain at each post, with a predefined surface normal, rather than in the image domain. The squared error between back-projected images on the local terrain is minimized with respect to the post elevation. This single-dimensional optimization is solved efficiently and improves the accuracy of the elevation estimate.
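
    The single-dimensional minimization over post elevation can be handled by any bracketing method; the abstract does not spell out its solver, so the golden-section routine below is only an assumed stand-in:

```python
import math

def golden_section_min(f, lo, hi, tol=1e-6):
    """Minimize a unimodal single-variable function f on [lo, hi], e.g.
    the squared back-projection error as a function of post elevation."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):        # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                  # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0
```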

  2. Research on algorithm about content-based segmentation and spatial transformation for stereo panorama

    NASA Astrophysics Data System (ADS)

    Li, Zili; Xia, Xuezhi; Zhu, Guangxi; Zhu, Yaoting

    2004-03-01

    The principle of constructing a G&IBMR virtual scene based on a stereo panorama with binocular stereovision is put forward. Closed cubic B-splines are used for content-based segmentation of the virtual objects of the stereo panorama, and all objects in the current viewing frustum are ordered in a current object linked list (COLL) by their depth information. A formula is derived to calculate the depth of a point in the virtual scene from its parallax, based on a parallel binocular vision model. A bilinear interpolation algorithm is presented to deform the segmentation template and splice images between three key positions. We also use the positional and directional transformation of the binocular virtual camera bound to the user avatar to drive the transformation of the stereo panorama, so as to achieve real-time consistency of perspective relationships and image masking. The experimental results show that the algorithm in this paper is effective and feasible.
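
    The bilinear interpolation used to deform the segmentation template samples an image at fractional coordinates; a minimal sketch of that building block follows (an assumed formulation, since the paper's exact scheme is not reproduced here):

```python
import numpy as np

def bilinear(img, x, y):
    """Sample `img` at fractional coordinates (x, y) by blending the four
    surrounding pixels, clamping at the image border."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot
```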

  3. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. The human brain is found to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze by higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning into a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.

  4. Efficient view synthesis from uncalibrated stereo

    NASA Astrophysics Data System (ADS)

    Braspenning, Ralph; Op de Beeck, Marc

    2006-02-01

    For multiview auto-stereoscopic 3D displays, available stereo content needs to be converted to multiview content. In this paper we present a method to efficiently synthesize new views based on the two existing views from the stereo input. This method can be implemented in real-time and is also capable of handling uncalibrated stereo input. Good performance is shown compared to state-of-the-art disparity estimation algorithms and view rendering methods.

  5. The STEREO Science Center after Launch

    NASA Astrophysics Data System (ADS)

    Thompson, William T.

    2007-05-01

    The STEREO Science Center (SSC), at the NASA Goddard Space Flight Center, is the "one-stop shopping" location for STEREO data, observation plans, analysis software, and links to other mission resources. STEREO was launched on October 25, 2006, and the SSC is now providing most of the services that it was designed for. We report on the progress of the space weather beacon processing, data archiving, and the interaction with the Virtual Solar Observatory.

  6. Self-reported visual impairment and impact on vision-related activities in an elderly Nigerian population: report from the Ibadan Study of Ageing

    PubMed Central

    Bekibele, CO; Gureje, Oye

    2010-01-01

    Background Studies have shown an association between visual impairment and poor overall function. Studies from Africa and developing countries show a high prevalence of visual impairment. More information is needed on the community prevalence and impact of visual impairment among elderly Africans. Methods A multi-stage stratified sampling of households was implemented to select persons aged 65 years and over in the south-western and north-central parts of Nigeria. Impairments of distant and near vision were based on subjective self-reports obtained with the use of items derived from the World Health Organization multi-country World Health Survey questionnaire. Impairment was defined as reporting much difficulty on questions about distant and near vision. Disabilities in activities of daily living (ADL) and instrumental activities of daily living (IADL) were evaluated by interview, using standardized scales. Results A total of 2054 subjects, 957 (46.6%) males and 1097 (53.4%) females, responded to the questions on vision. 22% (n=453) of the respondents reported distant vision impairment, and 18% (n=377) reported near vision impairment (not mutually exclusive); 15% (n=312) reported impairment of both far and near vision. Impairment of distant vision increased progressively with age (P < 0.01). Persons with self-reported near vision impairment had an elevated risk of functional disability in several IADLs and ADLs compared with those without. Distant vision impairment was less associated with role limitations in both ADLs and IADLs. Conclusion The prevalence of self-reported distant visual impairment was high, but that of near visual impairment was lower than expected in this elderly African population. Impairment of near vision was found to carry a higher burden of functional disability than impairment of distant vision. PMID:18780258

  7. A Poet's Vision.

    ERIC Educational Resources Information Center

    Marshall, Suzanne; Newman, Dan

    1997-01-01

    Describes a series of activities to help middle school students develop an artist's vision and then convey that vision through poetry. Describes how lessons progress from looking at concrete objects to observations of settings and characters, gradually adding memory and imagination to direct observation, and finishing with revision. Notes that…

  8. Subjective evaluations of multiple three-dimensional displays by a stereo-deficient viewer: an interesting case study

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Ellis, Sharon A.; Harrington, Lawrence K.; Havig, Paul R.

    2014-06-01

    A study was conducted with sixteen observers evaluating four different three-dimensional (3D) displays for usability, quality, and physical comfort. One volumetric display and three different stereoscopic displays were tested. The observers completed several different types of questionnaires before, during and after each test session. All observers were tested for distance acuity, color vision, and stereoscopic acuity. One observer in particular appeared to have either degraded or absent binocular vision on the stereo acuity test. During the subjective portions of the data collection, this observer showed no obvious signs of depth perception problems and finished the study with no issues reported. Upon further post-hoc stereovision testing of this observer, we discovered that he essentially failed all tests requiring depth judgments of fine disparity and had at best only gross levels of stereoscopic vision (he failed all administered stereoacuity threshold tests, up to about 800 arc sec of disparity). When questioned, the observer was unaware of this stereo deficiency and reported having seen several stereoscopic 3D movies (and having enjoyed the 3D experiences). Interestingly, we had collected subjective reports about the quality of three-dimensional imagery across multiple stereoscopic displays from a person with deficient stereo vision. We discuss the participant's unique pattern of results and compare and contrast these results with those of the other, stereo-normal participants. The implications for subjective measurements of stereoscopic three-dimensional displays, and for subjective display measurement in general, are considered.

  9. Vision problems

    MedlinePlus

    ... shade or curtain hanging across part of your visual field. Optic neuritis : inflammation of the optic nerve ... Impaired vision; Blurred vision Images Crossed eyes Eye Visual acuity test Slit-lamp exam Visual field test ...

  10. Stereo disparity facilitates view generalization during shape recognition for solid multipart objects.

    PubMed

    Cristino, Filipe; Davitt, Lina; Hayward, William G; Leek, E Charles

    2015-01-01

    Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.

  11. VPython: Python plus Animations in Stereo 3D

    NASA Astrophysics Data System (ADS)

    Sherwood, Bruce

    2004-03-01

    Python is a modern object-oriented programming language. VPython (http://vpython.org) is a combination of Python (http://python.org), the Numeric module from LLNL (http://www.pfdubois.com/numpy), and the Visual module created by David Scherer, all of which have been under continuous development as open source projects. VPython makes it easy to write programs that generate real-time, navigable 3D animations. The Visual module includes a set of 3D objects (sphere, cylinder, arrow, etc.), tools for creating other shapes, and support for vector algebra. The 3D renderer runs in a parallel thread, and animations are produced as a side effect of computations, freeing the programmer to concentrate on the physics. Applications include educational and research visualization. In the Fall of 2003 Hugh Fisher at the Australian National University, John Zelle at Wartburg College, and I contributed to a new stereo capability of VPython. By adding a single statement to an existing VPython program, animations can be viewed in true stereo 3D. One can choose several modes: active shutter glasses, passive polarized glasses, or colored glasses (e.g. red-cyan). The talk will demonstrate the new stereo capability and discuss the pros and cons of various schemes for display of stereo 3D for a large audience. Supported in part by NSF grant DUE-0237132.

  12. Solar Terrestrial Relations Observatory (STEREO)

    NASA Technical Reports Server (NTRS)

    Davila, Joseph M.; SaintCyr, O. C.

    2003-01-01

    The solar magnetic field is constantly generated beneath the surface of the Sun by the solar dynamo. To balance this flux generation, there is constant dissipation of magnetic flux at and above the solar surface. The largest phenomenon associated with this dissipation is the Coronal Mass Ejection (CME). The Solar and Heliospheric Observatory (SOHO) has provided remarkable views of the corona and CMEs, and served to highlight how these large interplanetary disturbances can have terrestrial consequences. STEREO is the next logical step in studying the physics of CME origin, propagation, and terrestrial effects. Two spacecraft with identical instrument complements will be launched on a single launch vehicle in November 2007. One spacecraft will drift ahead of the Earth and the second behind it, at a separation rate of 22 degrees per year. Observation from these two vantage points will for the first time allow the observation of the three-dimensional structure of CMEs and of the coronal structures where they originate. Each STEREO spacecraft carries a complement of 10 instruments, which include (for the first time) an extensive set of both remote sensing and in-situ instruments. The remote sensing suite is capable of imaging CMEs from the solar surface out to beyond Earth's orbit (1 AU), and the in-situ instruments are able to measure distribution functions for electrons, protons, and ions over a broad energy range, from the normal thermal solar wind plasma to the most energetic solar particles. It is anticipated that these studies will ultimately lead to an increased understanding of the CME process and provide unique observations of the flow of energy from the corona to the near-Earth environment. An international research program, the International Heliophysical Year (IHY), will provide a framework for interpreting STEREO data in the context of global processes in the Sun-Earth system.

  13. Vision Based Localization in Urban Environments

    NASA Technical Reports Server (NTRS)

    McHenry, Michael; Cheng, Yang; Matthies, Larry

    2005-01-01

    As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system consist of: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location, and 3) the video streams produced by each camera. At each step during the traverse the system: captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step, and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we have derived the a priori map manually using non-orthorectified overhead imagery, although this process could be automated. The software system consists of two primary components. The first is the vision system, which uses binocular stereo ranging together with a set of heuristics to identify features likely to be part of building exteriors and to compute an estimate of the robot's motion since the previous step. The resulting visual features and the associated range measurements are then fed to the second primary software component, a particle-filter based localization system. This system uses the map and the most recent results from the vision system to update the estimate of the robot's location. This report summarizes the design of both the hardware and software and includes the results of applying the system to the global localization of a robot over an approximately half-kilometer traverse across JPL's Pasadena campus.
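
    One predict/weight/resample cycle of a particle-filter localizer of the kind described can be sketched generically as follows (the motion and measurement models here are placeholders; the actual system scores particles against the building map using stereo-derived features):

```python
import numpy as np

def particle_filter_step(particles, weights, motion, measure_lik, rng):
    """One cycle: propagate particles through the motion model, reweight
    by measurement likelihood, then resample back to equal weights."""
    particles = motion(particles, rng)           # predict
    weights = weights * measure_lik(particles)   # weight
    weights = weights / weights.sum()
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)       # resample
    return particles[idx], np.full(n, 1.0 / n)
```

Because the particle set can be multimodal, it naturally maintains several candidate locations at once, which is the representational advantage the report describes.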

  14. Small Boats in an Ocean of School Activities: Towards a European Vision on Education

    ERIC Educational Resources Information Center

    Villalba, Ernesto

    2008-01-01

    The paper discusses the concept of schools as "multi-purpose learning centres", proposed by the European Commission in the year 2000 as part of the Lisbon Strategy to improve competitiveness. This concept was arguably the "European vision" for school education and was meant to drive the modernization of school education.…

  15. The Stereo-Electroencephalography Methodology.

    PubMed

    Alomar, Soha; Jones, Jaes; Maldonado, Andres; Gonzalez-Martinez, Jorge

    2016-01-01

    The stereo-electroencephalography (SEEG) methodology and technique was developed almost 60 years ago in Europe. The efficacy and safety of SEEG has been proven. The main advantage is the possibility to study the epileptogenic neuronal network in its dynamic and 3-dimensional aspect, with optimal time and space correlation, with the clinical semiology of the patient's seizures. The main clinical challenge for the near future remains in the further refinement of specific selection criteria for the different methods of invasive monitoring, with the ultimate goal of comparing and validating the results (long-term seizure-free outcome) obtained from different methods of invasive monitoring.

  16. Stereo imaging based particle velocimeter

    NASA Technical Reports Server (NTRS)

    Batur, Celal

    1994-01-01

    Three-dimensional coordinates of an object are determined from its two-dimensional images for a class of points on the object. The two-dimensional images are first filtered by a Laplacian of Gaussian (LOG) filter in order to detect a set of feature points on the object. The feature points in the left and right images are then matched using a Hopfield-type optimization network. The performance index of the Hopfield network contains both local and global properties of the images. Parallel computing in stereo matching can be achieved by the proposed methodology.
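
    The Laplacian-of-Gaussian (LOG) filtering stage that detects the feature points can be sketched in plain NumPy as follows (kernel size and sigma are illustrative assumptions; the kernel is symmetric, so cross-correlation and convolution coincide):

```python
import numpy as np

def log_kernel(sigma, size):
    """Laplacian-of-Gaussian kernel; strong (negative) response at the
    centre of blob-like features."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()  # zero DC: flat image regions give zero response

def filter2d(img, k):
    """Plain 'valid' cross-correlation, kept dependency-free."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + kh, x:x + kw] * k).sum()
    return out
```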

  17. Stereo Pair, Patagonia, Argentina

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This view of northern Patagonia, at Los Menucos, Argentina shows remnants of relatively young volcanoes built upon an eroded plain of much older and contorted volcanic, granitic, and sedimentary rocks. The large purple, brown, and green 'butterfly' pattern is a single volcano that has been deeply eroded. Large holes on the volcano's flanks indicate that they may have collapsed soon after eruption, as fluid molten rock drained out from under its cooled and solidified outer shell. At the upper left, a more recent eruption occurred and produced a small volcanic cone and a long stream of lava, which flowed down a gully. At the top of the image, volcanic intrusions permeated the older rocks resulting in a chain of small dark volcanic peaks. At the top center of the image, two halves of a tan ellipse pattern are offset from each other. This feature is an old igneous intrusion that has been split by a right-lateral fault. The apparent offset is about 6.6 kilometers (4 miles). Color, tonal, and topographic discontinuities reveal the fault trace as it extends across the image to the lower left. However, young unbroken basalt flows show that the fault has not been active recently.

This cross-eyed stereoscopic image pair was generated using topographic data from the Shuttle Radar Topography Mission, combined with an enhanced Landsat 7 satellite color image. The topography data are used to create two differing perspectives of a single image, one perspective for each eye. In doing so, each point in the image is shifted slightly, depending on its elevation. When stereoscopically merged, the result is a vertically exaggerated view of the Earth's surface in its full three dimensions.

Landsat satellites have provided visible light and infrared images of the Earth continuously since 1972. SRTM topographic data match the 30-meter (99-foot) spatial resolution of most Landsat images and provide a valuable complement for studying the historic and growing Landsat data archive.

  18. Is binocular vision worth considering in people with low vision?

    PubMed

    Uzdrowska, Marta; Crossland, Michael; Broniarczyk-Loba, Anna

    2014-01-01

In someone with good vision, binocular vision provides benefits that could not be obtained by monocular viewing alone. People with visual impairment often have abnormal binocularity. However, they often use both eyes simultaneously in their everyday activities. Much remains to be learned about binocular vision in people with visual impairment. As the binocular status of people with low vision strongly influences their treatment and rehabilitation, it should be evaluated and considered before diagnosis and further recommendations.

  19. Optics system design applying a micro-prism array of a single lens stereo image pair.

    PubMed

    Chen, Chien-Yue; Yang, Ting-Ting; Sun, Wen-Shing

    2008-09-29

In this study we apply a micro-prism array technique to enable a single-lens CCD to capture a stereo image for the simulation of double-lens vision. A micro-prism array plate serves as the basis for the design, which also makes the overall system lighter and more portable while lowering mass-production costs. Most importantly, the design is compatible with both general-purpose and video cameras.

  20. New Views of the Sun: STEREO and Hinode

    NASA Astrophysics Data System (ADS)

    Luhmann, Janet G.; Tsuneta, Saku; Bougeret, J.-L.; Galvin, Antoinette; Howard, R. A.; Kaiser, Michael; Thompson, W. T.

The twin-spacecraft STEREO mission has now been in orbit for 1.5 years. Although the main scientific objective of STEREO is the origin and evolution of Coronal Mass Ejections (CMEs) and their heliospheric consequences, the slow decline of the previous solar cycle has provided an extraordinary opportunity for close scrutiny of the quiet corona and solar wind, including suprathermal and energetic particles. However, STEREO has also captured a few late cycle CMEs that have given us a taste of the observations and analyses to come. Images from the SECCHI investigation afforded by STEREO's separated perspectives and the heliospheric imager have already allowed us to visibly witness the origins of the slow solar wind and the Sun-to-1 AU transit of ICMEs. The SWAVES investigation has monitored the transit of interplanetary shocks in 3D while the PLASTIC and IMPACT in-situ measurements provide the 'ground truth' of what is remotely sensed. New prospects for space weather forecasting have been demonstrated with the STEREO Behind spacecraft, a successful proof-of-concept test for future space weather mission designs. The data sets for the STEREO investigations are openly available through a STEREO Science Center web interface that also provides supporting information for potential users from all communities. Comet observers and astronomers, interplanetary dust researchers and planetary scientists have already made use of this resource. The potential for detailed Sun-to-Earth CME/ICME interpretations with sophisticated modeling efforts is an upcoming STEREO-Hinode partnering activity whose success we can only anticipate at this time. Since its launch in September 2006, Hinode has sent back solar images of unprecedented clarity every day. The primary purpose of this mission is a systems approach to understanding the generation, transport and ultimate dissipation of solar magnetic fields with a well-coordinated set of advanced telescopes. Hinode is equipped with three

  1. Practical intraoperative stereo camera calibration.

    PubMed

    Pratt, Philip; Bergeles, Christos; Darzi, Ara; Yang, Guang-Zhong

    2014-01-01

    Many of the currently available stereo endoscopes employed during minimally invasive surgical procedures have shallow depths of field. Consequently, focus settings are adjusted from time to time in order to achieve the best view of the operative workspace. Invalidating any prior calibration procedure, this presents a significant problem for image guidance applications as they typically rely on the calibrated camera parameters for a variety of geometric tasks, including triangulation, registration and scene reconstruction. While recalibration can be performed intraoperatively, this invariably results in a major disruption to workflow, and can be seen to represent a genuine barrier to the widespread adoption of image guidance technologies. The novel solution described herein constructs a model of the stereo endoscope across the continuum of focus settings, thereby reducing the number of degrees of freedom to one, such that a single view of reference geometry will determine the calibration uniquely. No special hardware or access to proprietary interfaces is required, and the method is ready for evaluation during human cases. A thorough quantitative analysis indicates that the resulting intrinsic and extrinsic parameters lead to calibrations as accurate as those derived from multiple pattern views.
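For context on why a stale calibration matters: in a rectified stereo rig, depth is triangulated as Z = f·B/d, so any drift in the focal length f (for instance, from refocusing the endoscope) corrupts every depth estimate. A minimal sketch of that relationship, with hypothetical numbers:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Rectified-stereo triangulation: Z = f * B / d.
    If the focal length changes (e.g. after refocusing) but the old
    calibrated value is still used, every returned depth is wrong."""
    return focal_px * baseline_mm / disparity_px

# Hypothetical values: f = 800 px, baseline = 5 mm, disparity = 20 px.
z = depth_from_disparity(20.0, 800.0, 5.0)  # -> 200.0 (mm)
```

The paper's contribution is to reduce this recalibration burden by modeling all focus settings with a single degree of freedom.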

  2. Low Vision Aids and Low Vision Rehabilitation

    MedlinePlus

    ... Vision Resources Low Vision Rehabilitation and Low Vision Aids Written by: David Turbert Edited by: Robert H ... covers most services, but not devices.) Low vision aids There are many low vision aids and devices ...

  3. Three-dimensional stereo by photometric ratios

    SciTech Connect

    Wolff, L.B.; Angelopoulou, E.

    1994-11-01

    We present a methodology for corresponding a dense set of points on an object surface from photometric values for three-dimensional stereo computation of depth. The methodology utilizes multiple stereo pairs of images, with each stereo pair being taken of the identical scene but under different illumination. With just two stereo pairs of images taken under two different illumination conditions, a stereo pair of ratio images can be produced, one for the ratio of left-hand images and one for the ratio of right-hand images. We demonstrate how the photometric ratios composing these images can be used for accurate correspondence of object points. Object points having the same photometric ratio with respect to two different illumination conditions constitute a well-defined equivalence class of physical constraints defined by local surface orientation relative to illumination conditions. We formally show that for diffuse reflection the photometric ratio is invariant to varying camera characteristics, surface albedo, and viewpoint and that therefore the same photometric ratio in both images of a stereo pair implies the same equivalence class of physical constraints. The correspondence of photometric ratios along epipolar lines in a stereo pair of images under different illumination conditions is a correspondence of equivalent physical constraints, and the determination of depth from stereo can be performed. Whereas illumination planning is required, our photometric-based stereo methodology does not require knowledge of illumination conditions in the actual computation of three-dimensional depth and is applicable to perspective views. This technique extends the stereo determination of three-dimensional depth to smooth featureless surfaces without the use of precisely calibrated lighting. We demonstrate experimental depth maps from a dense set of points on smooth objects of known ground-truth shape, determined to within 1% depth accuracy.
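The albedo-invariance property at the heart of this method can be verified numerically. The sketch below is illustrative only (light directions and albedo values are made up); it shows that for Lambertian reflection the ratio of intensities under two lights depends on surface orientation but not on albedo:

```python
import numpy as np

def lambertian(albedo, normal, light):
    """Diffuse (Lambertian) image intensity: albedo * max(0, n . l)."""
    return albedo * np.maximum(0.0, normal @ light)

normal = np.array([0.0, 0.0, 1.0])
l1 = np.array([0.3, 0.1, 0.9]); l1 /= np.linalg.norm(l1)
l2 = np.array([-0.2, 0.4, 0.8]); l2 /= np.linalg.norm(l2)

# Two surface points with the same orientation but different albedo
# yield the same photometric ratio: albedo cancels in the division,
# leaving a quantity determined by orientation and illumination only.
r_dark  = lambertian(0.2, normal, l1) / lambertian(0.2, normal, l2)
r_light = lambertian(0.9, normal, l1) / lambertian(0.9, normal, l2)
```

This cancellation is why equal photometric ratios along epipolar lines identify corresponding points without knowing the albedo or camera response.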

  4. Operational Based Vision Assessment Research: Depth Perception

    DTIC Science & Technology

    2014-11-01

quantify depth perception, including the Armed Forces Vision Tester (AFVT) stereopsis test, AO Vectograph, Verhoeff, and Howard-Dolman (HD). Most of these...tests are tests of stereopsis, such as the AFVT and AO Vectograph. Others evaluate depth perception with stereo as a contributor to performance, such...as the HD. The USAF and USN maintain depth perception standards for pilots and other aircrew with scanner duty (e.g., aerial refueling operators

  5. Automatic harvesting of asparagus: an application of robot vision to agriculture

    NASA Astrophysics Data System (ADS)

    Grattoni, Paolo; Cumani, Aldo; Guiducci, Antonio; Pettiti, Giuseppe

    1994-02-01

This work presents a system for the automatic selective harvesting of asparagus in the open field, being developed in the framework of the Italian National Project on Robotics. It is composed of a mobile robot, equipped with a suitable manipulator and driven by a stereo-vision module. In this paper we discuss in detail the problems related to the vision module.

  6. 3D vision upgrade kit for TALON robot

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

    In this paper, we report on the development of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  7. Vision and Motion Pictures.

    ERIC Educational Resources Information Center

    Grambo, Gregory

    1998-01-01

    Presents activities on persistence of vision that involve students in a hands-on approach to the study of early methods of creating motion pictures. Students construct flip books, a Zoetrope, and an early movie machine. (DDR)

  8. Low Vision FAQs

    MedlinePlus

    ... USAJobs Home > Low Vision > Low Vision FAQs Healthy Vision Diabetes Diabetes Home How Much Do You Know? ... los Ojos Cómo hablarle a su oculista Low Vision FAQs What is low vision? Low vision is ...

  9. A parallel stereo reconstruction algorithm with applications in entomology (APSRA)

    NASA Astrophysics Data System (ADS)

    Bhasin, Rajesh; Jang, Won Jun; Hart, John C.

    2012-03-01

We propose a fast parallel algorithm for the reconstruction of 3-dimensional point clouds of insects from binocular stereo image pairs using a hierarchical approach for disparity estimation. Entomologists study various features of insects to classify them, build their distribution maps, and discover genetic links between specimens, among various other essential tasks. This information is important to the pesticide and pharmaceutical industries, among others. Given the large collections of insects entomologists analyze, it becomes difficult to physically handle the entire collection and share the data with researchers across the world. With the method presented in our work, entomologists can create an image database for their collections and use the 3D models for studying the shape and structure of the insects, thus making the collections easier to maintain and share. Initial feedback shows that the reconstructed 3D models preserve the shape and size of the specimens. We further optimize our results to incorporate multiview stereo, which produces a better overall structure of the insects. Our main contribution is applying stereoscopic vision techniques to entomology to solve the problems faced by entomologists.
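A minimal single-pixel version of the disparity estimation such systems build on (plain winner-take-all SAD block matching, not the authors' hierarchical parallel algorithm; patch size and search range are assumptions):

```python
import numpy as np

def sad_disparity(left, right, y, x, patch=3, max_d=8):
    """Winner-take-all disparity for one pixel: slide a patch from the
    left image along the same row of the right image and pick the shift
    with the smallest sum of absolute differences (SAD)."""
    h = patch // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1]
    best_d, best_cost = 0, np.inf
    for d in range(0, min(max_d, x - h) + 1):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1]
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic rectified pair: the right image is the left image shifted
# 3 px, so the true disparity everywhere (away from the border) is 3.
rng = np.random.default_rng(0)
left = rng.random((16, 32))
right = np.roll(left, -3, axis=1)
d = sad_disparity(left, right, y=8, x=16)
```

Hierarchical (coarse-to-fine) schemes like the one in the paper run this kind of search on a downsampled image first, then refine, which shrinks the search range at full resolution.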

  10. Toward a pyramidal neural network system for stereo fusion

    NASA Astrophysics Data System (ADS)

    Lepage, Richard; Poussart, Denis

    1992-03-01

A goal of computer vision is the construction of scene descriptions based on information extracted from one or more 2-D images. Stereo is one of the strategies used to recover 3-D information from two images. Intensity edges in the images correspond mostly to characteristic features in the 3-D scene, and the stereo module attempts to match corresponding features in the two images. Edge detection makes explicit important information about the two-dimensional image but is scale-dependent: edges are visible only over a range of scales. One needs multiple-scale analysis of the input image in order to have a complete description of the edges. We propose a compact pyramidal architecture for image representation at multiple spatial scales. A simple Processing Element (PE) is allocated at each pixel location at each level of the pyramid. A dense network of weighted links between each PE and the PEs underneath is programmed to generate the levels of the pyramid. Lateral weighted links within a level compute edge localization and intensity gradient. Feedback between successive levels is used to reinforce and refine the position of true edges. A fusion channel matches the two edge channels to output a disparity map of the observed scene.
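The multi-scale image representation can be illustrated with a simple pyramid built by repeated 2x2 block averaging (a crude stand-in for the weighted inter-level links described in the abstract):

```python
import numpy as np

def downsample(img):
    """One pyramid level: 2x2 block average followed by decimation."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_pyramid(img, levels):
    """Stack of progressively coarser versions of the input image;
    edges that survive across levels are the 'true' edges to refine."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

pyr = build_pyramid(np.arange(64.0).reshape(8, 8), 3)  # shapes 8x8, 4x4, 2x2
```

Block averaging preserves the image mean at every level, so coarse levels summarize rather than distort the intensity content.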

  11. Perspective photometric stereo beyond Lambert

    NASA Astrophysics Data System (ADS)

    Khanian, Maryam; Sharifi Boroujerdi, Ali; Breuß, Michael

    2015-04-01

Photometric stereo is a technique for estimating the 3-D depth of a surface using multiple images taken under different illuminations from the same viewing angle. Most existing models make use of Lambertian reflection and an orthographic camera as underlying assumptions. However, real-world materials often exhibit non-Lambertian effects such as specular highlights, and for many applications it is of interest to consider objects close to the camera. In our work, we aim at addressing these issues. Together with a perspective camera we employ a non-Lambertian reflectance model, namely the Blinn-Phong model, which is capable of dealing with specular reflection. Focusing on the effects of specular highlights, we perform a detailed study of one-dimensional test cases showing important aspects of our method.
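For reference, a scalar sketch of the Blinn-Phong reflectance model named above, with assumed diffuse/specular coefficients and shininess (the paper's parameterization may differ):

```python
import numpy as np

def blinn_phong(normal, light, view, kd=0.7, ks=0.3, shininess=20.0):
    """Blinn-Phong intensity: a Lambertian diffuse term plus a specular
    term driven by the half-vector between light and view directions."""
    n = normal / np.linalg.norm(normal)
    l = light / np.linalg.norm(light)
    v = view / np.linalg.norm(view)
    h = (l + v) / np.linalg.norm(l + v)
    diffuse = kd * max(0.0, float(n @ l))
    specular = ks * max(0.0, float(n @ h)) ** shininess
    return diffuse + specular

# The specular highlight is strongest where the half-vector aligns with
# the surface normal, and falls off quickly at grazing light angles.
i_aligned = blinn_phong(np.array([0, 0, 1.0]), np.array([0, 0, 1.0]), np.array([0, 0, 1.0]))
i_grazing = blinn_phong(np.array([0, 0, 1.0]), np.array([1, 0, 0.2]), np.array([0, 0, 1.0]))
```

It is exactly this sharp, localized specular term that purely Lambertian photometric stereo cannot explain, motivating the model swap.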

  12. Perception of difficulties with vision-related activities of daily living among patients undergoing unilateral posterior capsulotomy

    PubMed Central

    de Senne, Firmani M. B.; Temporini, Edméa R.; Arieta, Carlos E. L.; Pacheco, Karla D.

    2010-01-01

    OBJECTIVES To assess the influence of Nd:YAG (neodymium: yttrium-aluminum- garnet) laser unilateral posterior capsulotomy on visual acuity and patients’ perception of difficulties with vision-related activities of daily life. METHODS We conducted an interventional survey that included 48 patients between 40 and 80 years of age with uni- or bilateral pseudophakia, posterior capsule opacification, and visual acuity ≤0.30 (logMAR) in one eye who were seen at a Brazilian university hospital. All patients underwent posterior capsulotomy using an Nd:YAG laser. Before and after the intervention, patients were asked to complete a questionnaire that was developed in an exploratory study. RESULTS Before posterior capsulotomy, the median visual acuity (logMAR) of the included patients was 0.52 (range 0.30–1.60). After posterior capsulotomy, the median visual acuity of the included patients improved to 0.10 (range 0.0–0.52). According to the subjects’ perceptions, their ability to perform most of their daily life activities improved after the intervention (p<0.05). CONCLUSIONS After patients underwent posterior capsulotomy with an Nd:YAG laser, a significant improvement in the visual acuity of the treated eye was observed. Additionally, subjects felt that they experienced less difficulty performing most of their vision-dependent activities of daily living. PMID:20535363

  13. HRSC: High resolution stereo camera

    USGS Publications Warehouse

    Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W.; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.

    2009-01-01

The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and high-level data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.

  14. Opportunity at 'Cook Islands' (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11854 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11854

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,825th Martian day, or sol, of Opportunity's surface mission (March 12, 2009). North is at the top.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

The rover had driven half a meter (1.5 feet) earlier on Sol 1825 to fine-tune its location for placing its robotic arm onto an exposed patch of outcrop including a target area informally called 'Cook Islands.' On the preceding sol, Opportunity turned around to drive frontwards and then drove 4.5 meters (15 feet) toward this outcrop. The tracks from the Sol 1824 drive are visible near the center of this view at about the 11 o'clock position. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Opportunity had previously been driving backward as a strategy to redistribute lubrication in a wheel drawing more electrical current than usual.

    The outcrop exposure that includes 'Cook Islands' is visible just below the center of the image.

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  15. Phoenix Lander on Mars (Stereo)

    NASA Technical Reports Server (NTRS)

    2007-01-01

    NASA's Phoenix Mars Lander monitors the atmosphere overhead and reaches out to the soil below in this stereo illustration of the spacecraft fully deployed on the surface of Mars. The image appears three-dimensional when viewed through red-green stereo glasses.

    Phoenix has been assembled and tested for launch in August 2007 from Cape Canaveral Air Force Station, Fla., and for landing in May or June 2008 on an arctic plain of far-northern Mars. The mission responds to evidence returned from NASA's Mars Odyssey orbiter in 2002 indicating that most high-latitude areas on Mars have frozen water mixed with soil within arm's reach of the surface.

Phoenix will use a robotic arm to dig down to the expected icy layer. It will analyze scooped-up samples of the soil and ice for factors that will help scientists evaluate whether the subsurface environment at the site ever was, or may still be, a favorable habitat for microbial life. The instruments on Phoenix will also gather information to advance understanding about the history of the water in the icy layer. A weather station on the lander will conduct the first study of Martian arctic weather from ground level.

    The vertical green line in this illustration shows how the weather station on Phoenix will use a laser beam from a lidar instrument to monitor dust and clouds in the atmosphere. The dark 'wings' to either side of the lander's main body are solar panels for providing electric power.

The Phoenix mission is led by Principal Investigator Peter H. Smith of the University of Arizona, Tucson, with project management at NASA's Jet Propulsion Laboratory and development partnership with Lockheed Martin Space Systems, Denver. International contributions for Phoenix are provided by the Canadian Space Agency, the University of Neuchatel (Switzerland), the University of Copenhagen (Denmark), the Max Planck Institute (Germany) and the Finnish Meteorological Institute. JPL is a division of the California

  16. Recovery of stereo acuity in adults with amblyopia

    PubMed Central

    Astle, Andrew T; McGraw, Paul V; Webb, Ben S

    2011-01-01

    Disruption of visual input to one eye during early development leads to marked functional impairments of vision, commonly referred to as amblyopia. A major consequence of amblyopia is the inability to encode binocular disparity information leading to impaired depth perception or stereo acuity. If amblyopia is treated early in life (before 4 years of age), then recovery of normal stereoscopic function is possible. Treatment is rarely undertaken later in life (adulthood) because declining levels of neural plasticity are thought to limit the effectiveness of standard treatments. Here, the authors show that a learning-based therapy, designed to exploit experience-dependent plastic mechanisms, can be used to recover stereoscopic visual function in adults with amblyopia. These cases challenge the long-held dogma that the critical period for visual development and the window for treating amblyopia are one and the same. PMID:22707543

  17. Dynamically re-configurable CMOS imagers for an active vision system

    NASA Technical Reports Server (NTRS)

    Yang, Guang (Inventor); Pain, Bedabrata (Inventor)

    2005-01-01

    A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.

  18. Three-dimensional model acquisition using rotational stereo and image focus analysis

    NASA Astrophysics Data System (ADS)

    Lin, Huei-Yung; Subbarao, Murali

    2001-02-01

We present a digital vision system for acquiring the complete 3D model of an object from multiple views. The system uses image focus analysis to obtain a rough 3D shape of each view of an object and also the corresponding focused image or texture map. The rough 3D shape is used in a rotational stereo algorithm to obtain a more accurate measurement of 3D shape. The rotational stereo involves rotating the object by a small angle to obtain stereo images. It offers some important advantages compared to conventional stereo. A single camera is used instead of two, the stereo matching is easier as the field of view remains the same for the camera (but the object is rotated), and camera calibration is easier since a single stationary camera is used. The 3D shape and the corresponding texture map are measured for 4 views of the object at 90-degree angular intervals. These partial shapes and texture maps are integrated to obtain a complete 360-degree model of the object. The theory and algorithms underlying rotational stereo and the integration of partial 3D models are presented. The system can acquire the 3D model (which includes the 3D shape and the corresponding image texture) of a simple object within a 300 mm x 300 mm x 300 mm volume placed about 600 mm from the camera. The complete model is displayed using a 3D graphics rendering software (Apple's QuickDraw 3D Viewer). Both computational algorithms and experimental results on several objects are presented.
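The image focus analysis step relies on a focus measure that peaks for sharp image regions. A common choice (not necessarily the one used in this system) sums absolute second derivatives, since in-focus regions have stronger high-frequency content:

```python
import numpy as np

def focus_measure(img):
    """Laplacian-style focus measure: sharper (in-focus) regions have
    larger second derivatives, so the sum grows with image sharpness."""
    lap = (np.abs(np.diff(img, n=2, axis=0))[:, 1:-1] +
           np.abs(np.diff(img, n=2, axis=1))[1:-1, :])
    return lap.sum()

# A crisp step edge scores high; a smooth linear ramp (second derivative
# ~0 everywhere) scores near zero, mimicking a defocused view.
x = np.linspace(0, 1, 64)
sharp = (np.add.outer(x, x) > 1.0).astype(float)
blurred = np.add.outer(x, x) / 2.0
```

Evaluating such a measure per pixel across an image stack taken at different focus settings yields the rough depth map that seeds the rotational stereo stage.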

  19. Aug 1 Solar Event From STEREO Ahead

    NASA Video Gallery

    Aug 1 CME - The Sun from the SECCHI instruments on the STEREO spacecraft leading the Earth in its orbit around the Sun. This video was taken in the He II 304A channels, which shows the extreme ultr...

  20. Aug 1 Solar Event From STEREO Behind

    NASA Video Gallery

    Aug 1 CME - The Sun from the SECCHI instruments on the STEREO spacecraft trailing behind the Earth in its orbit around the Sun. This video was taken in the He II 304A channels, which shows the extr...

  1. STEREO Witnesses Aug 1, 2010 Solar Event

    NASA Video Gallery

    These image sequences were taken by the twin STEREO spacecraft looking at the Sun from opposite sides. The bottom pair shows the Sun and its immediate surroundings. The top row shows events from th...

  2. Artificial stereo presentation of meteorological data fields

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Desjardins, M.; Negri, A. J.

    1981-01-01

    The innate capability to perceive three-dimensional stereo imagery has been exploited to present multidimensional meteorological data fields. Variations on an artificial stereo technique first discussed by Pichel et al. (1973) are used to display single and multispectral images in a vivid and easily assimilated manner. Examples of visible/infrared artificial stereo are given for Hurricane Allen and for severe thunderstorms on 10 April 1979. Three-dimensional output from a mesoscale model also is presented. The images may be viewed through the glasses inserted in the February 1981 issue of the Bulletin of the American Meteorological Society, with the red lens over the right eye. The images have been produced on the interactive Atmospheric and Oceanographic Information Processing System (AOIPS) at Goddard Space Flight Center. Stereo presentation is an important aid in understanding meteorological phenomena for operational weather forecasting, research case studies, and model simulations.
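The parallax-shifting idea described above (shifting each pixel horizontally in proportion to its elevation to synthesize two perspectives) can be sketched as follows; the scale factor, painting order, and handling of holes are simplifications:

```python
import numpy as np

def shift_by_elevation(image, elevation, scale, eye):
    """Build one view of an artificial stereo pair: each pixel is moved
    horizontally in proportion to its elevation (eye = +1 or -1 selects
    the left or right perspective). Later writes win on collisions."""
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            dx = int(round(eye * scale * elevation[y, x]))
            nx = min(max(x + dx, 0), w - 1)
            out[y, nx] = image[y, x]
    return out

img = np.arange(25.0).reshape(5, 5)
elev = np.zeros((5, 5)); elev[2, 2] = 2.0   # a single raised point
left  = shift_by_elevation(img, elev, 1.0, +1)
right = shift_by_elevation(img, elev, 1.0, -1)
```

Viewing the two shifted images with red/blue glasses (or cross-eyed) restores the disparity as apparent height, which is how the meteorological fields are perceived in relief.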

  3. Solar Coronal Cells as Seen by STEREO

    NASA Video Gallery

    The changes of a coronal cell region as solar rotation carries it across the solar disk as seen with NASA's STEREO-B spacecraft. The camera is fixed on the region (panning with it) and shows the pl...

  4. STEREO as a "Planetary Hazards" Mission

    NASA Technical Reports Server (NTRS)

    Guhathakurta, M.; Thompson, B. J.

    2014-01-01

    NASA's twin STEREO probes, launched in 2006, have advanced the art and science of space weather forecasting more than any other spacecraft or solar observatory. By surrounding the Sun, they provide previously-impossible early warnings of threats approaching Earth as they develop on the solar far side. They have also revealed the 3D shape and inner structure of CMEs-massive solar storms that can trigger geomagnetic storms when they collide with Earth. This improves the ability of forecasters to anticipate the timing and severity of such events. Moreover, the unique capability of STEREO to track CMEs in three dimensions allows forecasters to make predictions for other planets, giving rise to the possibility of interplanetary space weather forecasting too. STEREO is one of those rare missions for which "planetary hazards" refers to more than one world. The STEREO probes also hold promise for the study of comets and potentially hazardous asteroids.

  5. STEREO Observations of Solar Energetic Particles

    NASA Technical Reports Server (NTRS)

    vonRosenvinge, Tycho; Christian, Eric; Cohen, Christina; Leske, Richard; Mewaldt, Richard; Stone, Edward; Wiedenbeck, Mark

    2011-01-01

    We report on observations of Solar Energetic Particle (SEP) events as observed by instruments on the STEREO Ahead and Behind spacecraft and on the ACE spacecraft. We will show observations of an electron event observed by the STEREO Ahead spacecraft on June 12, 2010 located at W74 essentially simultaneously with electrons seen at STEREO Behind at E70. Some similar events observed by Helios were ascribed to fast electron propagation in longitude close to the sun. We will look for independent verification of this possibility. We will also show observations of what appears to be a single proton event with very similar time-history profiles at both of the STEREO spacecraft at a similar wide separation. This is unexpected. We will attempt to understand all of these events in terms of corresponding CME and radio burst observations.

  6. Real-time vision systems

    SciTech Connect

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee

    1994-11-15

    Many industrial and defense applications require an ability to make instantaneous decisions based on sensor input of a time-varying process. Such systems are referred to as "real-time systems" because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  7. Parallel vision algorithms. Annual technical report No. 1, 1 October 1986-30 September 1987

    SciTech Connect

    Ibrahim, H.A.; Kender, J.R.; Brown, L.G.

    1987-10-01

    The objective of this project is to develop and implement, on highly parallel computers, vision algorithms that combine stereo, texture, and multi-resolution techniques for determining local surface orientation and depth. Such algorithms will immediately serve as front-ends for autonomous land vehicle navigation systems. During the first year of the project, efforts have concentrated on two fronts: first, developing and testing the parallel programming environment that will be used to develop, implement, and test the parallel vision algorithms; second, developing and testing multi-resolution stereo and texture algorithms. This report describes the status and progress on these two fronts. The authors first describe the programming environment developed and the mapping scheme that allows efficient use of the Connection Machine for pyramid (multi-resolution) algorithms. Second, they present algorithms and test results for multi-resolution stereo and texture algorithms. Initial results of the starting efforts to integrate the stereo and texture algorithms are also presented.
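    The pyramid (multi-resolution) processing mentioned above can be sketched in a few lines. The following is a hypothetical serial version using plain NumPy mean-pooling as the subsampling step, not the Connection Machine mapping the report describes:

```python
import numpy as np

def build_pyramid(image, levels):
    """Build a multi-resolution pyramid by 2x2 mean-pooling at each level.

    A real system would low-pass filter (e.g. with a Gaussian) before
    subsampling; mean-pooling keeps the sketch self-contained.
    """
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape
        # Crop to even dimensions, then average each 2x2 block.
        img = pyramid[-1][: h // 2 * 2, : w // 2 * 2]
        coarser = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarser)
    return pyramid

pyr = build_pyramid(np.arange(64, dtype=float).reshape(8, 8), levels=3)
```

    Coarse levels are searched first (e.g. for stereo correspondence), and the result seeds the search at the next finer level.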

  8. Impact on stereo-acuity of two presbyopia correction approaches: monovision and small aperture inlay.

    PubMed

    Fernández, Enrique J; Schwarz, Christina; Prieto, Pedro M; Manzanera, Silvestre; Artal, Pablo

    2013-06-01

    Some of the different currently applied approaches that correct presbyopia may reduce stereovision. In this work, stereo-acuity was measured for two methods: (1) monovision and (2) small aperture inlay in one eye. When performing the experiment, a prototype of a binocular adaptive optics vision analyzer was employed. The system allowed simultaneous measurement and manipulation of the optics in both eyes of a subject. The apparatus incorporated two programmable spatial light modulators: one phase-only device using liquid crystal on silicon technology for wavefront manipulation and one intensity modulator for controlling the exit pupils. The prototype was also equipped with a stimulus generator for creating retinal disparity based on two micro-displays. The three-needle test was programmed for characterizing stereo-acuity. Subjects underwent a two-alternative forced-choice test. The following cases were tested for the stimulus placed at distance: (a) natural vision; (b) 1.5 D monovision; (c) 0.75 D monovision; (d) natural vision and small pupil; (e) 0.75 D monovision and small pupil. In all cases the standard pupil diameter was 4 mm and the small pupil diameter was 1.6 mm. The use of a small aperture significantly reduced the negative impact of monovision on stereopsis. The results of the experiment suggest that combining micro-monovision with a small aperture, which is currently being implemented as a corneal inlay, can yield values of stereoacuity close to those attained under normal binocular vision.
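    Stereo-acuity thresholds like those measured here relate to depth through simple viewing geometry. The sketch below converts an angular threshold into a depth-discrimination limit using the small-angle approximation; the interpupillary distance is an assumed typical value, not a parameter from the study:

```python
import math

def depth_discrimination_threshold(delta_arcsec, distance_m, ipd_m=0.063):
    """Smallest detectable depth difference for an angular stereo-acuity
    threshold delta (arcseconds), from the small-angle approximation
    delta ~ b * dz / D^2, where b is the interocular baseline and D the
    viewing distance. ipd_m is a typical adult value (an assumption)."""
    delta_rad = delta_arcsec * math.pi / (180 * 3600)
    return delta_rad * distance_m ** 2 / ipd_m

# e.g. a 20 arcsec threshold at a 2 m viewing distance
dz = depth_discrimination_threshold(20, 2.0)
```

    Note the quadratic dependence on viewing distance: the same angular acuity discriminates far coarser depth steps at larger distances.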

  9. Impact on stereo-acuity of two presbyopia correction approaches: monovision and small aperture inlay

    PubMed Central

    Fernández, Enrique J.; Schwarz, Christina; Prieto, Pedro M.; Manzanera, Silvestre; Artal, Pablo

    2013-01-01

    Some of the different currently applied approaches that correct presbyopia may reduce stereovision. In this work, stereo-acuity was measured for two methods: (1) monovision and (2) small aperture inlay in one eye. When performing the experiment, a prototype of a binocular adaptive optics vision analyzer was employed. The system allowed simultaneous measurement and manipulation of the optics in both eyes of a subject. The apparatus incorporated two programmable spatial light modulators: one phase-only device using liquid crystal on silicon technology for wavefront manipulation and one intensity modulator for controlling the exit pupils. The prototype was also equipped with a stimulus generator for creating retinal disparity based on two micro-displays. The three-needle test was programmed for characterizing stereo-acuity. Subjects underwent a two-alternative forced-choice test. The following cases were tested for the stimulus placed at distance: (a) natural vision; (b) 1.5 D monovision; (c) 0.75 D monovision; (d) natural vision and small pupil; (e) 0.75 D monovision and small pupil. In all cases the standard pupil diameter was 4 mm and the small pupil diameter was 1.6 mm. The use of a small aperture significantly reduced the negative impact of monovision on stereopsis. The results of the experiment suggest that combining micro-monovision with a small aperture, which is currently being implemented as a corneal inlay, can yield values of stereoacuity close to those attained under normal binocular vision. PMID:23761846

  10. Stereo transparency in ambiguous stereograms generated by overlapping two identical dot patterns.

    PubMed

    Watanabe, Osamu

    2009-11-30

    In binocular vision, observers can perceive transparent surfaces by fusing a stereogram composed of two overlapping patterns with different disparities. When the dot patterns of the two surfaces are identical, the stereogram has potential matches leading to both transparency and non-transparency (or unitary-surface) perceptions. However, these two matching candidates are exclusive if the uniqueness assumption holds. This stereogram can be regarded as a random-dot version of the double-nail illusion and a stereo version of the locally paired-dot stimulus that was used to investigate the neural mechanism for motion transparency. Which surface is perceived in this ambiguous stereogram should reflect the properties of the transparency detection mechanism in human stereopsis. Here we perform a parametric study of the perceptual properties of this ambiguous stereogram. The results showed that the ability to detect transparency in this stereogram is determined by the contrast reversal ratio between the overlapping patterns within small regions about 0.4 deg wide, a width similar to the receptive field sizes of neurons in striate cortex. The results suggest that contrast reversal between two identical patterns modulates the activity of binocular neurons, and that this modulation has a crucial effect on the neural representation of overlapping disparities.

  11. Digital elevation modelling using ASTER stereo imagery.

    PubMed

    Forkuo, Eric Kwabena

    2010-04-01

    A digital elevation model (DEM) has in recent times become an integral part of the national spatial data infrastructure of many countries world-wide due to its invaluable importance. Although DEMs are mostly generated from contour maps, stereo aerial photographs, and air-borne and terrestrial laser scanning, stereo interpretation and auto-correlation from satellite image stereo-pairs such as SPOT, IRS, and the relatively new ASTER imagery is also an effective means of producing DEM data. In this study, terrain elevation data were derived by applying a photogrammetric process to ASTER stereo imagery. The quality of DEMs produced from ASTER stereo imagery was also analysed by comparison with a DEM produced from a topographic map at a scale of 1:50,000. To analyse the vertical accuracy of the generated ASTER DEM, fifty ground control points were extracted from the map and overlaid on the DEM. Results indicate that a root-mean-square error in elevation of +/- 14 m was achieved with ASTER stereo image data of good quality. The horizontal accuracy obtained from the ground control points was 14.77 m, which is within the acceptable range of +/- 7 m to +/- 25 m. The generated (15 m) DEM, as well as 20 m, 25 m, and 30 m pixel DEMs, was compared with the original map; the 15 m DEM conformed to the original map DEM more closely than the others. Overall, this analysis shows that the generated digital terrain model is acceptable.
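    The vertical-accuracy figure quoted above is a root-mean-square error over ground control points. A minimal sketch of that computation, with hypothetical elevations rather than the study's data:

```python
import math

def rmse(observed, reference):
    """Root-mean-square error between DEM elevations sampled at ground
    control points and their reference (map-derived) elevations."""
    assert len(observed) == len(reference)
    return math.sqrt(
        sum((o - r) ** 2 for o, r in zip(observed, reference)) / len(observed)
    )

# Hypothetical elevations (m) at a few control points.
dem_elev = [112.0, 98.5, 105.0, 120.3]
map_elev = [110.0, 100.0, 104.0, 121.0]
err = rmse(dem_elev, map_elev)
```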

  12. Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps

    PubMed Central

    Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi

    2015-01-01

    Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in rich texture regions and object boundaries where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel’s scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation by treating information obtained from a depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the 3.27% average error rate of previous state-of-the-art methods, our method provides an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other “fused” algorithms in terms of precision. PMID:26308003
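    The complementary-strengths idea above can be illustrated with a deliberately simplified per-pixel blend: trust stereo where the image is textured, trust the sensor elsewhere. This is a stand-in for intuition only, not the paper's pseudo-two-layer model:

```python
import numpy as np

def fuse_depth(stereo_depth, sensor_depth, image, window=3):
    """Blend two depth maps with per-pixel weights derived from local
    image texture (variance): high texture -> trust stereo matching,
    low texture -> trust the depth sensor. A simplified stand-in for
    the paper's model, not a reimplementation of it."""
    h, w = image.shape
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    var = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            var[y, x] = padded[y:y + window, x:x + window].var()
    # Weight in [0, 1): stereo weight grows with local texture.
    weight = var / (var + var.mean() + 1e-9)
    return weight * stereo_depth + (1 - weight) * sensor_depth

# Textureless image -> the sensor estimate should dominate everywhere.
flat = np.zeros((4, 4))
fused = fuse_depth(np.full((4, 4), 2.0), np.full((4, 4), 5.0), flat)
```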

  13. Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps.

    PubMed

    Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi

    2015-08-21

    Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in rich texture regions and object boundaries where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel's scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation by treating information obtained from a depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the 3.27% average error rate of previous state-of-the-art methods, our method provides an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other "fused" algorithms in terms of precision.

  14. Calibration of stereo-digital image correlation for deformation measurement of large engineering components

    NASA Astrophysics Data System (ADS)

    Shao, Xinxing; Dai, Xiangjun; Chen, Zhenning; Dai, Yuntong; Dong, Shuai; He, Xiaoyuan

    2016-12-01

    The development of stereo-digital image correlation (stereo-DIC) enables the application of vision-based techniques using digital cameras to the deformation measurement of materials and structures. Compared with traditional contact measurements, stereo-DIC is non-contact and non-intrusive and can obtain full-field deformation information. In this paper, a speckle-based calibration method is developed to calibrate a stereo-DIC system when the system is applied to deformation measurement of large engineering components. By combining speckle analysis with the classical relative orientation algorithm, the relative rotation and translation between cameras can be calibrated from analysis of experimental speckle images. For validation, the strain fields of a four-point bending beam and an axially loaded concrete column were determined by the proposed calibration method and stereo vision measurement. As a practical application, the proposed calibration method was applied to strain measurement of a ductile iron cylindrical vessel in a drop test. The measured results verify that the proposed calibration method is effective for deformation measurement of large engineering components.
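    Subset matching by normalized cross-correlation is the core operation of DIC and of the speckle analysis mentioned above. A minimal integer-pixel sketch (real stereo-DIC adds sub-pixel refinement and subset shape functions, omitted here):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two equally sized
    subsets: the similarity measure typically used in DIC to match a
    speckle subset between images (1.0 = perfect match)."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def match_subset(template, search_img):
    """Find the integer-pixel location in search_img whose subset best
    matches the template, by exhaustive ZNCC search."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for y in range(search_img.shape[0] - th + 1):
        for x in range(search_img.shape[1] - tw + 1):
            c = zncc(template, search_img[y:y + th, x:x + tw])
            if c > best:
                best, best_pos = c, (y, x)
    return best_pos, best

rng = np.random.default_rng(1)
speckle = rng.random((20, 20))
template = speckle[5:13, 7:15]  # 8x8 subset cut out at (5, 7)
pos, score = match_subset(template, speckle)
```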

  15. A stereo matching model observer for stereoscopic viewing of 3D medical images

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.; Muralidhar, Gautam S.

    2014-03-01

    Stereoscopic viewing of 3D medical imaging data has the potential to increase the detection of abnormalities. We present a new stereo model observer inspired by the characteristics of stereopsis in human vision. Given a stereo pair of images of an object (i.e., left and right images separated by a small displacement), the model observer first finds the corresponding points between the two views, and then fuses them together to create a 2D cyclopean view. Assuming that the cyclopean view has extracted most of the 3D information presented in the stereo pair, a channelized Hotelling observer (CHO) can be utilized to make decisions. We conduct a simulation study that attempts to mimic the detection of breast lesions on stereoscopic viewing of breast tomosynthesis projection images. We render voxel datasets that contain random 3D power-law noise to model normal breast tissues with various breast densities. 3D Gaussian signal is added to some of the datasets to model the presence of a breast lesion. By changing the separation angle between the two views, multiple stereo pairs of projection images are generated for each voxel dataset. The performance of the model is evaluated in terms of the accuracy of binary decisions on the presence of the simulated lesions.
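    The channelized Hotelling observer (CHO) decision step can be sketched as follows. The toy one-dimensional "images" and box channels below are illustrative assumptions, not the study's tomosynthesis data or channel set:

```python
import numpy as np

rng = np.random.default_rng(0)

def cho_template(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer: reduce each image to channel
    outputs v = U^T x, then build the Hotelling template
    w = S^-1 (mean_v_signal - mean_v_noise) from training statistics."""
    vs = signal_imgs @ channels
    vn = noise_imgs @ channels
    S = np.cov(np.vstack([vs, vn]).T)  # pooled channel covariance
    return np.linalg.solve(S, vs.mean(axis=0) - vn.mean(axis=0))

# Toy 16-pixel "images": white noise, plus an additive signal when present.
d, n = 16, 200
signal = np.zeros(d)
signal[5:8] = 1.0
# Four non-overlapping box channels (a stand-in for e.g. Laguerre-Gauss ones).
channels = np.zeros((d, 4))
for i in range(4):
    channels[4 * i:4 * (i + 1), i] = 0.5

w = cho_template(rng.normal(size=(n, d)) + signal,
                 rng.normal(size=(n, d)), channels)

def decision_statistic(images):
    # Higher values favor the "signal present" hypothesis.
    return images @ channels @ w

t_sig = decision_statistic(rng.normal(size=(n, d)) + signal)
t_noise = decision_statistic(rng.normal(size=(n, d)))
```

    Detection performance is then summarized by comparing the statistic's distributions under the two hypotheses (e.g. via AUC).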

  16. Hybrid Image-Plane/Stereo Manipulation

    NASA Technical Reports Server (NTRS)

    Baumgartner, Eric; Robinson, Matthew

    2004-01-01

    Hybrid Image-Plane/Stereo (HIPS) manipulation is a method of processing image data, and of controlling a robotic manipulator arm in response to the data, that enables the manipulator arm to place an end-effector (an instrument or tool) precisely with respect to a target (see figure). Unlike other stereoscopic machine-vision-based methods of controlling robots, this method is robust in the face of calibration errors and changes in calibration during operation. In this method, a stereoscopic pair of cameras on the robot first acquires images of the manipulator at a set of predefined poses. The image data are processed to obtain image-plane coordinates of known visible features of the end-effector. Next, there is computed an initial calibration in the form of a mapping between (1) the image-plane coordinates and (2) the nominal three-dimensional coordinates of the noted end-effector features in a reference frame fixed to the main robot body at the base of the manipulator. The nominal three-dimensional coordinates are obtained by use of the nominal forward kinematics of the manipulator arm, that is, calculated by use of the currently measured manipulator joint angles and previously measured lengths of manipulator arm segments, under the assumption that the arm segments are rigid, that the arm lengths are constant, and that there is no backlash. It is understood from the outset that these nominal three-dimensional coordinates are likely to contain possibly significant calibration errors, but the effects of the errors are progressively reduced, as described next. As the end-effector is moved toward the target, the calibration is updated repeatedly by use of data from newly acquired images of the end-effector and of the corresponding nominal coordinates in the manipulator reference frame.
By use of the updated calibration, the coordinates of the target are computed in manipulator-reference-frame coordinates and then used to compute the necessary manipulator joint angles to position

  17. STEREO interplanetary shocks and foreshocks

    SciTech Connect

    Blanco-Cano, X.; Kajdic, P.; Aguilar-Rodriguez, E.; Russell, C. T.; Jian, L. K.; Luhmann, J. G.

    2013-06-13

    We use STEREO data to study shocks driven by stream interactions and the waves associated with them. During the years of the extended solar minimum 2007-2010, stream interaction shocks have Mach numbers between 1.1 and 3.8 and θBn ≈ 20°-86°. We find a variety of waves, including whistlers and low-frequency fluctuations. Upstream whistler waves may be generated at the shock, and upstream ultra-low-frequency (ULF) waves can be driven locally by ion instabilities. The downstream wave spectra can be formed by both locally generated perturbations and shock-transmitted waves. We find that many quasi-perpendicular shocks can be accompanied by ULF wave and ion foreshocks, which is in contrast to Earth's bow shock. Fluctuations downstream of quasi-parallel shocks tend to have larger amplitudes than waves downstream of quasi-perpendicular shocks. Proton foreshocks of shocks driven by stream interactions have extensions dr ≤ 0.05 AU. This is smaller than foreshock extensions for ICME-driven shocks. The difference in foreshock extensions is related to the fact that ICME-driven shocks are formed closer to the Sun and therefore begin to accelerate particles very early in their existence, while stream interaction shocks form at ~1 AU and have been producing suprathermal particles for a shorter time.

  18. Dynamic programming and graph algorithms in computer vision.

    PubMed

    Felzenszwalb, Pedro F; Zabih, Ramin

    2011-04-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.
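    A classic instance of the dynamic programming approach to stereo is per-scanline disparity optimization. The following minimal sketch uses a simple smoothness chain (matching cost plus a penalty on disparity changes between neighboring pixels), rather than the occlusion-aware formulations surveyed in the paper:

```python
import numpy as np

def scanline_stereo(left, right, max_disp=4, smooth=0.5):
    """1D dynamic programming for stereo on a single scanline: minimize
    the sum of matching costs |L[x] - R[x-d]| plus a smoothness penalty
    smooth * |d_x - d_{x-1}| along the line."""
    n, D, INF = len(left), max_disp + 1, 1e18
    cost = np.full((n, D), INF)
    back = np.zeros((n, D), dtype=int)

    def match(x, d):
        return abs(left[x] - right[x - d]) if x - d >= 0 else INF

    for d in range(D):
        cost[0, d] = match(0, d)
    for x in range(1, n):
        for d in range(D):
            m = match(x, d)
            if m >= INF:
                continue  # disparity d is out of range at this pixel
            prev = cost[x - 1] + smooth * np.abs(np.arange(D) - d)
            back[x, d] = int(np.argmin(prev))
            cost[x, d] = m + prev[back[x, d]]

    # Backtrack the cheapest disparity path.
    disp = np.zeros(n, dtype=int)
    disp[-1] = int(np.argmin(cost[-1]))
    for x in range(n - 1, 0, -1):
        disp[x - 1] = back[x, disp[x]]
    return disp

# The left line repeats the right line shifted by 2 pixels, so the
# expected disparity is 2 wherever a match exists.
left = np.array([5, 9, 5, 9, 1, 7, 3, 8], dtype=float)
right = np.array([5, 9, 1, 7, 3, 8, 2, 6], dtype=float)
disp = scanline_stereo(left, right)
```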

  19. Development of Non-contact Respiratory Monitoring System for Newborn Using a FG Vision Sensor

    NASA Astrophysics Data System (ADS)

    Kurami, Yoshiyuki; Itoh, Yushi; Natori, Michiya; Ohzeki, Kazuo; Aoki, Yoshimitsu

    In recent years, advances in neonatal care have been strongly desired as the birth rate of low-birth-weight babies increases. Respiration in low-birth-weight babies is particularly unstable because their central nervous system and respiratory function are immature, so such babies often develop respiratory disorders. In a NICU (Neonatal Intensive Care Unit), neonatal respiration is monitored continuously using a cardio-respiratory monitor and a pulse oximeter. These contact-type sensors can measure respiratory rate and SpO2 (Saturation of Peripheral Oxygen), but because a contact-type sensor might damage the newborn's skin, contact-based monitoring of neonatal respiration places a real burden on the patient. We therefore developed a respiratory monitoring system for newborns using a FG (Fiber Grating) vision sensor. The FG vision sensor is an active stereo vision sensor that permits non-contact 3D measurement. A respiratory waveform is calculated by detecting the vertical motion of the thoracic and abdominal regions during respiration. We performed a clinical experiment in the NICU and confirmed that the accuracy of the obtained respiratory waveform was high. Non-contact respiratory monitoring of newborns using a FG vision sensor thus enables a minimally invasive procedure.
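    Once a respiratory waveform has been extracted from the 3D motion data, the respiratory rate can be estimated from its dominant frequency. A minimal sketch; the sampling rate and band limits below are assumptions for illustration, not the system's actual parameters:

```python
import numpy as np

def respiratory_rate_bpm(waveform, fs):
    """Estimate respiratory rate (breaths/min) as the dominant frequency
    of the chest-motion waveform, via the FFT peak restricted to a
    plausible breathing band (0.3-2 Hz covers neonatal rates)."""
    x = np.asarray(waveform, dtype=float)
    x = x - x.mean()  # remove the DC offset before the FFT
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.3) & (freqs <= 2.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic waveform: a 1 Hz sinusoid (60 breaths/min) sampled at 30 fps.
fs = 30.0
t = np.arange(0, 20, 1 / fs)
rate = respiratory_rate_bpm(np.sin(2 * np.pi * 1.0 * t), fs)
```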

  20. Precision calibration method for binocular vision measurement systems based on arbitrary translations and 3D-connection information

    NASA Astrophysics Data System (ADS)

    Yang, Jinghao; Jia, Zhenyuan; Liu, Wei; Fan, Chaonan; Xu, Pengtao; Wang, Fuji; Liu, Yang

    2016-10-01

    Binocular vision systems play an important role in computer vision, and high-precision system calibration is a necessary and indispensable process. In this paper, an improved calibration method for binocular stereo vision measurement systems based on arbitrary translations and 3D-connection information is proposed. First, a new method for calibrating the intrinsic parameters of binocular vision system based on two translations with an arbitrary angle difference is presented, which reduces the effect of the deviation of the motion actuator on calibration accuracy. This method is simpler and more accurate than existing active-vision calibration methods and can provide a better initial value for the determination of extrinsic parameters. Second, a 3D-connection calibration and optimization method is developed that links the information of the calibration target in different positions, further improving the accuracy of the system calibration. Calibration experiments show that the calibration error can be reduced to 0.09%, outperforming traditional methods for the experiments of this study.
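    Once the intrinsic and extrinsic parameters are calibrated, a binocular system measures a 3D point from its two image projections by triangulation. A minimal linear (DLT) sketch with a hypothetical rig (the intrinsics and baseline below are made-up values):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: each camera contributes two
    homogeneous equations from x ~ P X; stack them and take the null
    space via SVD. P1, P2 are 3x4 projection matrices, x1, x2 pixels."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical rig: identical intrinsics, second camera offset 0.2 m
# along x (a pure-baseline stereo pair).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])

X_true = np.array([0.1, -0.05, 2.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

    With noiseless projections the linear solution is exact; with noisy detections it serves as the initial value for a reprojection-error refinement.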

  1. All Vision Impairment

    MedlinePlus

    Vision impairment is defined as the ... 2010 U.S. Age-Specific Prevalence Rates for Vision Impairment by Age and Race/Ethnicity ...

  2. Improving Vision

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Many people are familiar with the popular science fiction series Star Trek: The Next Generation, a show featuring a blind character named Geordi La Forge, whose visor-like glasses enable him to see. What many people do not know is that a product very similar to Geordi's glasses is available to assist people with vision conditions, and a NASA engineer's expertise contributed to its development. The JORDY(trademark) (Joint Optical Reflective Display) device, designed and manufactured by a privately-held medical device company known as Enhanced Vision, enables people with low vision to read, write, and watch television. Low vision, which includes macular degeneration, diabetic retinopathy, and glaucoma, describes eyesight that is 20/70 or worse, and cannot be fully corrected with conventional glasses.

  3. Vision Underwater.

    ERIC Educational Resources Information Center

    Levine, Joseph S.

    1980-01-01

    Provides information regarding underwater vision. Includes a discussion of optically important interfaces, increased eye size of organisms at greater depths, visual peculiarities regarding the habitat of the coastal environment, and various pigment visual systems. (CS)

  4. [Central vision].

    PubMed

    Fahle, M

    2004-07-01

    The clinical assessment of vision by means of optotypes by no means tests only two-point resolution, since correctly naming the letters or digits requires prior visual object recognition. Cortical lesions can massively deteriorate vision, up to a "Seelenblindheit" (mind-blindness), in spite of intact optics and retina. Different processing levels are involved in the analysis, and each can be individually defective, leading to disorders ranging from impaired visual discrimination to agnosia or anomia.

  5. Retina-specific activation of a sustained hypoxia-like response leads to severe retinal degeneration and loss of vision.

    PubMed

    Lange, Christina; Caprara, Christian; Tanimoto, Naoyuki; Beck, Susanne; Huber, Gesine; Samardzija, Marijana; Seeliger, Mathias; Grimm, Christian

    2011-01-01

    Loss of vision and blindness in human patients is often caused by the degeneration of neuronal cells in the retina. In mouse models, photoreceptors can be protected from death by hypoxic preconditioning. Preconditioning in low oxygen stabilizes and activates hypoxia inducible transcription factors (HIFs), which play a major role in the hypoxic response of tissues including the retina. We show that a tissue-specific knockdown of von Hippel-Lindau protein (VHL) activated HIF transcription factors in normoxic conditions in the retina. Sustained activation of HIF1 and HIF2 was accompanied by persisting embryonic vasculatures in the posterior eye and the iris. Embryonic vessels persisted into adulthood and led to a severely abnormal mature vessel system with vessels penetrating the photoreceptor layer in adult mice. The sustained hypoxia-like response also activated the leukemia inhibitory factor (LIF)-controlled endogenous molecular cell survival pathway. However, this was not sufficient to protect the retina against massive cell death in all retinal layers of adult mice. Caspases 1, 3 and 8 were upregulated during the degeneration as were several VHL target genes connected to the extracellular matrix. Misregulation of these genes may influence retinal structure and may therefore facilitate growth of vessels into the photoreceptor layer. Thus, an early and sustained activation of a hypoxia-like response in retinal cells leads to abnormal vasculature and severe retinal degeneration in the adult mouse retina.

  6. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks, and pressures in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes disabled. A technology for both real-time examination and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated via a number of random matched points. The process is: (i) estimating the fundamental matrix via the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regionally weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data prove that the method improves estimation accuracy.
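    Step (iii) above, recovering the external parameters from the essential matrix, is commonly done with the SVD-based decomposition, which yields four (R, t) candidates. A sketch with a made-up pose (the cheirality test that selects the physically correct candidate is omitted):

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix into its rotation candidates
    R in {U W V^T, U W^T V^T} and translation direction t = ±u3.
    The physically correct pair is then chosen by a cheirality test
    (reconstructed points in front of both cameras), omitted here."""
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (det = +1).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
    return U @ W @ Vt, U @ W.T @ Vt, U[:, 2]

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# Build an essential matrix E = [t]_x R from a known pose, then check
# that the known rotation is among the recovered candidates.
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])  # 90 deg about z
t_true = np.array([1.0, 0, 0])
E = skew(t_true) @ R_true
R1, R2, t = decompose_essential(E)
```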

  7. Auto-converging stereo cameras for 3D robotic tele-operation

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Aycock, Todd; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow for operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an Automatic Convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than adjustment of the vision system. The autoconvergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
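    The geometry behind convergence adjustment is simple: each camera toes in so that the two optical axes intersect at the viewed object. A sketch with an assumed baseline (the figures below are illustrative, not the TALON kit's parameters):

```python
import math

def convergence_angle_deg(baseline_m, distance_m):
    """Total convergence angle so that both optical axes intersect at
    the object: each camera toes in by atan((baseline/2) / distance)."""
    return 2 * math.degrees(math.atan((baseline_m / 2) / distance_m))

# e.g. a hypothetical 12 cm stereo baseline viewing an object at 2 m
angle = convergence_angle_deg(0.12, 2.0)
```

    An auto-convergence algorithm effectively inverts this relation: it estimates the dominant scene distance and servos the cameras toward the corresponding angle.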

  8. Topview stereo: combining vehicle-mounted wide-angle cameras to a distance sensor array

    NASA Astrophysics Data System (ADS)

    Houben, Sebastian

    2015-03-01

    The variety of vehicle-mounted sensors needed to fulfill a growing number of driver assistance tasks has become a substantial factor in automobile manufacturing cost. We present a stereo distance method exploiting the overlapping fields of view of a multi-camera fisheye surround view system, as used for near-range vehicle surveillance tasks, e.g. in parking maneuvers. Hence, we aim at creating a new input signal from sensors that are already installed. Particular properties of wide-angle cameras (e.g. changing resolution) demand an adaptation of the image processing pipeline to several problems that do not arise in classical stereo vision performed with cameras carefully designed for that purpose. We introduce the algorithms for rectification, correspondence analysis, and regularization of the disparity image, discuss the causes of the caveats shown and how to avoid them, and present first results on a prototype topview setup.

  9. Fractional stereo matching using expectation-maximization.

    PubMed

    Xiong, Wei; Chung, Hin Shun; Jia, Jiaya

    2009-03-01

    In our fractional stereo matching problem, a foreground object with a fractional boundary is blended with a background scene using unknown transparencies. Due to the spatially varying disparities in different layers, one foreground pixel may be blended with different background pixels in stereo images, so the color constancy commonly assumed in traditional stereo matching no longer holds. To tackle this problem, we introduce a probabilistic framework constraining the matching of pixel colors, disparities, and alpha values in different layers, and propose an automatic optimization method that solves a maximum a posteriori (MAP) problem using expectation-maximization (EM), given only a short-baseline stereo input image pair. Our method encodes the effect of background occlusion by layer blending without requiring a special detection process. The alpha computation in our unified framework can be regarded as a new approach to natural image matting, one that appropriately handles the situation in which the background color is similar to that of the foreground object. We demonstrate the efficacy of our method by experimenting with challenging stereo images and making comparisons with state-of-the-art methods.

  10. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gilden, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  11. Efficient hybrid tree-based stereo matching with applications to postcapture image refocusing.

    PubMed

    Vu, Dung T; Chidester, Benjamin; Yang, Hongsheng; Do, Minh N; Lu, Jiangbo

    2014-08-01

    Estimating dense correspondence or depth information from a pair of stereoscopic images is a fundamental problem in computer vision, which finds a range of important applications. Despite intensive past research efforts in this topic, it still remains challenging to recover the depth information both reliably and efficiently, especially when the input images contain weakly textured regions or are captured under uncontrolled, real-life conditions. Striking a desired balance between computational efficiency and estimation quality, a hybrid minimum spanning tree-based stereo matching method is proposed in this paper. Our method performs efficient nonlocal cost aggregation at pixel-level and region-level, and then adaptively fuses the resulting costs together to leverage their respective strength in handling large textureless regions and fine depth discontinuities. Experiments on the standard Middlebury stereo benchmark show that the proposed stereo method outperforms all prior local and nonlocal aggregation-based methods, achieving particularly noticeable improvements for low texture regions. To further demonstrate the effectiveness of the proposed stereo method, and motivated by the increasing desire to generate expressive depth-induced photo effects, this paper next addresses the emerging application of interactive depth-of-field rendering given a real-world stereo image pair. To this end, we propose an accurate thin-lens model for synthetic depth-of-field rendering, which considers the user-stroke placement and camera-specific parameters and performs the pixel-adapted Gaussian blurring in a principled way. Taking ~1.5 s to process a pair of 640×360 images in the off-line step, our system named Scribble2focus allows users to interactively select in-focus regions by simple strokes using the touch screen and returns the synthetically refocused images instantly to the user.
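    The standard thin-lens relationship behind this kind of synthetic refocusing can be sketched as follows: per-pixel depth determines a circle-of-confusion (CoC) diameter, which in turn sets the blur-kernel size. All parameter values and the sigma = CoC/2 mapping below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the thin-lens blur model used in synthetic
# depth-of-field rendering: depth -> circle of confusion -> Gaussian sigma.
import numpy as np

def coc_diameter(depth, focus_depth, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter (same length units throughout).
    Zero exactly at the focus plane, growing away from it."""
    return aperture * focal_len * np.abs(depth - focus_depth) / (
        depth * (focus_depth - focal_len))

depths = np.array([0.5, 1.0, 2.0, 4.0])        # metres (illustrative)
c = coc_diameter(depths, focus_depth=1.0, focal_len=0.05, aperture=0.02)
sigma = c / 2.0                                # assumed kernel mapping
print(np.round(c * 1000, 3))                   # CoC in mm
```

    A per-pixel Gaussian blur with this sigma field is the "pixel-adapted Gaussian blurring" idea in its simplest form.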

  12. Vector lifting schemes for stereo image coding.

    PubMed

    Kaaniche, Mounir; Benazza-Benyahia, Amel; Pesquet-Popescu, Béatrice; Pesquet, Jean-Christophe

    2009-11-01

    Many research efforts have been devoted to the improvement of stereo image coding techniques for storage or transmission. In this paper, we are mainly interested in lossy-to-lossless coding schemes for stereo images allowing progressive reconstruction. The most commonly used approaches for stereo compression are based on disparity compensation techniques. The basic principle involved in this technique first consists of estimating the disparity map. Then, one image is considered as a reference and the other is predicted in order to generate a residual image. In this paper, we propose a novel approach, based on vector lifting schemes (VLS), which offers the advantage of generating two compact multiresolution representations of the left and the right views. We present two versions of this new scheme. A theoretical analysis of the performance of the considered VLS is also conducted. Experimental results indicate a significant improvement using the proposed structures compared with conventional methods.
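    The lifting idea the vector scheme builds on can be shown in its simplest single-image, integer form: split samples into even and odd, predict the odds from neighbouring evens, then update the evens from the prediction residuals. The vector lifting scheme generalises this by letting one view's samples enter the other view's prediction; that cross-view coupling is omitted in this sketch, and the 5/3-style filters below are an illustrative choice.

```python
# Minimal integer lifting (predict/update) sketch, assuming an
# even-length input. Perfect reconstruction holds by construction,
# because the inverse subtracts exactly what the forward step added.
def lift_forward(x):
    even, odd = x[0::2], x[1::2]
    detail = [o - (even[i] + even[min(i + 1, len(even) - 1)]) // 2
              for i, o in enumerate(odd)]
    approx = [e + (detail[max(i - 1, 0)] + detail[min(i, len(detail) - 1)] + 2) // 4
              for i, e in enumerate(even)]
    return approx, detail

def lift_inverse(approx, detail):
    even = [a - (detail[max(i - 1, 0)] + detail[min(i, len(detail) - 1)] + 2) // 4
            for i, a in enumerate(approx)]
    odd = [d + (even[i] + even[min(i + 1, len(even) - 1)]) // 2
           for i, d in enumerate(detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

x = [3, 7, 1, 8, 2, 9, 4, 6]
print(lift_inverse(*lift_forward(x)) == x)    # lossless round trip
```

    The detail signal is small wherever the prediction is good, which is what makes lifting attractive for the lossy-to-lossless, progressive coding described in the abstract.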

  13. [Evaluation of condition and factors affecting activity effectiveness and visual performance of pilots who use night vision goggles during the helicopter flights].

    PubMed

    Aleksandrov, A S; Davydov, V V; Lapa, V V; Minakov, A A; Sukhanov, V V; Chistov, S D

    2014-07-01

    Based on an analysis of questionnaires, the authors identified factors that affect the activity effectiveness and visual performance of pilots who use night vision goggles during helicopter flights: the difficulty of flight tasks, flying conditions, and attitude illusions. The authors propose possible ways to reduce the impact of these factors.

  14. Community Vision and Interagency Alignment: A Community Planning Process to Promote Active Transportation.

    PubMed

    DeGregory, Sarah Timmins; Chaudhury, Nupur; Kennedy, Patrick; Noyes, Philip; Maybank, Aletha

    2016-04-01

    In 2010, the Brooklyn Active Transportation Community Planning Initiative launched in 2 New York City neighborhoods. Over a 2-year planning period, residents participated in surveys, school and community forums, neighborhood street assessments, and activation events, activities that highlighted the need for safer streets locally. Consensus among residents and key multisectoral stakeholders, including city agencies and community-based organizations, was garnered in support of a planned expansion of bicycling infrastructure. The process of building on community assets and applying a collective impact approach yielded changes in the built environment, attracted new partners and resources, and helped to restore a sense of power among residents.

  15. When a photograph can be heard: vision activates the auditory cortex within 110 ms.

    PubMed

    Proverbio, Alice Mado; D'Aniello, Guido Edoardo; Adorni, Roberta; Zani, Alberto

    2011-01-01

    As the makers of silent movies knew well, it is not necessary to provide an actual auditory stimulus to activate the sensation of sounds typically associated with what we are viewing. Thus, you could almost hear the neigh of Rodolfo Valentino's horse, even though the film was mute. Evidence is provided that the mere sight of a photograph associated with a sound can activate the associative auditory cortex. High-density ERPs were recorded in 15 participants while they viewed hundreds of perceptually matched images that were associated (or not) with a given sound. Sound stimuli were discriminated from non-sound stimuli as early as 110 ms. SwLORETA reconstructions showed common activation of ventral stream areas for both types of stimuli and of the associative temporal cortex, at the earliest stage, only for sound stimuli. The primary auditory cortex (BA41) was also activated by sound images after approximately 200 ms.

  16. Photometric stereo sensor for robot-assisted industrial quality inspection of coated composite material surfaces

    NASA Astrophysics Data System (ADS)

    Weigl, Eva; Zambal, Sebastian; Stöger, Matthias; Eitzinger, Christian

    2015-04-01

    While composite materials are increasingly used in modern industry, the quality control in terms of vision-based surface inspection remains a challenging task. Due to the often complex and three-dimensional structures, a manual inspection of these components is nearly impossible. We present a photometric stereo sensor system including an industrial robotic arm for positioning the sensor relative to the inspected part. Two approaches are discussed: stop-and-go positioning and continuous positioning. Results are presented on typical defects that appear on various composite material surfaces in the production process.

  17. Presidential Visions.

    ERIC Educational Resources Information Center

    Gallin, Alice, Ed.

    1992-01-01

    This journal issue is devoted to the theme of university presidents and their visions of the future. It presents the inaugural addresses and speeches of 16 Catholic college and university presidents focusing on their goals, ambitions, and reasons for choosing to become higher education leaders at this particular time in the history of education in…

  18. Visions 2001.

    ERIC Educational Resources Information Center

    Rivero, Victor; Norman, Michele

    2001-01-01

    Reports on the views of 18 educational leaders regarding their vision on the future of education in an information age. Topics include people's diverse needs; relationships between morality, ethics, values, and technology; leadership; parental involvement; online courses from multiple higher education institutions; teachers' role; technology…

  19. STEREO Captures Fastest CME to Date

    NASA Video Gallery

    This movie shows a coronal mass ejection (CME) on the sun from July 22, 2012 at 10:00 PM EDT until 2 AM on July 23 as captured by NASA's Solar TErrestrial RElations Observatory-Ahead (STEREO-A). Be...

  20. System for clinical photometric stereo endoscopy

    NASA Astrophysics Data System (ADS)

    Durr, Nicholas J.; González, Germán.; Lim, Daryl; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.; Parot, Vicente

    2014-02-01

    Photometric stereo endoscopy is a technique that captures information about the high-spatial-frequency topography of the field of view simultaneously with a conventional color image. Here we describe a system that will enable photometric stereo endoscopy to be clinically evaluated in the large intestine of human patients. The clinical photometric stereo endoscopy system consists of a commercial gastroscope, a commercial video processor, an image capturing and processing unit, custom synchronization electronics, white light LEDs, a set of four fibers with diffusing tips, and an alignment cap. The custom pieces that come into contact with the patient are composed of biocompatible materials that can be sterilized before use. The components can then be assembled in the endoscopy suite before use. The resulting endoscope has the same outer diameter as a conventional colonoscope (14 mm), plugs into a commercial video processor, captures topography and color images at 15 Hz, and displays the conventional color image to the gastroenterologist in real-time. We show that this system can capture a color and topographical video in a tubular colon phantom, demonstrating robustness to complex geometries and motion. The reported system is suitable for in vivo evaluation of photometric stereo endoscopy in the human large intestine.
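    The core computation that photometric-stereo systems like this rely on can be sketched generically: under a Lambertian assumption, intensities measured under several known light directions determine the surface normal and albedo at each pixel by least squares. This is the textbook formulation, not the clinical system's processing chain.

```python
# Minimal Lambertian photometric-stereo sketch: recover one pixel's
# normal and albedo from intensities under 4 known light directions.
import numpy as np

def recover_normal(light_dirs, intensities):
    """light_dirs: (k, 3) unit vectors; intensities: (k,) observations.
    Solves I = albedo * (L @ n) in the least-squares sense."""
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# Synthetic check: a known normal lit from four directions, no noise.
n_true = np.array([0.0, 0.6, 0.8])
L = np.array([[0, 0, 1], [1, 0, 1], [-1, 0, 1], [0, 1, 1]], float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
I = 0.5 * L @ n_true                     # albedo 0.5, no shadows
n_est, rho = recover_normal(L, I)
print(np.allclose(n_est, n_true), round(rho, 3))
```

    Four fibers with diffusing tips, as in the described endoscope, map naturally onto the k = 4 light directions in this formulation; real tissue adds specularities and shadows that the least-squares model alone does not handle.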

  1. Multiflash stereopsis: depth-edge-preserving stereo with small baseline illumination.

    PubMed

    Feris, Rogerio; Raskar, Ramesh; Chen, Longbin; Tan, Karhan; Turk, Matthew

    2008-01-01

    Traditional stereo matching algorithms are limited in their ability to produce accurate results near depth discontinuities, due to partial occlusions and violation of smoothness constraints. In this paper, we use small baseline multi-flash illumination to produce a rich set of feature maps that enable acquisition of discontinuity preserving point correspondences. First, from a single multi-flash camera, we formulate a qualitative depth map using a gradient domain method that encodes object relative distances. Then, in a multiview setup, we exploit shadows created by light sources to compute an occlusion map. Finally, we demonstrate the usefulness of these feature maps by incorporating them into two different dense stereo correspondence algorithms, the first based on local search and the second based on belief propagation. Experimental results show that our enhanced stereo algorithms are able to extract high quality, discontinuity preserving correspondence maps from scenes that are extremely challenging for conventional stereo methods. We also demonstrate that small baseline illumination can be useful to handle specular reflections in stereo imagery. Different from most existing active illumination techniques, our method is simple, inexpensive, compact, and requires no calibration of light sources.

  2. Healthy Vision Tips

    MedlinePlus

    Healthy vision starts with you! Use these ...

  3. Blindness and vision loss

    MedlinePlus

    ... eye (chemical burns or sports injuries) Diabetes Glaucoma Macular degeneration The type of partial vision loss may differ, ... tunnel vision and missing areas of vision With macular degeneration, the side vision is normal but the central ...

  4. Girls' Sports and Physical Activities in the Community: An Inclusive Vision for the New Millennium.

    ERIC Educational Resources Information Center

    Varpalotai, Aniko; Doherty, Alison

    The Gender Equity in Recreation Services Policy for the City of London (Ontario, Canada, November 1996) was the first municipal policy of its kind in Canada. It followed the development of the Sport Canada Policy on Women in Sport and the Ontario Policy on Full and Fair Access for Women and Girls in Sport and Physical Activity. It resulted from…

  5. Multispectral photometric stereo for acquiring high-fidelity surface normals.

    PubMed

    Nam, Giljoo; Kim, Min H

    2014-01-01

    Multispectral imaging and photometric stereo are common in 3D imaging but rarely have been combined. Reconstructing a 3D object's shape using photometric stereo is challenging owing to indirect illumination, specular reflection, and self-shadows, and removing interreflection in photometric stereo is problematic. A new multispectral photometric-stereo method removes interreflection on diffuse materials using multispectral-reflectance information and reconstructs 3D shapes with high accuracy. You can integrate this method into photometric-stereo systems by simply substituting the original camera with a multispectral camera.

  6. Bayesian Stereo Matching Method Based on Edge Constraints.

    PubMed

    Li, Jie; Shi, Wenxuan; Deng, Dexiang; Jia, Wenyan; Sun, Mingui

    2012-12-01

    A new global stereo matching method is presented that focuses on the handling of disparity discontinuities and occlusion. The Bayesian approach is utilized for the dense stereo matching problem, formulated as a maximum a posteriori Markov Random Field (MAP-MRF) problem. In order to improve stereo matching performance, edges are incorporated into the Bayesian model as a soft constraint. Accelerated belief propagation is applied to obtain the maximum a posteriori estimates in the Markov random field. The proposed algorithm is evaluated using the Middlebury stereo benchmark. Experimental results, compared with some state-of-the-art stereo matching methods, demonstrate that the proposed method provides superior disparity maps with subpixel precision.
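    For contrast with such global MAP-MRF methods, the local winner-take-all baseline they improve on is easy to state: pick, per pixel, the disparity minimising a window-matching cost. The sketch below uses a sum-of-squared-differences cost; it is a reference implementation of the baseline, not the paper's algorithm.

```python
# Local block-matching stereo baseline (winner-take-all over SSD cost).
import numpy as np

def block_match(left, right, max_disp, win=1):
    """Per-pixel disparity for a rectified pair; win is the half-window."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    pad = win
    L = np.pad(left, pad, mode='edge').astype(float)
    R = np.pad(right, pad, mode='edge').astype(float)
    for y in range(h):
        for x in range(w):
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                pl = L[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
                pr = R[y:y + 2 * pad + 1, x - d:x - d + 2 * pad + 1]
                cost = np.sum((pl - pr) ** 2)
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic check: right view is the left view shifted by two pixels.
rng = np.random.default_rng(0)
left = rng.random((8, 16))
right = np.roll(left, -2, axis=1)
print((block_match(left, right, max_disp=4)[2:-2, 4:-4] == 2).all())
```

    Because each pixel decides independently, this baseline produces noisy maps in textureless regions and bleeds across depth edges, which is exactly what the edge-constrained MRF formulation is designed to regularise.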

  7. IMPACT: Science Goals and Firsts with STEREO

    NASA Astrophysics Data System (ADS)

    Luhmann, J.; Curtis, D.; Impact Team

    STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long duration, detailed observations of 1 AU magnetic field structures, plasma and suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The PLASTIC instrument, described in a companion presentation, will make plasma ion composition measurements completing STEREO's comprehensive in-situ perspective. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment, also described in this session, will make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICME) and solar wind structures to CMEs and coronal holes observed at the Sun. IMPACT is also expected to test the patterns of solar energetic particle (SEP) events inferred from earlier statistical studies and multipoint measurements, and to resolve outstanding questions of source location(s) and seed populations with combinations of SEP composition measurements and other IMPACT and STEREO observations. Additional important insight will result from ongoing L1 viewpoint imaging and in-situ measurements from SOHO, WIND and ACE. In the late 1970s and early 1980s the Helios twin spacecraft mission obtained paired plasma, field and SEP in-situ data sets complemented by the Solwind and Solar Maximum Mission (SMM) coronagraph images of CMEs, and near-Earth in-situ data from IMP-8 and ISEE-3. IMPACT's updated instruments and modern data processing, transmitting, and analysis capabilities, with support from the other STEREO investigations and today's realistic 3D models of the corona, solar wind and CMEs, will prove or in some cases revolutionize the observational paradigms gleaned in the era of the Helios/Solwind/SMM/IMP-8/ISEE3 spacecraft combinations. The resulting views of in-situ space weather will without doubt leave us with a forever transformed perspective of L1

  8. Stereo-particle image velocimetry uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sayantan; Charonko, John J.; Vlachos, Pavlos P.

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from the 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment of the subject and potentially lays foundations applicable to volumetric
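    The chaining of calibration uncertainty into velocity uncertainty described above rests on standard first-order propagation: for y = f(x) with input covariance C, Cov(y) is approximately J C Jᵀ, with J the Jacobian of f. A generic sketch, with a toy Jacobian standing in for the paper's actual mapping function:

```python
# Generic first-order (linearised) uncertainty propagation sketch.
import numpy as np

def propagate(jacobian, cov_in):
    """Cov(y) ~ J C J^T for y = f(x), J = df/dx at the operating point."""
    return jacobian @ cov_in @ jacobian.T

J = np.array([[2.0, 0.5],
              [0.0, 1.5]])              # toy Jacobian (assumption)
C = np.diag([0.01, 0.04])              # independent input variances
C_out = propagate(J, C)
print(np.round(np.sqrt(np.diag(C_out)), 4))   # output standard deviations
```

    Applying this step twice, first from disparity to calibration coefficients and then from coefficients plus planar fields to the three velocity components, mirrors the structure of the framework in the abstract.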

  9. Colour, vision and ergonomics.

    PubMed

    Pinheiro, Cristina; da Silva, Fernando Moreira

    2012-01-01

    This paper is based on a research project, Visual Communication and Inclusive Design: Colour, Legibility and Aged Vision, developed at the Faculty of Architecture of Lisbon. The research aims to determine specific design principles to be applied to (printed) visual communication design objects so that they can be easily read and perceived by all. The study's target group comprised socially active individuals between 55 and 80 years of age, and we used cultural event posters as objects of study and observation. The main objective is to bring together the study of areas such as colour, vision, older people's colour vision, ergonomics, chromatic contrasts, typography and legibility. In the end we will produce a manual with guidelines and information for applying scientific knowledge to communication design practice. Within the normal aging process, visual functions gradually decline: the quality of vision worsens, and colour vision and contrast sensitivity are also affected. As people's needs change with age, design should help people and communities and improve quality of life in the present. By applying principles of visually accessible design and ergonomics, printed design objects (or interior spaces, urban environments, products, signage and all kinds of visual information) will be effective and easier on everyone's eyes, not only for visually impaired people but for all of us as we age.

  10. TrkB Activators for the Treatment of Traumatic Vision Loss

    DTIC Science & Technology

    2015-10-01

    receptor for brain-derived neurotrophic factor (BDNF). BDNF has been shown to have neuroprotective effects in a number of degeneration models ... HIOC has also demonstrated protective activity in an animal model for light-induced retinal degeneration, and can pass the blood-brain and blood... Pressure waves due to explosions can damage the neurons of the eye and visual centers in the brain, leading to functional loss of

  11. Present Vision--Future Vision.

    ERIC Educational Resources Information Center

    Fitterman, L. Jeffrey

    This paper addresses issues of current and future technology use for and by individuals with visual impairments and blindness in Florida. Present technology applications used in vision programs in Florida are individually described, including video enlarging, speech output, large inkprint, braille print, paperless braille, and tactual output…

  12. A Customized Vision System for Tracking Humans Wearing Reflective Safety Clothing from Industrial Vehicles and Machinery

    PubMed Central

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  13. Computer vision

    SciTech Connect

    Not Available

    1982-01-01

    This paper discusses material from areas such as artificial intelligence, psychology, computer graphics, and image processing. The intent is to assemble a selection of this material in a form that will serve both as a senior/graduate-level academic text and as a useful reference to those building vision systems. This book has a strong artificial intelligence flavour, emphasising the belief that both the intrinsic image information and the internal model of the world are important in successful vision systems. The book is organised into four parts, based on descriptions of objects at four different levels of abstraction. These are: generalised images (images and image-like entities); segmented images (images organised into subimages that are likely to correspond to interesting objects); geometric structures (quantitative models of image and world structures); and relational structures (complex symbolic descriptions of image and world structures). The book contains author and subject indexes.

  14. Pleiades Visions

    NASA Astrophysics Data System (ADS)

    Whitehouse, M.

    2016-01-01

    Pleiades Visions (2012) is my new musical composition for organ that takes inspiration from traditional lore and music associated with the Pleiades (Seven Sisters) star cluster from Australian Aboriginal, Native American, and Native Hawaiian cultures. It is based on my doctoral dissertation research incorporating techniques from the fields of ethnomusicology and cultural astronomy; this research likely represents a new area of inquiry for both fields. This large-scale work employs the organ's vast sonic resources to evoke the majesty of the night sky and the expansive landscapes of the homelands of the above-mentioned peoples. Other important themes in Pleiades Visions are those of place, origins, cosmology, and the creation of the world.

  15. A Chang'e-4 mission concept and vision of future Chinese lunar exploration activities

    NASA Astrophysics Data System (ADS)

    Wang, Qiong; Liu, Jizhong

    2016-10-01

    A novel concept for the Chinese Chang'e-4 lunar exploration mission is first presented in this paper. After the success of Chang'e-3, its backup probe, the Chang'e-4 lander/rover combination, would be upgraded and land on the unexplored lunar farside with the aid of a relay satellite near the second Earth-Moon Lagrange point. Mineralogical and geochemical surveys on the farside to study the formation and evolution of the lunar crust, and observations at low radio frequencies to track the signals of the Universe's Dark Ages, are priorities. Follow-up Chinese lunar exploration activities before 2030 are envisioned as building a robotic lunar science station through three to five missions. Finally, several methods of international cooperation are proposed.

  16. Cartesian visions.

    PubMed

    Fara, Patricia

    2008-12-01

    Few original portraits exist of René Descartes, yet his theories of vision were central to Enlightenment thought. French philosophers combined his emphasis on sight with the English approach of insisting that ideas are not innate, but must be built up from experience. In particular, Denis Diderot criticised Descartes's views by describing how Nicholas Saunderson--a blind physics professor at Cambridge--relied on touch. Diderot also made Saunderson the mouthpiece for some heretical arguments against the existence of God.

  17. 3-D sensor using relative stereo method for bio-seedlings transplanting system

    NASA Astrophysics Data System (ADS)

    Hiroyasu, Takehisa; Hayashi, Jun'ichiro; Hojo, Hirotaka; Hata, Seiji

    2005-12-01

    In plant factories producing clone seedlings, most production processes are highly automated, but the transplanting of small seedlings is difficult to automate because their shapes are not uniform, and handling the seedlings requires observing their shapes. Here, a 3-D vision system for a robot to be used in the transplanting process in a plant factory is introduced. The system employs a relative stereo method together with a slit-light measurement method; it can detect the shape of small seedlings and determine the cutting point. In this paper, the structure of the vision system and its image processing method are explained.

  18. Machine vision

    SciTech Connect

    Horn, D.

    1989-06-01

    To keep up with the speeds of modern production lines, most machine vision applications require very powerful computers (often parallel-processing machines), which process millions of points of data in real time. The human brain performs approximately 100 billion logical floating-point operations each second. That is 400 times the speed of a Cray-1 supercomputer. The right software must be developed for parallel-processing computers. The NSF has awarded Rensselaer Polytechnic Institute (Troy, N.Y.) a $2 million grant for parallel- and image-processing software research. Over the last 15 years, Rensselaer has been conducting image-processing research, including work with high-definition TV (HDTV) and image coding and understanding. A similar NSF grant has been awarded to Michigan State University (East Lansing, Mich.). Neural networks are supposed to emulate human learning patterns. These networks and their hardware implementations (neurocomputers) show a great deal of promise for machine vision systems because they allow the systems to use sensory input data more effectively. Neurocomputers excel at pattern-recognition tasks when input data are fuzzy or the vision algorithm is not optimal and is difficult to ascertain.

  19. Stereo pair design for cameras with a fovea

    NASA Technical Reports Server (NTRS)

    Chettri, Samir R.; Keefe, Michael; Zimmerman, John R.

    1992-01-01

    We describe the methodology for the design and selection of a stereo pair when the cameras have a greater concentration of sensing elements in the center of the image plane (fovea). Binocular vision is important for the purpose of depth estimation, which in turn is important in a variety of applications such as gaging and autonomous vehicle guidance. We assume that one camera has square pixels of size dv and the other has pixels of size rdv, where r is between 0 and 1. We then derive results for the average error, the maximum error, and the error distribution in the depth determination of a point. These results can be shown to be a general form of the results for the case when the cameras have equal sized pixels. We discuss the behavior of the depth estimation error as we vary r and the tradeoffs between the extra processing time and increased accuracy. Knowing these results makes it possible to study the case when we have a pair of cameras with a fovea.
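    The depth-error behaviour analysed in the abstract can be illustrated with the standard parallel-geometry relation z = f*B/d: a one-pixel disparity error then produces a depth error growing quadratically with depth, and scaling linearly with pixel size (so the finer, r-scaled pixels help proportionally). The numbers below are illustrative assumptions, not the paper's derivation.

```python
# First-order depth-quantisation error for a parallel stereo pair,
# z = f*B/d, with illustrative camera parameters.
f = 0.008        # focal length, m (assumption)
B = 0.12         # baseline, m (assumption)
dv = 10e-6       # pixel size of camera 1, m (assumption)
r = 0.5          # camera 2 pixels are r*dv, 0 < r < 1

def depth_err(z, pixel):
    """Depth error for a disparity error of one pixel, to first order:
    |dz| = z**2 * pixel / (f * B)."""
    return z**2 * pixel / (f * B)

for z in (1.0, 2.0, 4.0):
    print(z, round(depth_err(z, dv), 4), round(depth_err(z, r * dv), 4))
```

    Doubling the depth quadruples the error, while halving the pixel size (r = 0.5) halves it; the abstract's average-error and error-distribution results refine this first-order picture for the foveated case.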

  20. Merged Shape from Shading and Shape from Stereo for Planetary Topographic Mapping

    NASA Astrophysics Data System (ADS)

    Tyler, Laurence; Cook, Tony; Barnes, Dave; Paar, Gerhard; Kirk, Randolph

    2014-05-01

    Digital Elevation Models (DEMs) of the Moon and Mars have traditionally been produced from stereo imagery from orbit, or from surface landers or rovers. One core component of image-based DEM generation is stereo matching to find correspondences between images taken from different viewpoints. Stereo matchers that rely mostly on textural features in the images can fail to find enough matched points in areas lacking in contrast or surface texture. This can lead to blank or topographically noisy areas in resulting DEMs. Fine depth detail may also be lacking due to limited precision and quantisation of the pixel matching process. Shape from shading (SFS), a two dimensional version of photoclinometry, utilizes the properties of light reflecting off surfaces to build up localised slope maps, which can subsequently be combined to extract topography. This works especially well on homogeneous surfaces and can recover fine detail. However the cartographic accuracy can be affected by changes in brightness due to differences in surface material, albedo and light scattering properties, and also by the presence of shadows. We describe here experimental research for the Planetary Robotics Vision Data Exploitation EU FP7 project (PRoViDE) into using stereo generated depth maps in conjunction with SFS to recover both coarse and fine detail of planetary surface DEMs. Our Large Deformation Optimisation Shape From Shading (LDOSFS) algorithm uses image data, illumination, viewing geometry and camera parameters to produce a DEM. A stereo-derived depth map can be used as an initial seed if available. The software uses separate Bidirectional Reflectance Distribution Function (BRDF) and SFS modules for iterative processing and to make the code more portable for future development. Three BRDF models are currently implemented: Lambertian, Blinn-Phong, and Oren-Nayar. A version of the Hapke reflectance function, which is more appropriate for planetary surfaces, is under development.
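    The simplest of the BRDF models mentioned, the Lambertian model, is what makes shading constrain slope: image brightness depends only on the cosine of the angle between the surface normal and the light direction. A minimal sketch (illustrative values, not the LDOSFS implementation):

```python
# Lambertian reflectance: I = albedo * max(0, n . s), with n the unit
# surface normal and s the unit light direction.
import numpy as np

def lambert(normal, light, albedo=1.0):
    n = normal / np.linalg.norm(normal)
    s = light / np.linalg.norm(light)
    return albedo * max(0.0, float(n @ s))

light = np.array([0.0, 0.0, 1.0])           # overhead illumination
flat = lambert(np.array([0.0, 0.0, 1.0]), light)
tilted = lambert(np.array([0.0, np.sin(np.pi / 3), np.cos(np.pi / 3)]), light)
print(round(flat, 3), round(tilted, 3))
```

    Inverting this relation, brightness to local tilt, is the photoclinometry step; the ambiguities the abstract lists (albedo changes, scattering differences, shadows) correspond to the model's free parameters being unknown in practice.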

  1. Testing the horizontal-vertical stereo anisotropy with the critical-band masking paradigm.

    PubMed

    Serrano-Pedraza, Ignacio; Brash, Claire; Read, Jenny C A

    2013-09-26

    Stereo vision has a well-known anisotropy: At low frequencies, horizontally oriented sinusoidal depth corrugations are easier to detect than vertically oriented corrugations (both defined by horizontal disparities). Previously, Serrano-Pedraza and Read (2010) suggested that this stereo anisotropy may arise because the stereo system uses multiple spatial-frequency disparity channels for detecting horizontally oriented modulations but only one for vertically oriented modulations. Here, we tested this hypothesis using the critical-band masking paradigm. In the first experiment, we measured disparity thresholds for horizontal and vertical sinusoids near the peak of the disparity sensitivity function (0.4 cycles/°), in the presence of either broadband or notched noise. We fitted the power-masking model to our results assuming a channel centered on 0.4 cycles/°. The estimated channel bandwidths were 2.95 octaves for horizontal and 2.62 octaves for vertical corrugations. In our second experiment, we measured disparity thresholds for horizontal and vertical sinusoids of 0.1 cycles/° in the presence of band-pass noise centered on 0.4 cycles/° with a bandwidth of 0.5 octaves. This mask had only a small effect on the disparity thresholds, for either horizontal or vertical corrugations. We simulated the detection thresholds using the power-masking model with the parameters obtained in the first experiment, assuming either single-channel or multiple-channel detection. The multiple-channel model predicted the thresholds much better for both horizontal and vertical corrugations. We conclude that the human stereo system must contain multiple independent disparity channels for detecting horizontally oriented and vertically oriented depth modulations.
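    The critical-band logic can be sketched numerically (this is an illustration, not the authors' power-masking model): the masking a channel suffers scales with how much noise power falls inside its passband, so a notch around the channel center admits far less power than broadband noise. The 0.4 c/° center and 2.95-octave bandwidth come from the text; the rectangular-channel approximation and noise limits are assumptions.

```python
import numpy as np

def channel_passband(center, bandwidth_oct):
    """Spatial-frequency limits (c/deg) of a channel with a given
    center frequency and full bandwidth in octaves."""
    half = bandwidth_oct / 2.0
    return center * 2.0 ** (-half), center * 2.0 ** half

def noise_power_in_channel(noise_lo, noise_hi, center, bandwidth_oct,
                           density=1.0):
    """Power of flat-spectrum noise falling inside the channel passband
    (rectangular channel approximation)."""
    ch_lo, ch_hi = channel_passband(center, bandwidth_oct)
    overlap = max(0.0, min(noise_hi, ch_hi) - max(noise_lo, ch_lo))
    return density * overlap

center, bw = 0.4, 2.95  # c/deg and octaves (horizontal estimate from the text)

# Broadband noise spanning an assumed 0.05-2.0 c/deg
broadband = noise_power_in_channel(0.05, 2.0, center, bw)
# Notched noise sparing one octave on each side of the channel center
notched = (noise_power_in_channel(0.05, center / 2, center, bw)
           + noise_power_in_channel(center * 2, 2.0, center, bw))
```

    Under these assumptions the notch removes most of the in-band masking power, which is why thresholds in notched noise constrain the channel bandwidth.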

  2. Stereo imaging velocimetry for microgravity applications

    NASA Technical Reports Server (NTRS)

    Miller, Brian B.; Meyer, Maryjo B.; Bethea, Mark D.

    1994-01-01

    Stereo imaging velocimetry is the quantitative measurement of three-dimensional flow fields using two sensors recording data from different vantage points. The system described in this paper, under development at NASA Lewis Research Center in Cleveland, Ohio, uses two CCD cameras placed perpendicular to one another, laser disk recorders, an image processing substation, and a 586-based computer to record data at standard NTSC video rates (30 Hertz) and reduce it offline. The flow itself is marked with seed particles, hence the fluid must be transparent. The velocimeter tracks the motion of the particles, and from these we deduce a multipoint (500 or more), quantitative map of the flow. Conceptually, the software portion of the velocimeter can be divided into distinct modules. These modules are: camera calibration, particle finding (image segmentation) and centroid location, particle overlap decomposition, particle tracking, and stereo matching. We discuss our approach to each module, and give our currently achieved speed and accuracy for each where available.
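    The particle finding and centroid-location module described above reduces, at its core, to sub-pixel centroid estimation on thresholded spots. A minimal sketch (function names hypothetical, not the NASA Lewis software):

```python
import numpy as np

def particle_centroid(image, threshold):
    """Intensity-weighted (sub-pixel) centroid of pixels above a
    threshold: the basic operation behind particle finding in
    stereo imaging velocimetry."""
    mask = image > threshold
    w = np.where(mask, image, 0.0)
    total = w.sum()
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    return (ys * w).sum() / total, (xs * w).sum() / total

# Synthetic seed particle: a small Gaussian spot centered at (12.3, 7.8)
ys, xs = np.mgrid[0:24, 0:16]
spot = np.exp(-((ys - 12.3) ** 2 + (xs - 7.8) ** 2) / 4.0)
cy, cx = particle_centroid(spot, threshold=0.05)
```

    The intensity weighting is what recovers particle positions to better than a pixel, which in turn limits the velocity error after tracking.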

  3. The analogy between stereo depth and brightness.

    PubMed

    Brookes, A; Stevens, K A

    1989-01-01

    Apparent depth in stereograms exhibits various simultaneous-contrast and induction effects analogous to those reported in the luminance domain. This behavior suggests that stereo depth, like brightness, is reconstructed, i.e., recovered from higher-order spatial derivatives or differences of the original signal. The extent to which depth is analogous to brightness is examined. There are similarities in terms of contrast effects but dissimilarities in terms of the lateral inhibition effects traditionally attributed to underlying spatial-differentiation operators.

  4. Intelligent robots and computer vision; Proceedings of the Meeting, Cambridge, MA, Nov. 2-6, 1987

    SciTech Connect

    Casasent, D.P.; Hall, E.L.

    1988-01-01

    Topics discussed include pattern recognition, image processing, sensors, model-based object recognition, image understanding, artificial neural systems, and three-dimensional object recognition. Consideration is also given to stereo image processing, optical flow, intelligent control, vision-aided automated control systems, architectures and software, and industrial applications.

  5. Opportunity's Surroundings on Sol 1687 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11739 (figures removed for brevity; see original site).

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, 360-degree view of the rover's surroundings on the 1,687th Martian day, or sol, of its surface mission (Oct. 22, 2008). The view appears three-dimensional when viewed through red-blue glasses.

    Opportunity had driven 133 meters (436 feet) that sol, crossing sand ripples up to about 10 centimeters (4 inches) tall. The tracks visible in the foreground are in the east-northeast direction.

    Opportunity's position on Sol 1687 was about 300 meters southwest of Victoria Crater. The rover was beginning a long trek toward a much larger crater, Endeavour, about 12 kilometers (7 miles) to the southeast.

    This panorama combines right-eye and left-eye views presented as cylindrical-perspective projections with geometric seam correction.

  6. Opportunity's Surroundings on Sol 1798 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11850 (figures removed for brevity; see original site).

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo 180-degree view of the rover's surroundings during the 1,798th Martian day, or sol, of Opportunity's surface mission (Feb. 13, 2009). North is on top.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 111 meters (364 feet) southward on the preceding sol. Tracks from that drive recede northward in this view. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  7. Stereo cameras on the International Space Station

    NASA Astrophysics Data System (ADS)

    Sabbatini, Massimo; Visentin, Gianfranco; Collon, Max; Ranebo, Hans; Sunderland, David; Fortezza, Raimondo

    2007-02-01

    Three-dimensional media is a unique and efficient means to virtually visit or observe objects that cannot easily be reached otherwise, like the International Space Station. The advent of auto-stereoscopic displays and stereo projection systems is making stereo media available to audiences larger than the traditional scientific and design-engineering communities. It is foreseen that a major demand for 3D content will come from the entertainment area. Taking advantage of the six-month stay of fellow European astronaut Thomas Reiter aboard the International Space Station, the Erasmus Centre uploaded to the ISS a newly developed, fully digital stereo camera, the Erasmus Recording Binocular (ERB). Testing the camera and its human interfaces in weightlessness, as well as accurately mapping the interior of the ISS, are the main objectives of the experiment, which had just been completed at the time of writing. The intent of this paper is to share with readers the design challenges tackled in the development and operation of the ERB camera and to highlight some of the future plans the Erasmus Centre team has in the pipeline.

  8. Spirit Beside 'Home Plate,' Sol 1809 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11803 (figures removed for brevity; see original site).

    NASA Mars Exploration Rover Spirit used its navigation camera to take the images assembled into this stereo, 120-degree view southward after a short drive during the 1,809th Martian day, or sol, of Spirit's mission on the surface of Mars (February 3, 2009).

    By combining images from the left-eye and right-eye sides of the navigation camera, the view appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    Spirit had driven about 2.6 meters (8.5 feet) that sol, continuing a clockwise route around a low plateau called 'Home Plate.' In this image, the rocks visible above the rover's solar panels are on the slope at the northern edge of Home Plate.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  9. Activating a Vision

    ERIC Educational Resources Information Center

    Wilson, Carroll L.

    1973-01-01

    International Center of Insect Physiology and Ecology (ICIPE) is an organized effort to study physiology, endocrinology, genetics, and related processes of five insects. Location of the center in Kenya encourages developing countries to conduct research for the control of harmful insects. (PS)

  10. Vision Therapy News Backgrounder.

    ERIC Educational Resources Information Center

    American Optometric Association, St. Louis, MO.

    The booklet provides an overview on vision therapy to aid writers, editors, and broadcasters help parents, teachers, older adults, and all consumers learn more about vision therapy. Following a description of vision therapy or vision training, information is provided on how and why vision therapy works. Additional sections address providers of…

  11. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural-network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them together via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both the robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and the problems encountered.
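    The mapping from a pair of 2-D images into 3-D coordinates that the calibration discussion refers to is, at its core, triangulation from two calibrated cameras. A minimal linear (DLT) sketch with hypothetical camera parameters, not tied to the system described:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from its pixel
    coordinates x1, x2 in two cameras with 3x4 projection matrices
    P1, P2. Returns the point in the common world frame."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null vector of A is the point
    X = vt[-1]
    return X[:3] / X[3]              # de-homogenize

# Two hypothetical cameras: identical intrinsics, 0.1 m stereo baseline
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

X_true = np.array([0.2, -0.1, 2.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

    Errors in either camera's calibration propagate directly into the recovered point, which is why the abstract stresses that both subsystems must share the absolute frame to high precision.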

  12. Graphics for Stereo Visualization Theater for Supercomputing 1998

    NASA Technical Reports Server (NTRS)

    Antipuesto, Joel; Reid, Lisa (Technical Monitor)

    1998-01-01

    The Stereo Visualization Theater is a high-resolution graphics demonstration that provides a review of current research being performed at NASA. Using stereoscopic projection, multiple participants can explore scientific data in new ways. The pre-processed audio and video are played in real time from a workstation. A stereo graphics filter on the projector and passive polarized glasses worn by audience members create the stereo effect.

  13. The World Water Vision: From Developing a Vision to Action

    NASA Astrophysics Data System (ADS)

    Gangopadhyay, S.; Cosgrove, W.; Rijsberman, F.; Strzepek, K.; Strzepek, K.

    2001-05-01

    The World Water Vision exercise was initiated by the World Water Commission under the auspices of the World Water Council. The goal of the World Water Vision project was to develop a widely shared vision on the actions required to achieve a common set of water-related goals and the necessary commitment to carry out these actions. The Vision should be participatory in nature, including input from both developed and developing regions, with a special focus on the needs of the poor, women, youth, children and the environment. Three overall objectives were to: (i)raise awareness of water issues among both the general population and decision-makers so as to foster the necessary political will and leadership to tackle the problems seriously and systematically; (ii) develop a vision of water management for 2025 that is shared by water sector specialists as well as international, national and regional decision-makers in government, the private sector and civil society; and (iii) provide input to a Framework for Action to be elaborated by the Global Water Partnership, with steps to go from vision to action, including recommendations to funding agencies for investment priorities. This exercise was characterized by the principles of: (i) a participatory approach with extensive consultation; (ii) Innovative thinking; (iii) central analysis to assure integration and co-ordination; and (iv) emphasis on communication with groups outside the water sector. The primary activities included, developing global water scenarios that fed into regional consultations and sectoral consultations as water for food, water for people - water supply and sanitation, and water and environment. These consultations formulated the regional and sectoral visions that were synthesized to form the World Water Vision. The findings from this exercise were reported and debated at the Second World Water Forum and the Ministerial Conference held in The Hague, The Netherlands during April 2000. This paper

  14. From geospatial observations of ocean currents to causal predictors of spatio-economic activity using computer vision and machine learning

    NASA Astrophysics Data System (ADS)

    Popescu, Florin; Ayache, Stephane; Escalera, Sergio; Baró Solé, Xavier; Capponi, Cecile; Panciatici, Patrick; Guyon, Isabelle

    2016-04-01

    The big data transformation currently revolutionizing science and industry forges novel possibilities in multi-modal analysis scarcely imaginable only a decade ago. One of the important economic and industrial problems that stand to benefit from the recent expansion of data availability and computational prowess is the prediction of electricity demand and renewable energy generation. Both are correlates of human activity: spatiotemporal energy consumption patterns in society are a factor of both demand (weather dependent) and supply, which determine cost - a relation expected to strengthen along with increasing renewable energy dependence. One of the main drivers of European weather patterns is the activity of the Atlantic Ocean and in particular its dominant Northern Hemisphere current: the Gulf Stream. We choose this particular current as a test case in part due to larger amount of relevant data and scientific literature available for refinement of analysis techniques. This data richness is due not only to its economic importance but also to its size being clearly visible in radar and infrared satellite imagery, which makes it easier to detect using Computer Vision (CV). The power of CV techniques makes basic analysis thus developed scalable to other smaller and less known, but still influential, currents, which are not just curves on a map, but complex, evolving, moving branching trees in 3D projected onto a 2D image. We investigate means of extracting, from several image modalities (including recently available Copernicus radar and earlier Infrared satellites), a parameterized representation of the state of the Gulf Stream and its environment that is useful as feature space representation in a machine learning context, in this case with the EC's H2020-sponsored 'See.4C' project, in the context of which data scientists may find novel predictors of spatiotemporal energy flow. Although automated extractors of Gulf Stream position exist, they differ in methodology
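    As a toy illustration of the kind of CV extraction discussed (not the See.4C pipeline; the field and parameters are synthetic), a gradient-magnitude threshold can localize a sharp thermal front, such as the Gulf Stream's north wall, in a sea-surface-temperature image:

```python
import numpy as np

def front_mask(sst, frac=0.5):
    """Mask of strong-gradient pixels in a sea-surface-temperature
    field: a crude stand-in for locating a current's thermal front."""
    gy, gx = np.gradient(sst)
    mag = np.hypot(gx, gy)
    return mag > frac * mag.max()

# Synthetic SST: warm water south of a sharp east-west front at row 20
ys = np.arange(40)[:, None]
sst = 25.0 - 8.0 / (1.0 + np.exp(-(ys - 20.0)))  # logistic step, deg C
sst = np.repeat(sst, 60, axis=1)
mask = front_mask(sst)
```

    A parameterized front position (e.g., the masked pixels fitted with a curve per image column) is the sort of compact feature a machine-learning model could then consume.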

  15. Stereopsis and disparity vergence in monkeys with subnormal binocular vision.

    PubMed

    Harwerth, R S; Smith, E L; Crawford, M L; von Noorden, G K

    1997-02-01

    The surgical treatment for strabismus in infants generally results in microtropia or subnormal binocular vision. Although the clinical characteristics of these conditions are well established, there are important questions about the mechanisms of binocular vision in these patients that can best be investigated in an appropriate animal model. In the present psychophysical investigations, spatial frequency response functions for disparity-induced fusional vergence and for local stereopsis were studied in macaque monkeys, who demonstrated many of the major visual characteristics of patients whose eyes were surgically aligned during infancy. In six rhesus monkeys, unilateral esotropia was surgically induced at various ages (30-184 days of age). However, over the next 12 months, all of the monkeys recovered normal eye alignment. Behavioral measurements at 4-6 years of age showed that the monkeys' prism-induced fusional vergence responses were indistinguishable from those of control monkeys or humans with normal binocular vision. Investigations of stereo-depth discrimination demonstrated that each of the experimental monkeys also had stereoscopic vision, but their stereoacuities varied from being essentially normal to severely stereo-deficient. The degree of stereo-deficiency was not related to the age at which surgical esotropia was induced, or to the presence or absence of amblyopia, and was not dependent on the spatial frequency of the test stimulus. Altogether, these experiments demonstrate that a temporary, early esotropia can affect the binocular disparity responses of motor and sensory components of binocular vision differently, probably because of different sensitive periods of development for the two components.

  16. PHOTOCOPY OF EARLY STEREO VIEW OF CARPENTERS' HALL. Date and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PHOTOCOPY OF EARLY STEREO VIEW OF CARPENTERS' HALL. Date and photographer unknown. Original in Carpenters' Hall - Carpenters' Company Hall, 320 Chestnut Street & Carpenters' Court, Philadelphia, Philadelphia County, PA

  17. Human gene therapy for RPE65 isomerase deficiency activates the retinoid cycle of vision but with slow rod kinetics.

    PubMed

    Cideciyan, Artur V; Aleman, Tomas S; Boye, Sanford L; Schwartz, Sharon B; Kaushal, Shalesh; Roman, Alejandro J; Pang, Ji-Jing; Sumaroka, Alexander; Windsor, Elizabeth A M; Wilson, James M; Flotte, Terence R; Fishman, Gerald A; Heon, Elise; Stone, Edwin M; Byrne, Barry J; Jacobson, Samuel G; Hauswirth, William W

    2008-09-30

    The RPE65 gene encodes the isomerase of the retinoid cycle, the enzymatic pathway that underlies mammalian vision. Mutations in RPE65 disrupt the retinoid cycle and cause a congenital human blindness known as Leber congenital amaurosis (LCA). We used adeno-associated virus-2-based RPE65 gene replacement therapy to treat three young adults with RPE65-LCA and measured their vision before and up to 90 days after the intervention. All three patients showed a statistically significant increase in visual sensitivity at 30 days after treatment localized to retinal areas that had received the vector. There were no changes in the effect between 30 and 90 days. Both cone- and rod-photoreceptor-based vision could be demonstrated in treated areas. For cones, there were increases of up to 1.7 log units (i.e., 50-fold); and for rods, there were gains of up to 4.8 log units (i.e., 63,000-fold). To assess what fraction of full vision potential was restored by gene therapy, we related the degree of light sensitivity to the level of remaining photoreceptors within the treatment area. We found that the intervention could overcome nearly all of the loss of light sensitivity resulting from the biochemical blockade. However, this reconstituted retinoid cycle was not completely normal. Resensitization kinetics of the newly treated rods were remarkably slow and required 8 h or more for the attainment of full sensitivity, compared with <1 h in normal eyes. Cone-sensitivity recovery time was rapid. These results demonstrate dramatic, albeit imperfect, recovery of rod- and cone-photoreceptor-based vision after RPE65 gene therapy.
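    The log-unit gains quoted in the abstract convert to linear fold changes as 10 raised to the gain; a quick arithmetic check:

```python
def fold_change(log_units):
    """Linear sensitivity gain corresponding to a gain in log10 units."""
    return 10.0 ** log_units

cone_gain = fold_change(1.7)  # about 50-fold
rod_gain = fold_change(4.8)   # about 63,000-fold
```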

  18. Healthy Living, Healthy Vision

    MedlinePlus


  19. Pregnancy and Your Vision

    MedlinePlus


  20. Web Based Interactive Anaglyph Stereo Visualization of 3D Model of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Han, J.

    2014-12-01

    The objectives of this study were to create an interactive online tool for generating and viewing anaglyph 3D stereo images in a Web browser via the Internet. To achieve this, we designed and developed a prototype system. Three-dimensional visualization is well known and has become popular in recent years as a way to understand a target object and the related physical phenomena. Geoscience data have a complex data model, which combines large extents with rich small-scale visual details, so real-time visualization of 3D geoscience data models on the Internet is a challenging task. In this paper, we present anaglyph 3D stereo images of geoscience data that can be viewed in any web browser that supports WebGL. We developed an anaglyph image viewing prototype system, and some representative results are displayed as anaglyph 3D stereo images generated in red-cyan colour from pairs of air-photo/digital elevation model and geological map/digital elevation model data, respectively. The best viewing is achieved with suitable 3D red-cyan glasses, although red-blue or red-green spectacles can also be used. The middle mouse wheel can be used to zoom the anaglyph image in and out in a Web browser. Anaglyph 3D stereo imagery is an important and easy way to understand underground geologic systems and active tectonic geomorphology. Integrated strata with fine three-dimensional topography and geologic map data can help to characterise mineral potential areas and active tectonic anomalies. To conclude, anaglyph 3D stereo images provide a simple and feasible method to improve the relief effect of geoscience data such as geomorphology and geology. We believe that with further development, the anaglyph 3D stereo imaging system could, as a complement to 3D geologic modeling, constitute a useful tool for better understanding of the underground geology and the active tectonic
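    The red-cyan anaglyph composition described here amounts to taking the red channel from the left-eye image and the green and blue channels from the right-eye image. A minimal sketch (not the prototype system's code; the images are placeholders):

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Red-cyan anaglyph: red channel from the left image, green and
    blue from the right, for viewing with the red lens on the left eye."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

# Hypothetical 2x2 RGB images (uint8) standing in for a stereo pair
left = np.full((2, 2, 3), 200, dtype=np.uint8)
right = np.full((2, 2, 3), 50, dtype=np.uint8)
ana = red_cyan_anaglyph(left, right)
```

    The colored filters then route each channel to the intended eye, producing the depth effect described above.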

  1. Binocular Vision

    PubMed Central

    Blake, Randolph; Wilson, Hugh

    2010-01-01

    This essay reviews major developments – empirical and theoretical – in the field of binocular vision during the last 25 years. We limit our survey primarily to work on human stereopsis, binocular rivalry and binocular contrast summation, with discussion where relevant of single-unit neurophysiology and human brain imaging. We identify several key controversies that have stimulated important work on these problems. In the case of stereopsis those controversies include position versus phase encoding of disparity, dependence of disparity limits on spatial scale, role of occlusion in binocular depth and surface perception, and motion in 3D. In the case of binocular rivalry, controversies include eye versus stimulus rivalry, role of “top-down” influences on rivalry dynamics, and the interaction of binocular rivalry and stereopsis. Concerning binocular contrast summation, the essay focuses on two representative models that highlight the evolving complexity in this field of study. PMID:20951722

  2. Robot Vision

    NASA Technical Reports Server (NTRS)

    Sutro, L. L.; Lerman, J. B.

    1973-01-01

    The operation of a system is described that is built both to model the vision of primate animals, including man, and to serve as a pre-prototype of a possible object recognition system. It was employed in a series of experiments to determine the practicability of matching left and right images of a scene to determine the range and form of objects. The experiments started with computer-generated random-dot stereograms as inputs and progressed through random-square stereograms to a real scene. The major problems were the elimination of spurious matches between the left and right views, and the interpretation of ambiguous regions on the left side of an object that can be viewed only by the left camera, and on the right side of an object that can be viewed only by the right camera.
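    Computer-generated random-dot stereograms of the kind used as inputs can be produced by shifting a central patch between the two eyes' images, as in the classic Julesz stimuli. A minimal sketch (sizes and disparity are arbitrary choices, not the experiment's parameters):

```python
import numpy as np

def random_dot_stereogram(h, w, disparity, seed=None):
    """Left/right random-dot pair in which a central square floats in
    depth: the right image shifts the square region by `disparity`
    pixels and refills the uncovered strip with fresh random dots."""
    rng = np.random.default_rng(seed)
    left = rng.integers(0, 2, size=(h, w))
    right = left.copy()
    top, bot = h // 4, 3 * h // 4
    lft, rgt = w // 4, 3 * w // 4
    right[top:bot, lft - disparity:rgt - disparity] = left[top:bot, lft:rgt]
    right[top:bot, rgt - disparity:rgt] = rng.integers(
        0, 2, size=(bot - top, disparity))
    return left, right

left, right = random_dot_stereogram(64, 64, disparity=4, seed=0)
```

    Because neither image alone contains any monocular shape cue, the square is visible only to a matcher that solves the left-right correspondence, which is exactly what made these stimuli a natural starting point for the experiments.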

  3. Artificial vision.

    PubMed

    Humayun, M S; de Juan, E

    1998-01-01

    Outer retinal degenerations such as retinitis pigmentosa (RP) and age-related macular degeneration (AMD) lead to blindness because of photoreceptor degeneration. To test whether controlled electrical stimulation of the remaining retinal neurons could provide form vision, we electrically stimulated the inner retinal surface with micro-electrodes inserted through the sclera/eye wall of 14 of these patients (12 RP and 2 AMD). This procedure was performed in the operating room under local anaesthesia and all responses were recorded via a video camera mounted on the surgical microscope. Electrical stimulation of the inner retinal surface elicited visual perception of a spot of light (phosphene) in all subjects. This perception was retinotopically correct in 13 of 14 patients. In a resolution test in a subject with no light perception, the patient could resolve phosphenes at 1.75 degrees centre-to-centre distance (i.e. visual acuity compatible with mobility; Snellen visual acuity of 4/200).

  4. Vision Screening

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Visi Screen OSS-C, marketed by Vision Research Corporation, incorporates image processing technology originally developed by Marshall Space Flight Center. Its advantage in eye screening is speed. Because it requires no response from a subject, it can be used to detect eye problems in very young children. An electronic flash from a 35 millimeter camera sends light into a child's eyes, which is reflected back to the camera lens. The photorefractor then analyzes the retinal reflexes generated and produces an image of the child's eyes, which enables a trained observer to identify any defects. The device is used by pediatricians, day care centers and civic organizations that concentrate on children with special needs.

  5. FM Stereo and AM Stereo: Government Standard-Setting vs. the Marketplace.

    ERIC Educational Resources Information Center

    Huff, W. A. Kelly

    The emergence of frequency modulation or FM radio signals, which arose from the desire to free broadcasting of static noise common to amplitude modulation or AM, has produced the controversial development of stereo broadcasting. The resulting enhancement of sound quality helped FM pass AM in audience shares in less than two decades. The basic…

  6. Solar Eclipse Video Captured by STEREO-B

    NASA Technical Reports Server (NTRS)

    2007-01-01

    No human has ever witnessed a solar eclipse quite like the one captured on this video. The NASA STEREO-B spacecraft, managed by the Goddard Space Flight Center, was about a million miles from Earth on February 25, 2007, when it photographed the Moon passing in front of the sun. The resulting movie looks like it came from an alien solar system. The fantastically-colored star is our own sun as STEREO sees it in four wavelengths of extreme ultraviolet light. The black disk is the Moon. When we observe a lunar transit from Earth, the Moon appears to be the same size as the sun, a coincidence that produces intoxicatingly beautiful solar eclipses. The silhouette STEREO-B saw, on the other hand, was only a fraction of the Sun. The Moon seems small because of the STEREO-B location. The spacecraft circles the sun in an Earth-like orbit, but it lags behind Earth by one million miles. This means STEREO-B is 4.4 times further from the Moon than we are, and so the Moon looks 4.4 times smaller. This version of the STEREO-B eclipse movie is a composite of data from the coronagraph and extreme ultraviolet imager of the spacecraft. STEREO-B has a sister ship named STEREO-A. Both are on a mission to study the sun. While STEREO-B lags behind Earth, STEREO-A orbits one million miles ahead ('B' for behind, 'A' for ahead). The gap is deliberate as it allows the two spacecraft to capture offset views of the sun. Researchers can then combine the images to produce 3D stereo movies of solar storms. The two spacecraft were launched in Oct. 2006 and reached their stations on either side of Earth in January 2007.
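    The 4.4-times-smaller appearance follows from angular diameter scaling (almost exactly) inversely with distance. A quick check, using the mean Earth-Moon distance and the 4.4 distance factor quoted in the caption (the mean-distance figure is an assumption, not from the caption):

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    """Apparent angular diameter of a sphere seen from a distance."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

moon_diam = 3474.8    # km
earth_moon = 384_400  # km, mean Earth-Moon distance

from_earth = angular_diameter_deg(moon_diam, earth_moon)        # ~0.52 deg
from_stereo_b = angular_diameter_deg(moon_diam, 4.4 * earth_moon)
```

    At these small angles the ratio of the two apparent diameters is 4.4 to within a fraction of a percent, matching the caption's figure.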

  7. Synchronized observations by using the STEREO and the largest ground-based decametre radio telescope

    NASA Astrophysics Data System (ADS)

    Konovalenko, A. A.; Stanislavsky, A. A.; Rucker, H. O.; Lecacheux, A.; Mann, G.; Bougeret, J.-L.; Kaiser, M. L.; Briand, C.; Zarka, P.; Abranin, E. P.; Dorovsky, V. V.; Koval, A. A.; Mel'nik, V. N.; Mukha, D. V.; Panchenko, M.

    2013-08-01

    We consider the approach to simultaneous (synchronous) solar observations of radio emission by using the STEREO-WAVES instruments (frequency range 0.125-16 MHz) and the largest ground-based low-frequency radio telescope. We illustrate it by the UTR-2 radio telescope implementation (10-30 MHz). The antenna system of the radio telescope is a T-shape-like array of broadband dipoles and is located near the village Grakovo in the Kharkiv region (Ukraine). The third observation point on the ground in addition to two space-based ones improves the space-mission performance capabilities for the determination of radio-emission source directivity. The observational results from the high sensitivity antenna UTR-2 are particularly useful for analysis of STEREO data in the condition of weak event appearances during solar activity minima. In order to improve the accuracy of flux density measurements, we also provide simultaneous observations with a large part of the UTR-2 radio telescope array and its single dipole close to the STEREO-WAVES antennas in sensitivity. This concept has been studied by comparing the STEREO data with ground-based records from 2007-2011 and shown to be effective. The capabilities will be useful in the implementation of new instruments (LOFAR, LWA, MWA, etc.) and during the future Solar Orbiter mission.

  8. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2016-01-01

    On-orbit small debris tracking and characterization is a technical gap in current National Space Situational Awareness, which is necessary to safeguard orbital assets and crew; micrometeoroid and orbital debris (MOD) poses a major damage risk to the ISS and Exploration vehicles. In 2015 this technology was added to the roadmap of NASA's Office of the Chief Technologist. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects against ground RADAR tracking and small-OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. Using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM direction), objects approaching or near the stereo camera pair can be differentiated from the stars moving upward in the background.

  9. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; Macleod, Todd; Gagliano, Larry

    2015-01-01

    On-orbit small debris tracking and characterization is a technical gap in current National Space Situational Awareness, which is necessary to safeguard orbital assets and crew; micrometeoroid and orbital debris (MOD) poses a major damage risk to the ISS and Exploration vehicles. In 2015 this technology was added to the roadmap of NASA's Office of the Chief Technologist. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects against ground RADAR tracking and small-OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. Using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM direction), objects approaching or near the stereo camera pair can be differentiated from the stars moving upward in the background.

  10. Small Orbital Stereo Tracking Camera Technology Development

    NASA Astrophysics Data System (ADS)

    Gagliano, L.; Bryan, T.; MacLeod, T.

    On-orbit small debris tracking and characterization is a technical gap in current National Space Situational Awareness, which is necessary to safeguard orbital assets and crew; micrometeoroid and orbital debris (MOD) poses a major damage risk to the ISS and Exploration vehicles. In 2015 this technology was added to the roadmap of NASA's Office of the Chief Technologist. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects against ground RADAR tracking and small-OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. Using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM direction), objects approaching or near the stereo camera pair can be differentiated from the stars moving upward in the background.

  11. STEREO Superior Solar Conjunction Mission Phase

    NASA Technical Reports Server (NTRS)

    Ossing, Daniel A.; Wilson, Daniel; Balon, Kevin; Hunt, Jack; Dudley, Owen; Chiu, George; Coulter, Timothy; Reese, Angel; Cox, Matthew; Srinivasan, Dipak; Denissen, Ronald; Quinn, David A.

    2017-01-01

    With its long duration and its high-gain antenna (HGA) feed thermal constraint, the NASA Solar TErrestrial RElations Observatory (STEREO) superior solar conjunction mission phase is quite unique in deep space operations. Originally designed for a two-year heliocentric-orbit mission primarily to study coronal mass ejection propagation, the twin STEREO observatories entered the solar conjunction mission phase, for which they were not designed, after 8 years of continuous science data collection. Nine months before entering conjunction, an unforeseen thermal constraint threatened to stop daily communications and science data collection for 15 months. With a 3.5-month-long communication blackout during superior solar conjunction, each observatory, without ground commands, would reset every 3 days, resulting in 35 system resets at an Earth range of 2 AU. As the observatories would be in close proximity for the first time in 8 years, a unique opportunity arose for cross-calibrating the same instruments on identical spacecraft. As each observatory had lost redundancy, and with only a limited-fidelity hardware simulator, how could the new observatory configuration be adequately and safely tested on each spacecraft? Without ground commands, how would a 3-axis-stabilized spacecraft safely manage ever-accumulating system momentum without using propellant for thrusters? Could science data still be collected for the duration of the solar conjunction mission phase? Would the observatories survive? In the second extended mission, operational resources were limited at best. This paper discusses the solutions to the STEREO superior solar conjunction operational challenges, science data impact, testing, mission operations, results, and lessons learned from implementation.

  12. Pancam Peek into 'Victoria Crater' (Stereo)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA08776

    [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA08776

    A drive of about 60 meters (about 200 feet) on the 943rd Martian day, or sol, of Opportunity's exploration of Mars' Meridiani Planum region (Sept. 18, 2006) brought the NASA rover to within about 50 meters (about 160 feet) of the rim of 'Victoria Crater.' This crater has been the mission's long-term destination for the past 21 Earth months. Opportunity reached a location from which the cameras on top of the rover's mast could begin to see into the interior of Victoria. This stereo anaglyph was made from frames taken on sol 943 by the panoramic camera (Pancam) to offer a three-dimensional view when seen through red-blue glasses. It shows the upper portion of interior crater walls facing toward Opportunity from up to about 850 meters (half a mile) away. The amount of vertical relief visible at the top of the interior walls from this angle is about 15 meters (about 50 feet). The exposures were taken through a Pancam filter selecting wavelengths centered on 750 nanometers.

    Victoria Crater is about five times wider than 'Endurance Crater,' which Opportunity spent six months examining in 2004, and about 40 times wider than 'Eagle Crater,' where Opportunity first landed. The great lure of Victoria is the expectation that a thick stack of geological layers will be exposed in the crater walls, potentially several times the thickness that was previously studied at Endurance and therefore, potentially preserving several times the historical record.

  13. 'Victoria' After Sol 950 Drive (Stereo)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA08778

    [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA08778 [figure removed for brevity, see original site] Cylindrical view for PIA08778

    A drive of about 30 meters (about 100 feet) on the 950th Martian day, or sol, of Opportunity's exploration of Mars' Meridiani Planum region (Sept. 25, 2006) brought the NASA rover to within about 20 meters (about 66 feet) of the rim of 'Victoria Crater.' From that position, the rover's navigation camera took the exposures combined into this stereo anaglyph, which appears three-dimensional when viewed through red-green glasses. The scalloped shape of the crater is visible on the left edge. Due to a small dune or ripple close to the nearest part of the rim, the scientists and engineers on the rover team planned on sol 951 to drive to the right of the ripple, but not quite all the way to the rim, then to proceed to the rim the following sol. The image is presented in cylindrical projection with geometric seam correction.

    Victoria Crater is about 800 meters (one-half mile) in diameter, about five times wider than 'Endurance Crater,' which Opportunity spent six months examining in 2004, and about 40 times wider than 'Eagle Crater,' where Opportunity first landed. The great lure of Victoria is the expectation that a thick stack of geological layers will be exposed in the crater walls, potentially several times the thickness that was previously studied at Endurance and therefore, potentially preserving several times the historical record.

  14. Early detection of glaucoma using fully automated disparity analysis of the optic nerve head (ONH) from stereo fundus images

    NASA Astrophysics Data System (ADS)

    Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.

    2006-03-01

    Early detection of structural damage to the optic nerve head (ONH) is critical in diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability. Even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours by computing accumulated disparities in the disc and cup regions from stereo fundus image pairs has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming camera geometry. High correlation among computer-generated and manually segmented cup to disc ratios in a longitudinal study involving 159 stereo fundus image pairs has already been demonstrated. However, clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective and quantitative method for detection of ONH structural damage for early detection of glaucoma.
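
    As a rough illustration of the quantities involved (not the paper's segmentation algorithm), the clinical target of the analysis, the cup-to-disc ratio, and a disparity-derived relative depth can be sketched as follows. The focal length and stereo base below are invented placeholder values, not fundus-camera parameters:

```python
# Illustrative sketch only: the clinical quantity the algorithm targets is the
# cup-to-disc ratio (CDR), and depths come from the standard stereo relation
# z = f * b / d. The camera numbers below are assumptions, not clinical values.
F_PX = 1200.0      # assumed fundus-camera focal length, pixels
BASELINE_MM = 3.0  # assumed stereo base of the fundus camera, millimetres

def relative_depth_mm(disparity_px):
    """Depth corresponding to a disparity, from triangulation."""
    return F_PX * BASELINE_MM / disparity_px

def cup_to_disc_ratio(cup_diameter_px, disc_diameter_px):
    """CDR above roughly 0.6, or strong asymmetry between eyes, is a common glaucoma flag."""
    return cup_diameter_px / disc_diameter_px

print(round(cup_to_disc_ratio(130, 300), 2))                         # 0.43
print(round(relative_depth_mm(90.0) - relative_depth_mm(100.0), 2))  # cup floor vs. rim depth step
```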

  15. Hyperspectral photometric stereo for a single capture.

    PubMed

    Ozawa, Keisuke; Sato, Imari; Yamaguchi, Masahiro

    2017-03-01

    We present a single-capture photometric stereo method using a hyperspectral camera. A spectrally and spatially designed illumination enables point-wise estimation of reflectance spectra and surface normals from a single hyperspectral image. The illumination works as a reflectance probe in wide spectral regions where reflectance spectra are measured, and the full spectra are estimated by interpolation; it also provides the shadings in the other spectral regions. The accuracy of the estimation is evaluated in simulation, and we demonstrate surface reconstruction of a real scene with an experimental setup.
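
    Photometric stereo, which the paper compresses into a single hyperspectral capture, classically recovers a surface normal and albedo from three images under known lights. A minimal single-pixel sketch of that classical Lambertian formulation (not the paper's spectral multiplexing) follows; the light directions and intensities are made-up test values:

```python
import math

# Classical Lambertian photometric stereo for one pixel: intensities
# I_i = albedo * (l_i . n) under three known unit light directions l_i.
# Solving L g = I gives g = albedo * n.

def solve3(A, b):
    """Solve a 3x3 linear system A g = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    g = []
    for j in range(3):
        m = [row[:] for row in A]
        for i in range(3):
            m[i][j] = b[i]
        g.append(det(m) / d)
    return g

s = 1 / math.sqrt(2)
L = [[0, 0, 1], [s, 0, s], [0, s, s]]  # unit light directions (rows)
I = [0.8, 0.8 * s, 0.8 * s]            # pixel intensities for albedo 0.8, normal +z

g = solve3(L, I)                       # g = albedo * normal
albedo = math.sqrt(sum(x * x for x in g))
normal = [x / albedo for x in g]
print(albedo, normal)                  # albedo ~0.8, normal ~ [0, 0, 1]
```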

  16. SAIL--stereo-array isotope labeling.

    PubMed

    Kainosho, Masatsune; Güntert, Peter

    2009-11-01

    Optimal stereospecific and regiospecific labeling of proteins with stable isotopes enhances the nuclear magnetic resonance (NMR) method for the determination of the three-dimensional protein structures in solution. Stereo-array isotope labeling (SAIL) offers sharpened lines, spectral simplification without loss of information and the ability to rapidly collect and automatically evaluate the structural restraints required to solve a high-quality solution structure for proteins up to twice as large as before. This review gives an overview of stable isotope labeling methods for NMR spectroscopy with proteins and provides an in-depth treatment of the SAIL technology.

  17. The Stereo-Electroencephalography: The Epileptogenic Zone.

    PubMed

    Gonzalez-Martinez, Jorge A

    2016-12-01

    The stereo-electroencephalography (SEEG) methodology and technique was developed almost 60 years ago in Europe and it has proven its efficacy and safety over the last 55 years. The main advantage of the SEEG method is the possibility to study the epileptogenic neuronal network in its dynamic and tri-dimensional aspect, with an optimal time and space correlation with the clinical semiology. In this manuscript, the technical and methodological aspects of the SEEG will be discussed focusing on the planning of SEEG implantations, technical nuances, conceptualization of the epileptogenic zone, and the different methods of SEEG-guided surgical resections and ablations.

  18. Developing stereo image based robot control system

    SciTech Connect

    Suprijadi; Pambudi, I. R.; Woran, M.; Naa, C. F.; Srigutomo, W.

    2015-04-16

    Image processing is applied in many fields and for many purposes. In the last decade, image-based systems have grown rapidly with increasing hardware and microprocessor performance, and many fields of science and technology use these methods, especially medicine and instrumentation. Stereovision techniques that produce 3-dimensional images or movies are of great interest, but they have few applications in control systems. A stereo image pair carries pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The results show that the robot moves automatically based on stereovision captures.
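
    The disparity information mentioned above converts to depth through the standard triangulation relation z = f·b/d. A toy sketch of a depth-based stop/go rule for a robot follows; the focal length, baseline, and stop threshold are assumed values, not the authors' system parameters:

```python
# Toy depth-from-disparity control rule. The camera numbers are assumptions,
# not the paper's: f in pixels, baseline in metres, so z = f * b / d is metres.
F_PX = 700.0       # assumed focal length, pixels
BASELINE_M = 0.12  # assumed camera separation, metres

def depth_m(disparity_px):
    """Depth of a matched point from its stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return F_PX * BASELINE_M / disparity_px

def steer(disparity_px, stop_distance_m=0.5):
    """Minimal control rule: stop when the matched obstacle is too close."""
    return "stop" if depth_m(disparity_px) < stop_distance_m else "forward"

print(depth_m(42.0))  # 700 * 0.12 / 42 = 2.0 m
print(steer(200.0))   # 0.42 m away -> stop
```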

  19. Venus surface roughness and Magellan stereo data

    NASA Technical Reports Server (NTRS)

    Maurice, Kelly E.; Leberl, Franz W.; Norikane, L.; Hensley, Scott

    1994-01-01

    Presented are the results of studies to develop tools for the analysis of Venus' surface shape and roughness, with the actual work focused on Maxwell Montes. The analyses employ data acquired by NASA's Magellan spacecraft and are primarily concerned with deriving measurements of the Venusian surface using Magellan stereo SAR. Roughness was considered by means of a theoretical analysis based on digital elevation models (DEMs), on single Magellan radar images combined with radiometer data, and on multiple overlapping Magellan radar images from cycles 1, 2, and 3, again combined with collateral radiometer data.

  20. Developing stereo image based robot control system

    NASA Astrophysics Data System (ADS)

    Suprijadi; Pambudi, I. R.; Woran, M.; Naa, C. F.; Srigutomo, W.

    2015-04-01

    Image processing is applied in many fields and for many purposes. In the last decade, image-based systems have grown rapidly with increasing hardware and microprocessor performance, and many fields of science and technology use these methods, especially medicine and instrumentation. Stereovision techniques that produce 3-dimensional images or movies are of great interest, but they have few applications in control systems. A stereo image pair carries pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The results show that the robot moves automatically based on stereovision captures.

  1. Dramatic Improvements to Feature Based Stereo

    NASA Technical Reports Server (NTRS)

    Smelyansky, V. N.; Morris, R. D.; Kuehnel, F. O.; Maluf, D. A.; Cheeseman, P.

    2004-01-01

    The camera registration extracted from feature-based stereo is usually considered sufficient to accurately localize the 3D points. However, for natural scenes, feature localization is not as precise as in man-made environments, which results in small camera registration errors. We show that even very small registration errors produce large errors in dense surface reconstruction. We describe a method for registering entire images to the inaccurate surface model, which yields small but crucially important improvements to the camera parameters. The new registration gives dramatically better dense surface reconstruction.

  2. SRTM Stereo Pair: Fiji Islands

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The Sovereign Democratic Republic of the Fiji Islands, commonly known as Fiji, is an independent nation consisting of some 332 islands surrounding the Koro Sea in the South Pacific Ocean. This topographic image shows Viti Levu, the largest island in the group. With an area of 10,429 square kilometers (about 4000 square miles), it comprises more than half the area of the Fiji Islands. Suva, the capital city, lies on the southeast shore. The Nakauvadra, the rugged mountain range running from north to south, has several peaks rising above 900 meters (about 3000 feet). Mount Tomanivi, in the upper center, is the highest peak at 1324 meters (4341 feet). The distinct circular feature on the north shore is the Tavua Caldera, the remnant of a large shield volcano that was active about 4 million years ago. Gold has been mined on the margin of the caldera since the 1930s. The Nadrau plateau is the low relief highland in the center of the mountain range. The coastal plains in the west, northwest and southeast account for only 15 percent of Viti Levu's area but are the main centers of agriculture and settlement.

    This stereoscopic view was generated using preliminary topographic data from the Shuttle Radar Topography Mission. A computer-generated artificial light source illuminates the elevation data from the top (north) to produce a pattern of light and shadows. Slopes facing the light appear bright, while those facing away are shaded. Also, colors show the elevation as measured by SRTM. Colors range from green at the lowest elevations to pink at the highest elevations. This image contains about 1300 meters (4300 feet) of total relief. The stereoscopic effect was created by first draping the shading and colors back over the topographic data and then generating two differing perspectives, one for each eye. The 3-D perception is achieved by viewing the left image with the right eye and the right image with the left eye (cross-eyed viewing), or by downloading and printing the images.

  3. Analysis of Performance of Stereoscopic-Vision Software

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert

    2007-01-01

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
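
    The link between disparity error and down-range error follows from first-order error propagation through the triangulation relation R = f·B/d, which gives dR = R²·σ_d/(f·B): down-range error grows with the square of range. A sketch using the 0.32-pixel disparity error reported above, with an assumed focal length and baseline (the document does not state the tested system's values):

```python
# First-order propagation of disparity error into down-range error for
# triangulation-based stereo: R = f*B/d implies dR = R^2 * sigma_d / (f * B).
# sigma_d is the value reported in the analysis; f and B are assumptions.
SIGMA_D_PX = 0.32  # reported disparity error standard deviation, pixels
F_PX = 1000.0      # assumed focal length, pixels
BASELINE_M = 0.3   # assumed stereo baseline, metres

def range_sigma_m(range_m):
    """Down-range error (1-sigma) at a given range, quadratic in range."""
    return range_m ** 2 * SIGMA_D_PX / (F_PX * BASELINE_M)

for r in (1.0, 5.0, 10.0):
    print(r, round(range_sigma_m(r), 4))  # error grows ~100x from 1 m to 10 m
```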

  4. Unexpected spatial intensity distributions and onset timing of solar electron events observed by closely spaced STEREO spacecraft

    NASA Astrophysics Data System (ADS)

    Klassen, A.; Dresing, N.; Gómez-Herrero, R.; Heber, B.; Müller-Mellin, R.

    2016-09-01

    We present multi-spacecraft observations of four solar electron events using measurements from the Solar Electron Proton Telescope (SEPT) and the Electron Proton Helium INstrument (EPHIN) on board the STEREO and SOHO spacecraft, respectively, occurring between 11 October 2013 and 1 August 2014, during the approaching superior conjunction period of the two STEREO spacecraft. At this time the longitudinal separation angle between STEREO-A (STA) and STEREO-B (STB) was less than 72°. The parent particle sources (flares) of the four investigated events were situated close to, in between, or to the west of the STEREOs' magnetic footpoints. The STEREO measurements revealed a strong difference in electron peak intensities (factor ≤12), showing unexpected intensity distributions at 1 AU, although the two spacecraft had nominally nearly the same angular magnetic footpoint separation from the flaring active region (AR), or their magnetic footpoints were both situated eastwards of the parent particle source. Furthermore, the events detected by the two STEREO spacecraft imply strongly unexpected onset timing with respect to each other: the spacecraft magnetically best connected to the flare detected a later arrival of electrons than the other one. This leads us to suggest the concept of a rippled peak intensity distribution at 1 AU formed by narrow peaks (fingers) superposed on a quasi-uniform Gaussian distribution. Additionally, two of the four investigated solar energetic particle (SEP) events show a so-called circumsolar distribution, and their characteristics make it plausible to suggest a two-component particle injection scenario forming an unusual, non-uniform intensity distribution at 1 AU.

  5. Opportunity's Surroundings After Sol 1820 Drive (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11841 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11841

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009).

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  6. Opportunity's Surroundings on Sol 1818 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11846 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11846

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,818th Martian day, or sol, of Opportunity's surface mission (March 5, 2009). South is at the center; north at both ends.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 80.3 meters (263 feet) southward earlier on that sol. Tracks from the drive recede northward in this view.

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  7. Deep 'Stone Soup' Trenching by Phoenix (Stereo)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Digging by NASA's Phoenix Mars Lander on Aug. 23, 2008, during the 88th sol (Martian day) since landing, reached a depth about three times greater than in any trench Phoenix had previously excavated. The deep trench, informally called 'Stone Soup,' is at the borderline between two of the polygon-shaped hummocks that characterize the arctic plain where Phoenix landed.

    Stone Soup is in the center foreground of this stereo view, which appears three dimensional when seen through red-blue glasses. The view combines left-eye and right-eye images taken by the lander's Surface Stereo Imager on Sol 88 after the day's digging. The trench is about 25 centimeters (10 inches) wide and about 18 centimeters (7 inches) deep.

    When digging trenches near polygon centers, Phoenix has hit a layer of icy soil, as hard as concrete, about 5 centimeters or 2 inches beneath the ground surface. In the Stone Soup trench at a polygon margin, the digging has not yet hit an icy layer like that.

    Stone Soup is toward the left, or west, end of the robotic arm's work area on the north side of the lander.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  8. Infrared stereo camera for human machine interface

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Vaden, Justin; Chenault, David

    2012-06-01

    Improved situational awareness results not only from improved imaging hardware but also from considering the operator and human factors. Situational awareness with IR imaging systems frequently depends on the available contrast, and a significant improvement in effective contrast for the operator can result when depth perception is added to the display of IR scenes. Depth perception through flat-panel 3D displays is now practical given the number of 3D displays entering the consumer market. Such displays require appropriate, human-friendly stereo IR video input to be effective in the dynamic military environment. We report on a stereo IR camera that has been developed for integration onto an unmanned ground vehicle (UGV). The camera has an auto-convergence capability that significantly reduces ill effects due to image doubling, minimizes focus-convergence mismatch, and eliminates the need for the operator to manually adjust camera properties. We discuss the size, weight, and power requirements, integration onto the robot platform, and stand-alone operation.
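
    Auto-convergence in a toed-in stereo rig amounts to rotating each camera inward so the object of interest projects to zero disparity, which is what suppresses image doubling on the display. A minimal sketch of the toe-in angle computation, with an assumed baseline (not the actual camera's geometry):

```python
import math

# Auto-convergence sketch for a toed-in stereo rig: each camera rotates inward
# by atan((B/2) / Z) so an object at distance Z lands at zero disparity.
# The baseline below is an assumption, not the UGV camera's actual spacing.
BASELINE_M = 0.065  # assumed camera separation, metres

def toe_in_deg(distance_m):
    """Half-angle each camera rotates inward to converge at a given distance."""
    return math.degrees(math.atan((BASELINE_M / 2) / distance_m))

print(round(toe_in_deg(2.0), 3))  # ~0.93 degrees per camera for a target at 2 m
```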

  9. Stereo Image of Mt. Usu Volcano

    NASA Technical Reports Server (NTRS)

    2002-01-01

    On April 3, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite captured this anaglyph stereo image of the erupting Mt. Usu volcano in Hokkaido, Japan. On Friday, March 31, more than 15,000 people were evacuated by helicopter, truck, and boat from the foot of Usu, which began erupting from its northwest flank, shooting debris and plumes of smoke streaked with blue lightning thousands of feet into the air. Although no lava gushed from the mountain, rocks and ash continued to fall after the eruption, and the region was shaken by thousands of tremors beforehand. People said they could taste grit from the ash, which was spewed as high as 2,700 meters (8,850 ft) into the sky and fell to coat surrounding towns. A 3-D view can be obtained by looking through stereo glasses, with the blue film over your left eye and the red film over your right eye. North is on the right-hand side. For more information, see the 'When Rivers of Rock Flow' ASTER web page. Image courtesy of MITI, ERSDAC, JAROS, and the U.S./Japan ASTER Science Team.

  10. Characteristics of stereo reproduction with parametric loudspeakers

    NASA Astrophysics Data System (ADS)

    Aoki, Shigeaki; Toba, Masayoshi; Tsujita, Norihisa

    2012-05-01

    A parametric loudspeaker utilizes the nonlinearity of the medium and is known as a super-directive loudspeaker; it is one of the prominent applications of nonlinear ultrasonics. So far, applications have been limited to monaural public-address sound systems in museums, stations, streets, etc. In this paper, we discuss the characteristics of stereo reproduction with two parametric loudspeakers by comparing them with those of two ordinary dynamic loudspeakers. In subjective tests, three typical listening positions were selected to investigate the possibility of correct sound localization over a wide listening area. The binaural cue was ILD (interaural level difference) or ITD (interaural time delay). The parametric loudspeaker was an equilateral hexagon with inner and outer diameters of 99 and 112 mm, respectively. Signals were 500 Hz, 1 kHz, 2 kHz, and 4 kHz pure tones and pink noise. Three young males listened to the test signals 10 times in each listening condition. The results showed that listeners at the three typical positions perceived correct sound localization for all signals with the parametric loudspeakers, almost as with the ordinary dynamic loudspeakers, except for sinusoidal waves with ITD. We conclude that the parametric loudspeaker can avoid the conflict between the binaural cues ILD and ITD that occurs in stereo reproduction with ordinary dynamic loudspeakers, because its super directivity suppresses the crosstalk components.
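
    The ITD cue manipulated in the subjective tests can be approximated with Woodworth's spherical-head formula, ITD = (a/c)(θ + sin θ). A small sketch, with an assumed head radius and speed of sound (not values from the paper):

```python
import math

# Woodworth spherical-head approximation of interaural time difference (ITD).
# Head radius and speed of sound are assumed textbook values, not the paper's.
HEAD_RADIUS_M = 0.0875
SPEED_OF_SOUND = 343.0  # m/s

def itd_us(azimuth_deg):
    """ITD in microseconds for a source at a given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta)) * 1e6

print(round(itd_us(90)))  # ~656 us, near the commonly cited ~660 us maximum
```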

  11. Feasibility of remote evaporation and precipitation estimates. [by stereo images

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.

    1974-01-01

    Remote sensing by means of stereo images obtained from airborne cameras and scanners provides the potential to monitor the dynamics of pollutant mixing over large areas. Moreover, stereo technology may permit monitoring of pollutant concentration and mixing with sufficient detail to ascertain the structure of a polluted air mass. Consequently, stereo remote systems can be employed to supply the data needed to set adequate regional standards on air quality. A method of remote sensing using stereo images is described. Preliminary results concerning the planar extent of a plume, based on comparison with ground measurements by an alternate method (e.g., the remote hot-wire anemometer technique), support the feasibility of using stereo remote sensing systems.

  12. Compiling a STEREO SEP event list: 2007-2011

    NASA Astrophysics Data System (ADS)

    Papaioannou, Athanasios; Malandraki, Olga E.; Heber, Bernd; Dresing, Nina; Klein, Karl-Ludwig; Tsiropoula, Georgia; Gomez-Herrero, Raoul; Mewaldt, Richard A.; Vainio, Rami

    2013-04-01

    The STEREO (Solar TErrestrial RElations Observatory) mission employs two nearly identical space-based observatories - one ahead of Earth in its orbit (STEREO-A: STA), the other trailing behind (STEREO-B: STB) - aiming to provide the first-ever stereoscopic measurements of the Sun. STEREO recordings provide an unprecedented opportunity to identify the evolution of Solar Energetic Particles (SEPs) at different observing points in the heliosphere, which is expected to provide new insight into the physics of solar particle genesis, propagation and acceleration, as well as into the properties of the interplanetary magnetic field that control these acceleration and propagation processes. In this work, two instruments onboard STEREO have been used to identify all SEP events observed within the rising phase of solar cycle 24, from 2007 to 2011: the Low Energy Telescope (LET) and the Solar Electron Proton Telescope (SEPT). A scan over STEREO/LET protons within the energy range 6-10 MeV has been performed for each of the two STEREO spacecraft. We have tracked all enhancements observed above the background level of this particular channel and cross-checked them against available lists of STEREO ICMEs, SIRs and shocks, as well as against events reported in the literature. Furthermore, a parallel scan of STEREO/SEPT electrons in the energy range 55-85 keV has been performed to pinpoint the presence (or absence) of an electron event for each of the aforementioned proton events included in our lists. We provide the onset of all events for both protons and electrons, a time-shifting analysis for near-relativistic electrons which leads to the inferred solar release time, and the relevant solar associations, from radio spectrographs to GOES soft X-rays and coronal mass ejections spotted by both SOHO/LASCO and the STEREO coronagraphs.
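
    The scan for enhancements above a channel's background level can be sketched as a running mean-plus-n-sigma threshold test with a persistence requirement; the window length, sigma multiplier, and persistence below are illustrative assumptions, not the authors' selection criteria:

```python
import numpy as np

def find_onsets(flux, bg_window=24, n_sigma=3.0, min_bins=3):
    """Return indices where flux first exceeds the running background
    (mean + n_sigma * std over the preceding bg_window bins) and stays
    above it for at least min_bins consecutive bins."""
    onsets = []
    i = bg_window
    while i < len(flux) - min_bins:
        bg = flux[i - bg_window:i]
        thresh = bg.mean() + n_sigma * bg.std()
        if np.all(flux[i:i + min_bins] > thresh):
            onsets.append(i)
            # skip past the rest of this enhancement before rescanning
            while i < len(flux) and flux[i] > thresh:
                i += 1
        else:
            i += 1
    return onsets
```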

  13. CAD-model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Shapiro, Linda G.

    1988-01-01

    A pose acquisition system operating in space must be able to perform well in a variety of different applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system being designed and implemented must convert CAD models to vision models, predict visible features from a given viewpoint using the vision models, construct view classes representing views of the objects, and use the view-class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.

  14. Axial-Stereo 3-D Optical Metrology for Inner Profile of Pipes Using a Scanning Laser Endoscope.

    PubMed

    Gong, Yuanzheng; Johnston, Richard S; Melville, C David; Seibel, Eric J

    With the rapid progress in the development of optoelectronic components and computational power, 3D optical metrology has become increasingly popular in manufacturing and quality control due to its flexibility and high speed. However, most optical metrology methods are limited to external surfaces. This paper proposes a new approach to measure tiny internal 3D surfaces with a scanning fiber endoscope and an axial-stereo vision algorithm. A dense, accurate point cloud of internally machined threads was generated and compared with its corresponding X-ray 3D data as ground truth, with the agreement quantified using the Iterative Closest Point algorithm.
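
    The Iterative Closest Point comparison alternates nearest-neighbour matching with a best-fit rigid transform until the two clouds agree. One iteration of a generic point-to-point ICP (a sketch, not the paper's implementation) might look like:

```python
import numpy as np

def icp_once(src, dst):
    """One point-to-point ICP step: match each source point to its nearest
    destination point, solve the best rigid transform (Kabsch algorithm),
    and return the aligned source cloud with the RMS residual."""
    # brute-force nearest neighbours (adequate for small clouds)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # best-fit rotation and translation from the cross-covariance SVD
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    aligned = src @ R.T + t
    rms = np.sqrt(((aligned - matched) ** 2).sum(axis=1).mean())
    return aligned, rms
```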

  15. Axial-Stereo 3-D Optical Metrology for Inner Profile of Pipes Using a Scanning Laser Endoscope

    PubMed Central

    Gong, Yuanzheng; Johnston, Richard S.; Melville, C. David; Seibel, Eric J.

    2015-01-01

    With the rapid progress in the development of optoelectronic components and computational power, 3D optical metrology has become increasingly popular in manufacturing and quality control due to its flexibility and high speed. However, most optical metrology methods are limited to external surfaces. This paper proposes a new approach to measure tiny internal 3D surfaces with a scanning fiber endoscope and an axial-stereo vision algorithm. A dense, accurate point cloud of internally machined threads was generated and compared with its corresponding X-ray 3D data as ground truth, with the agreement quantified using the Iterative Closest Point algorithm. PMID:26640425

  16. A re-evaluation of the role of vision in the activity and communication of nocturnal primates.

    PubMed

    Bearder, S K; Nekaris, K A I; Curtis, D J

    2006-01-01

    This paper examines the importance of vision in the lives of nocturnal primates in comparison to diurnal and cathemeral species. Vision is the major sense in all primates and there is evidence that the eyesight of nocturnal species is more acute and variable than has previously been recognized. Case studies of the behaviour of a galago and a loris in open woodland habitats in relation to ambient light show that Galago moholi males are more likely to travel between clumps of vegetation along the ground when the moon is up, and during periods of twilight, whereas they retreat to more continuous vegetation and travel less when the moon sets. This is interpreted as a strategy for avoiding predators that hunt on the ground when it is dark. The travel distances of Loris lydekkerianus are not affected by moonlight, but this species reduces its choice of food items from more mobile prey to mainly ants when the moon sets, indicating the importance of light when searching for high-energy supplements to its staple diet. Evidence is presented for the first time to indicate key aspects of nocturnal vision that would benefit from further research. It is suggested that the light and dark facial markings of many species convey information about species and individual identity when animals approach each other at night. Differences in the colour of the reflective eye-shine, and behavioural responses displayed when exposed to white torchlight, point to different kinds of nocturnal vision that are suited to each niche, including the possibility of some degree of colour discrimination. The ability of even specialist nocturnal species to see well in broad daylight demonstrates an inherent flexibility that would enable movement into diurnal niches. The major differences in the sensitivity and perceptual anatomy of diurnal lemurs compared to diurnal anthropoids, and the emergence of cathemerality in lemurs, are interpreted as a reflection of evolution from different ancestral stocks in very

  17. Sedna Planitia (Right Member of a Synthetic Stereo Pair)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This perspective view of Venus, generated by computer from Magellan data and color-coded with emissivity, shows part of the lowland plains in Sedna Planitia. Circular depressions with associated fracture patterns, called 'coronae', are apparently unique to the lowlands of Venus, and tend to occur in linear clusters along the planet's major tectonic belts, as seen in this image. Coronae differ greatly in size and detailed morphology: the central depression may or may not lie below the surrounding plains, and may or may not be surrounded by a raised rim or a moat outside the rim. Coronae are thought to be caused by localized 'hot spot' magmatic activity in Venus' subsurface. Intrusion of magma into the crust first pushes up the surface, after which cooling and contraction create the central depression and generate a pattern of concentric fractures. In some cases, lava may be extruded onto the surface, as seen here as bright flows in the foreground. This image is the right member of a synthetic stereo pair; the other image is PIA00313. To view the region in stereo, download the two images, arrange them side by side on the screen or in hardcopy, and view this image with the right eye and the other with the left. For best viewing, use a stereoscope or size the images so that their width is close to the interpupillary distance, about 6.6 cm (2.6 inches). Magellan MIDR quadrangle* containing this image: C1-45N011. Image resolution (m): 225. Size of region shown (E-W x N-S, in km): 1900 x 120 at front edge. Range of emissivities from violet to red: 0.82 -- 0.88. Vertical exaggeration: 20. Azimuth of viewpoint (deg clockwise from East): 13. Elevation of viewpoint (km): 300. *Quadrangle name indicates approximate center latitude (N=north, S=south) and center longitude (East).

  18. Sedna Planitia (Left Member of a Synthetic Stereo Pair)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This perspective view of Venus, generated by computer from Magellan data and color-coded with emissivity, shows part of the lowland plains in Sedna Planitia. Circular depressions with associated fracture patterns, called 'coronae', are apparently unique to the lowlands of Venus, and tend to occur in linear clusters along the planet's major tectonic belts, as seen in this image. Coronae differ greatly in size and detailed morphology: the central depression may or may not lie below the surrounding plains, and may or may not be surrounded by a raised rim or a moat outside the rim. Coronae are thought to be caused by localized 'hot spot' magmatic activity in Venus' subsurface. Intrusion of magma into the crust first pushes up the surface, after which cooling and contraction create the central depression and generate a pattern of concentric fractures. In some cases, lava may be extruded onto the surface, as seen here as bright flows in the foreground. This image is the left member of a synthetic stereo pair; the other image is PIA00314. To view the region in stereo, download the two images, arrange them side by side on the screen or in hardcopy, and view this image with the left eye and the other with the right. For best viewing, use a stereoscope or size the images so that their width is close to the interpupillary distance, about 6.6 cm (2.6 inches). Magellan MIDR quadrangle* containing this image: C1-45N011. Image resolution (m): 225. Size of region shown (E-W x N-S, in km): 1900 x 120 at front edge. Range of emissivities from violet to red: 0.82 -- 0.88. Vertical exaggeration: 20. Azimuth of viewpoint (deg clockwise from East): 13. Elevation of viewpoint (km): 300. *Quadrangle name indicates approximate center latitude (N=north, S=south) and center longitude (East).

  19. Multi-view horizon-driven sea plane estimation for stereo wave imaging on moving vessels

    NASA Astrophysics Data System (ADS)

    Bergamasco, Filippo; Benetazzo, Alvise; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro

    2016-10-01

    In the last few years stereo imaging has gained increasing popularity as an effective tool to investigate wind-generated sea waves at short and medium scales. Given the advances of computer vision techniques, the recovery of a scattered point cloud from a sea surface area is nowadays a well-consolidated technique producing excellent results both in terms of wave data resolution and accuracy. Nevertheless, almost all the subsequent analysis tasks, from the recovery of directional wave spectra to the estimation of significant wave height, are bound to two limiting conditions. First, wave data are required to be aligned to the mean sea plane. Second, a uniform distribution of 3D point samples is assumed. Since the stereo-camera rig is tilted with respect to the sea surface, perspective distortion does not allow these conditions to be met. Errors due to this problem are even more challenging if the optical instrumentation is mounted on a moving vessel, so that the mean sea plane cannot simply be obtained by averaging data from multiple subsequent frames. We address the first problem with two main contributions. First, we propose a novel horizon estimation technique to recover the attitude of a moving stereo rig with respect to the sea plane. Second, an effective weighting scheme is described to account for the non-uniform sampling of the scattered data in the estimation of the sea-plane distance. The interplay of the two allows us to provide a precise point-cloud alignment without any external positioning sensor or rig viewpoint pre-calibration. The advantages of the proposed technique are evaluated in an experimental section spanning both synthetic and real-world scenarios.
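
    The sea-plane estimation with a weighting scheme against non-uniform sampling can be sketched as a weighted least-squares fit of z = a*x + b*y + c to the scattered points; the weight definition here is a placeholder, not the paper's scheme:

```python
import numpy as np

def fit_sea_plane(points, weights):
    """Weighted least-squares fit of the plane z = a*x + b*y + c to an
    (n, 3) array of scattered points; weights can down-weight densely
    sampled regions so they do not dominate the plane estimate."""
    x, y, z = points.T
    A = np.column_stack([x, y, np.ones_like(x)])
    w = np.sqrt(weights)               # fold weights into both sides
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)
    return coeffs                      # (a, b, c)
```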

  20. Spirit Near 'Stapledon' on Sol 1802 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11781 [figures removed for brevity, see original site]

    NASA's Mars Exploration Rover Spirit used its navigation camera for the images assembled into this stereo, full-circle view of the rover's surroundings during the 1,802nd Martian day, or sol (January 26, 2009), of Spirit's mission on the surface of Mars. South is at the center; north is at both ends.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    Spirit had driven down off the low plateau called 'Home Plate' on Sol 1782 (January 6, 2009) after spending 12 months on a north-facing slope on the northern edge of Home Plate. The position on the slope (at about the 9-o'clock position in this view) tilted Spirit's solar panels toward the sun, enabling the rover to generate enough electricity to survive its third Martian winter. Tracks at about the 11-o'clock position of this panorama can be seen leading back to that 'Winter Haven 3' site from the Sol 1802 position about 10 meters (33 feet) away. For scale, the distance between the parallel wheel tracks is about one meter (40 inches).

    Where the receding tracks bend to the left, a circular pattern resulted from Spirit turning in place at a soil target informally named 'Stapledon' after William Olaf Stapledon, a British philosopher and science-fiction author who lived from 1886 to 1950. Scientists on the rover team suspected that the soil in that area might have a high concentration of silica, resembling a high-silica soil patch discovered east of Home Plate in 2007. Bright material visible in the track furthest to the right was examined with Spirit's alpha particle X-ray spectrometer and found, indeed, to be rich in silica.

    The team laid plans to drive Spirit from

  1. Space environment robot vision system

    NASA Technical Reports Server (NTRS)

    Wood, H. John; Eichhorn, William L.

    1990-01-01

    A prototype twin-camera stereo vision system for autonomous robots has been developed at Goddard Space Flight Center. Standard charge-coupled device (CCD) imagers are interfaced with commercial frame buffers and direct memory access to a computer. The overlapping portions of the images are analyzed using photogrammetric techniques to obtain information about the position and orientation of objects in the scene. The camera head consists of two 510 x 492 x 8-bit CCD cameras on individually adjustable mounts. The 16-mm effective-focal-length lenses are designed for minimum geometric distortion. The cameras can be rotated in the pitch, roll, and yaw (pan angle) directions with respect to their optical axes. Calibration routines have been developed which automatically determine the lens focal lengths and the pan angle between the two cameras; the calibration utilizes observations of a calibration structure with known geometry. Test results show the attainable precision is plus or minus 0.8 mm in range at 2 m distance using a camera separation of 171 mm. To demonstrate a task needed on Space Station Freedom, a target structure with a movable I-beam was built. The camera head can autonomously direct actuators to dock the I-beam to another one so that the two can be bolted together.
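
    The quoted precision is consistent with the first-order triangulation error model for a stereo pair. A small sketch of the two relations (the disparity-precision figure in the note below is back-calculated, not reported in the source):

```python
def depth_from_disparity(f_mm, baseline_mm, disparity_mm):
    """Pinhole stereo triangulation: Z = f * B / d."""
    return f_mm * baseline_mm / disparity_mm

def range_error(f_mm, baseline_mm, z_mm, disp_err_mm):
    """First-order range uncertainty: dZ = Z**2 * dd / (f * B)."""
    return z_mm ** 2 * disp_err_mm / (f_mm * baseline_mm)
```

    With f = 16 mm, B = 171 mm and Z = 2 m, the reported plus-or-minus 0.8 mm range precision corresponds to a disparity precision of roughly 0.55 micrometers on the sensor, i.e. sub-pixel matching.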

  2. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments.

    PubMed

    Ramon Soria, Pablo; Arrue, Begoña C; Ollero, Anibal

    2017-01-07

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs on an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage; it is then used online for detection of the targeted object and estimation of its position. This feature-based model proved robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was built using a rotary-wing UAV and a small manipulator for a final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to the payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.
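
    Feature-model matching of this kind commonly filters candidate correspondences with a nearest-neighbour distance ratio test before estimating the object's position. A generic sketch (the descriptors and the 0.75 threshold are illustrative, not values from the paper):

```python
import numpy as np

def ratio_test_match(desc_query, desc_train, ratio=0.75):
    """Match each query descriptor to its nearest training descriptor,
    keeping the match only if the best distance is clearly smaller than
    the second best (Lowe's ratio test), which rejects ambiguous matches."""
    matches = []
    for i, d in enumerate(desc_query):
        dists = np.linalg.norm(desc_train - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches
```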

  3. Virtual-stereo fringe reflection technique for specular free-form surface testing

    NASA Astrophysics Data System (ADS)

    Ma, Suodong; Li, Bo

    2016-11-01

    Due to their excellent ability to improve the performance of optical systems, free-form optics have attracted extensive interest in many fields, e.g. the optical design of astronomical telescopes, laser beam expanders, spectral imagers, etc. However, compared with traditional simple surfaces, testing such optics is usually more complex and difficult, which has been a major barrier to their manufacture and application. Fortunately, owing to the rapid development of electronic devices and computer vision technology, the fringe reflection technique (FRT), with the advantages of simple system structure, high measurement accuracy and large dynamic range, is becoming a powerful tool for specular free-form surface testing. In order to obtain absolute surface shape distributions of test objects, two or more cameras are often required in the conventional FRT, which makes the system structure more complex and the measurement cost much higher. Furthermore, high-precision synchronization between the cameras is also a troublesome issue. To overcome the aforementioned drawbacks, a virtual-stereo FRT for specular free-form surface testing is put forward in this paper. It is able to achieve absolute profiles with the help of only a single biprism and one camera, avoiding the problems of stereo FRT based on binocular or multi-ocular cameras. Preliminary experimental results demonstrate the feasibility of the proposed technique.

  4. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments

    PubMed Central

    Ramon Soria, Pablo; Arrue, Begoña C.; Ollero, Anibal

    2017-01-01

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors. PMID:28067851

  5. FPGA Implementation of Stereo Disparity with High Throughput for Mobility Applications

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Morfopolous, Arin; Matthies, Larry; Goldberg, Steven

    2011-01-01

    High-speed stereo vision can allow unmanned robotic systems to navigate safely in unstructured terrain, but the computational cost can exceed the capacity of typical embedded CPUs. In this paper, we describe an end-to-end stereo computation co-processing system optimized for fast throughput that has been implemented on a single Virtex 4 LX160 FPGA. This system is capable of operating on images from a 1024 x 768 3CCD (true RGB) camera pair at 15 Hz. Data enters the FPGA directly from the cameras via Camera Link and is rectified, pre-filtered and converted into a disparity image all within the FPGA, incurring no CPU load. Once complete, a rectified image and the final disparity image are read out over the PCI bus, for a bandwidth cost of 68 MB/sec. Within the FPGA there are 4 distinct modules: Camera Link capture, bilinear rectification, bilateral subtraction pre-filtering and Sum of Absolute Differences (SAD) disparity. Each module is described in brief along with the data flow and control logic for the system. The system was successfully fielded on Carnegie Mellon University's National Robotics Engineering Center (NREC) Crusher vehicle during extensive field trials in 2007 and 2008 and is being implemented for other surface mobility systems at JPL.
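
    The SAD disparity module's core computation sums absolute differences over a support window for each candidate disparity and keeps the best-scoring disparity per pixel. A slow Python reference for that winner-take-all logic (the deployed version is FPGA hardware; the window and search-range values here are illustrative):

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=5):
    """Winner-take-all SAD block matching on rectified grayscale images:
    for each pixel keep the disparity whose window SAD is smallest."""
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    best = np.full((h, w), np.inf)
    for d in range(max_disp):
        shifted = np.zeros_like(right)
        shifted[:, d:] = right[:, :w - d]      # shift right image by d
        ad = np.abs(left.astype(np.float64) - shifted)
        # window sum via shifted copies (a box filter would be faster);
        # np.roll wraps around, so image borders are not meaningful
        sad = np.zeros_like(ad)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                sad += np.roll(np.roll(ad, dy, axis=0), dx, axis=1)
        better = sad < best
        disp[better] = d
        best[better] = sad[better]
    return disp
```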

  6. Mobile Stereo-Mapper a Portable Kit for Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Li-Chee-Ming, J.; Armenakis, C.; Lee, R.

    2011-09-01

    A low-cost, portable, lightweight mobile stereo-mapping system (MSMS) is under development in the GeoICT Lab, Geomatics Engineering program at York University. The MSMS is designed for remote operation on board unmanned aerial vehicles (UAVs) for navigation and rapid collection of 3D spatial data. Pose estimation of the camera sensors is based on single-frequency RTK-GPS, loosely coupled in a Kalman filter with a MEMS-based IMU. The attitude and heading reference system (AHRS) calculates orientation from the gyro data, aided by accelerometer and magnetometer data to compensate for gyro drift. Two low-cost consumer digital cameras are calibrated and time-synchronized with the GPS/IMU to provide directly georeferenced stereo vision, while a video camera is used for navigation. Object coordinates are determined using rigorous photogrammetric solutions supported by direct georeferencing algorithms for accurate pose estimation of the camera sensors. Before the MSMS is considered operational, its sensor components and the integrated system itself have to undergo a rigorous calibration process to determine systematic errors and biases and to establish the relative geometry of the sensors. In this paper, the methods and results for system calibration, including camera, boresight and lever-arm calibrations, are presented. An overall accuracy assessment of the calibrated system is given using a 3D test field.
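
    The AHRS drift compensation can be illustrated, in one axis, by a complementary filter that blends the integrated gyro rate (smooth but drifting) with the accelerometer-derived angle (noisy but drift-free). This is a simplified stand-in for the system's loosely coupled Kalman filter, with a hypothetical blending gain:

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Single-axis attitude estimate: integrate the gyro rate, then pull
    the result toward the accelerometer angle by a factor (1 - alpha)
    each step so that gyro bias cannot accumulate without bound."""
    angle = accel_angles[0]
    estimates = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        estimates.append(angle)
    return estimates
```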

  7. STEREO Observations of Solar Wind in 2007-2014

    NASA Astrophysics Data System (ADS)

    Jian, Lan; Luhmann, Janet; Russell, Christopher; Blanco-Cano, Xochitl; Kilpua, Emilia; Li, Yan

    2016-04-01

    Since the launch of the twin STEREO spacecraft, we have been monitoring the solar wind and providing Level 3 event lists of large-scale solar wind and particle events to the public (http://www-ssc.igpp.ucla.edu/forms/stereo/stereo_level_3.html). The interplanetary coronal mass ejections (ICMEs), stream interaction regions (SIRs), interplanetary shocks, and solar energetic particles (based on high energy telescope data) have been surveyed for 2007-2014, before STEREO A entered superior solar conjunction and contact with STEREO B was lost. In conjunction with our previous observations of the same solar wind structures in 1995-2009 using Wind/ACE data and the same identification criteria, we study the solar cycle variations of these structures, in particular comparing the same phase of solar cycles 23 and 24. Although the sunspot number at solar maximum 24 is only 60% of the level at the last solar maximum, Gopalswamy et al. (2015a, b) found there were more halo CMEs in cycle 24 and that the number of magnetic clouds did not decline either. We examine whether the two vantage points of STEREO provide a view consistent with the above finding. In addition, because the twin STEREO spacecraft have experienced the full range of longitudinal separation, 0-360 degrees, they have provided numerous opportunities for multipoint observations. We will report findings on the spatial scope of ICMEs, including their driven shocks, and on the stability of SIRs from the large event base.

  8. STEREO Space Weather and the Space Weather Beacon

    NASA Technical Reports Server (NTRS)

    Biesecker, D. A.; Webb, D. F.; St. Cyr, O. C.

    2007-01-01

    The Solar Terrestrial Relations Observatory (STEREO) is first and foremost a solar and interplanetary research mission, with one of the natural applications being in the area of space weather. The potential for space weather applications is so great that NOAA has worked to incorporate the real-time data into their forecast center as much as possible. A subset of the STEREO data will be continuously downlinked in a real-time broadcast mode, called the Space Weather Beacon. Within the research community there has been considerable interest in conducting space weather related research with STEREO. Some of this research is geared towards making an immediate impact, while other work is still very much in the research domain. There are many areas where STEREO might contribute, and we cannot predict where all the successes will come from. Here we discuss how STEREO will contribute to space weather and many of the specific research projects proposed to address STEREO space weather issues. We also discuss some specific uses of the STEREO data in the NOAA Space Environment Center.

  9. Ames stereo pipeline-derived digital terrain models of Mercury from MESSENGER stereo imaging

    NASA Astrophysics Data System (ADS)

    Fassett, Caleb I.

    2016-12-01

    In this study, 96 digital terrain models (DTMs) of Mercury were created using the Ames Stereo Pipeline, from 1456 pairs of stereo images from the Mercury Dual Imaging System instrument on MESSENGER. Although these DTMs cover only 1% of the surface of Mercury, they enable three-dimensional characterization of landforms at horizontal resolutions of 50-250 m/pixel and vertical accuracy of tens of meters. This is valuable in regions where the more precise measurements from the Mercury Laser Altimeter (MLA) are sparse. MLA measurements nonetheless provide an important geodetic framework for the derived stereo products. These DTMs, which are publicly released in conjunction with this paper, reveal the topography of features at relatively small scales, including craters, graben, hollows, pits, scarps, and wrinkle ridges. Measurements from these data indicate that: (1) hollows have a median depth of 32 m, in basic agreement with earlier shadow measurements, (2) some of the deep pits (up to 4 km deep) that are interpreted to form via volcanic processes on Mercury have surrounding rims or rises, but others do not, and (3) some pits have two or more distinct, low-lying interior minima that could represent multiple vents.

  10. Robust photometric stereo using structural light sources

    NASA Astrophysics Data System (ADS)

    Han, Tian-Qi; Cheng, Yue; Shen, Hui-Liang; Du, Xin

    2014-05-01

    We propose a robust photometric stereo method by using structural arrangement of light sources. In the arrangement, light sources are positioned on a planar grid and form a set of collinear combinations. The shadow pixels are detected by adaptive thresholding. The specular highlight and diffuse pixels are distinguished according to their intensity deviations of the collinear combinations, thanks to the special arrangement of light sources. The highlight detection problem is cast as a pattern classification problem and is solved using support vector machine classifiers. Considering the possible misclassification of highlight pixels, the ℓ1 regularization is further employed in normal map estimation. Experimental results on both synthetic and real-world scenes verify that the proposed method can robustly recover the surface normal maps in the case of heavy specular reflection and outperforms the state-of-the-art techniques.
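
    Once shadowed and highlighted pixels have been handled, the remaining diffuse observations determine the normal map through the standard Lambertian least-squares system I = L(rho n). A minimal sketch of that final step, omitting the paper's SVM classification and l1 regularization:

```python
import numpy as np

def estimate_normals(intensities, light_dirs):
    """Lambertian photometric stereo: solve L @ (rho * n) = I per pixel
    in the least-squares sense, then split albedo rho from unit normal n.
    intensities: (m, npix) stacked images; light_dirs: (m, 3) unit vectors."""
    L = np.asarray(light_dirs, dtype=np.float64)
    I = np.asarray(intensities, dtype=np.float64)
    g, *_ = np.linalg.lstsq(L, I, rcond=None)      # g = rho * n, (3, npix)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.clip(albedo, 1e-12, None)
    return normals, albedo
```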

  11. Stereo Imaging Velocimetry System and Method

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor)

    2003-01-01

    A system and a method are provided for measuring three-dimensional velocities at a plurality of points in a fluid, employing at least two cameras positioned approximately perpendicular to one another. Image frames captured by the cameras may be filtered using background subtraction with outlier rejection and spike-removal filtering. The cameras may be calibrated to accurately represent image coordinates in a world coordinate system using calibration grids modified by warp transformations. The two-dimensional views of the cameras may be recorded for image processing and particle track determination. The tracer particles may be tracked on a two-dimensional basis and then stereo-matched to obtain three-dimensional locations of the particles as a function of time so that velocities can be measured therefrom.
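
    The filtering step described above can be sketched as a per-pixel median background subtraction followed by a 3x3 median filter, which suppresses isolated single-pixel spikes while keeping multi-pixel particle images; the threshold and window size are illustrative assumptions, not values from the patent:

```python
import numpy as np

def extract_particles(frames, thresh=10.0):
    """Subtract the per-pixel median background from each frame, then
    zero out pixels whose 3x3 neighbourhood median response is low
    (isolated spikes), keeping extended particle images."""
    stack = np.asarray(frames, dtype=np.float64)
    background = np.median(stack, axis=0)
    cleaned = []
    for frame in stack:
        diff = np.abs(frame - background)
        # 3x3 median via shifted copies; np.roll wraps at the borders
        shifts = [np.roll(np.roll(diff, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        med = np.median(shifts, axis=0)
        cleaned.append(np.where(med > thresh, diff, 0.0))
    return cleaned
```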

  12. Stereo Images of Wind Tails Near Chimp

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This stereo image pair of the rock 'Chimp' was taken by the Sojourner rover's front cameras on Sol 72 (September 15). Fine-scale texture on Chimp and other rocks is clearly visible. Wind tails, oriented from lower right to upper left, are seen next to small pebbles in the foreground. These were most likely produced by wind action.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech).

  13. 'Snow White' Trench After Scraping (Stereo View)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This 3D view from the Surface Stereo Imager on NASA's Phoenix Mars Lander shows the trench informally named 'Snow White.' This anaglyph was taken after a series of scrapings by the lander's Robotic Arm on the 58th Martian day, or sol, of the mission (July 23, 2008). The scrapings were done in preparation for collecting a sample for analysis from a hard subsurface layer where soil may contain frozen water.

    The trench is 4 to 5 centimeters (about 2 inches) deep, about 23 centimeters (9 inches) wide and about 60 centimeters (24 inches) long.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  14. Stereo View of Phoenix Test Sample Site

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This anaglyph image, acquired by NASA's Phoenix Lander's Surface Stereo Imager on Sol 7, the seventh day of the mission (June 1, 2008), shows a stereoscopic 3D view of the so-called 'Knave of Hearts' first-dig test area to the north of the lander. The Robotic Arm's scraping blade left a small horizontal depression above where the sample was taken.

    Scientists speculate that white material in the depression left by the dig could represent ice or salts that precipitated into the soil. This material is likely the same white material observed in the sample in the Robotic Arm's scoop.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  15. NASA Vision

    NASA Technical Reports Server (NTRS)

    Fenton, Mary (Editor); Wood, Jennifer (Editor)

    2003-01-01

    This newsletter contains several articles, primarily on International Space Station (ISS) crewmembers and their activities, as well as the activities of NASA administrators. Other subjects covered in the articles include the investigation of the Space Shuttle Columbia accident, activities at NASA centers, Mars exploration, and a collision avoidance test on an unmanned aerial vehicle (UAV). The ISS articles cover landing in a Soyuz capsule, photography from the ISS, and the Expedition Seven crew.

  16. MISR Sees the Sierra Nevadas in Stereo

    NASA Technical Reports Server (NTRS)

    2000-01-01

    These MISR images of the Sierra Nevada mountains near the California-Nevada border were acquired on August 12, 2000 during Terra orbit 3472. On the left is an image from the vertical-viewing (nadir) camera. On the right is a stereo 'anaglyph' created using the nadir and 45.6-degree forward-viewing cameras, providing a three-dimensional view of the scene when viewed with red/blue glasses. The red filter should be placed over your left eye. To facilitate the stereo viewing, the images have been oriented with north toward the left.
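    The anaglyph construction described here, with the left-eye (nadir) view driving the red channel and the right-eye (forward) view driving the cyan channels, can be sketched as follows; the array shapes and pixel values are illustrative assumptions:

```python
import numpy as np

def make_anaglyph(left_gray, right_gray):
    """Stack two grayscale views into an RGB red/cyan anaglyph:
    red <- left-eye image, green and blue <- right-eye image."""
    h, w = left_gray.shape
    rgb = np.zeros((h, w, 3), dtype=left_gray.dtype)
    rgb[..., 0] = left_gray    # red channel: left-eye (nadir) view
    rgb[..., 1] = right_gray   # green \ right-eye (forward) view
    rgb[..., 2] = right_gray   # blue  /
    return rgb

# Toy 4x4 "images" with distinct brightness per eye.
left = np.full((4, 4), 200, dtype=np.uint8)
right = np.full((4, 4), 50, dtype=np.uint8)
ana = make_anaglyph(left, right)
print(ana[0, 0])   # -> [200  50  50]
```

Viewed through red/blue glasses with the red filter over the left eye, each eye then sees only its intended view, which is what produces the depth effect described above.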

    Some prominent features are Mono Lake, in the center of the images; Walker Lake, to its left; and Lake Tahoe, near the lower left. This view of the Sierra Nevadas includes Yosemite, Kings Canyon, and Sequoia National Parks. Mount Whitney, the highest peak in the contiguous 48 states (elev. 14,495 feet), is visible near the righthand edge. Above it (to the east), the Owens Valley shows up prominently between the Sierra Nevada and Inyo ranges.

    Precipitation falling as rain or snow on the Sierras feeds numerous rivers flowing southwestward into the San Joaquin Valley. The abundant fields of this productive agricultural area can be seen along the lower right; a large number of reservoirs that supply water for crop irrigation are apparent in the western foothills of the Sierras. Urban areas in the valley appear as gray patches; among the California cities that are visible are Fresno, Merced, and Modesto.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  17. Explaining Polarization Reversals in STEREO Wave Data

    NASA Technical Reports Server (NTRS)

    Breneman, A.; Cattell, C.; Wygant, J.; Kersten, K.; Wilson, L. B., III; Dai, L.; Colpitts, C.; Kellogg, P. J.; Goetz, K.; Paradise, A.

    2012-01-01

    Recently Breneman et al. reported observations of large-amplitude lightning and transmitter whistler mode waves from two STEREO passes through the inner radiation belt (L<2). Hodograms of the electric field in the plane transverse to the magnetic field showed that the transmitter waves underwent periodic polarization reversals. Specifically, their polarization would cycle through a pattern of right-hand to linear to left-hand polarization at a rate of roughly 200 Hz. The lightning whistlers were observed to be left-hand polarized at frequencies greater than the lower hybrid frequency and less than the transmitter frequency (21.4 kHz), and right-hand polarized otherwise. In the inner radiation belt, only right-hand polarized waves should exist in the whistler mode frequency range, and these reversals were not explained in the previous paper. We show, with a combination of observations and simulated wave superposition, that these polarization reversals are due to the beating of an incident electromagnetic whistler mode wave at 21.4 kHz and linearly polarized, symmetric lower hybrid sidebands Doppler-shifted from the incident wave by +/-200 Hz. The existence of the lower hybrid waves is consistent with the parametric decay mechanism of Lee and Kuo, whereby an incident whistler mode wave decays into symmetric, short-wavelength lower hybrid waves and a purely growing (zero-frequency) mode. Like the lower hybrid waves, the purely growing mode is Doppler-shifted by 200 Hz as observed on STEREO. This decay mechanism has previously been reported in the upper ionosphere at equatorial latitudes and is thought to have a direct connection with explosive spread F enhancements. As such, it may represent another dissipation mechanism of VLF wave energy in the ionosphere and may help to explain the deficit of observed lightning and transmitter energy in the inner radiation belts reported by Starks et al.
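    The beating mechanism proposed above can be reproduced numerically: a right-hand circularly polarized carrier plus linearly polarized sidebands offset by ±Δf yields a net polarization sense that cycles right → linear → left at Δf. This is a hedged sketch of the superposition idea only; the frequencies are scaled down from the reported 21.4 kHz and 200 Hz purely to keep the example short:

```python
import numpy as np

f0, df, fs = 1000.0, 10.0, 100_000.0      # carrier, offset, sample rate (Hz)
t = np.arange(0, 0.2, 1 / fs)
w0, wd = 2 * np.pi * f0, 2 * np.pi * df

# Right-hand circular carrier plus x-polarized sidebands of unit amplitude:
# Ex collapses to cos(w0 t) * (1 + 2 cos(wd t)), so the ellipse handedness
# follows the sign of the slowly varying envelope (1 + 2 cos(wd t)).
Ex = np.cos(w0 * t) + np.cos((w0 + wd) * t) + np.cos((w0 - wd) * t)
Ey = np.sin(w0 * t)

# Handedness per sample: sign of Ex * dEy/dt - Ey * dEx/dt, smoothed
# over one carrier cycle so the beat structure stands out.
lz = Ex * np.gradient(Ey, t) - Ey * np.gradient(Ex, t)
cycle = int(fs / f0)
kernel = np.ones(cycle) / cycle
sense = np.sign(np.convolve(lz, kernel, mode="same"))

# The smoothed sense flips between +1 (right-hand) and -1 (left-hand)
# at the sideband offset rate, i.e. a few reversals in this 0.2 s window.
reversals = np.count_nonzero(np.diff(sense[cycle:-cycle]) != 0)
print(reversals)
```

The reversal rate tracks the sideband offset df, just as the observed 200 Hz reversal rate tracks the ±200 Hz Doppler shift of the lower hybrid sidebands.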

  18. 'Lyell' Panorama inside Victoria Crater (Stereo)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    During four months prior to the fourth anniversary of its landing on Mars, NASA's Mars Exploration Rover Opportunity examined rocks inside an alcove called 'Duck Bay' in the western portion of Victoria Crater. The main body of the crater appears in the upper right of this stereo panorama, with the far side of the crater lying about 800 meters (half a mile) away. Bracketing that part of the view are two promontories on the crater's rim at either side of Duck Bay. They are 'Cape Verde,' about 6 meters (20 feet) tall, on the left, and 'Cabo Frio,' about 15 meters (50 feet) tall, on the right. The rest of the image, other than sky and portions of the rover, is ground within Duck Bay.

    Opportunity's targets of study during the last quarter of 2007 were rock layers within a band exposed around the interior of the crater, about 6 meters (20 feet) from the rim. Bright rocks within the band are visible in the foreground of the panorama. The rover science team assigned informal names to three subdivisions of the band: 'Steno,' 'Smith,' and 'Lyell.'

    This view incorporates many images taken by Opportunity's panoramic camera (Pancam) from the 1,332nd through 1,379th Martian days, or sols, of the mission (Oct. 23 to Dec. 11, 2007). It combines a stereo pair so that it appears three-dimensional when seen through blue-red glasses. Some visible patterns in dark and light tones are the result of combining frames that were affected by dust on the front sapphire window of the rover's camera.

    Opportunity landed on Jan. 25, 2004, Universal Time, (Jan. 24, Pacific Time) inside a much smaller crater about 6 kilometers (4 miles) north of Victoria Crater, to begin a surface mission designed to last 3 months and drive about 600 meters (0.4 mile).

  19. On the Rim of 'Victoria Crater' (Stereo)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA08780

    [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA08780

    NASA's Mars rover Opportunity reached the rim of 'Victoria Crater' in Mars' Meridiani Planum region with a 26-meter (85-foot) drive during the rover's 951st Martian day, or sol (Sept. 26, 2006). After the drive, the rover's navigation camera took the three exposures combined into this view of the crater's interior. This crater has been the mission's long-term destination for the past 21 Earth months.

    A half mile in the distance one can see about 20 percent of the far side of the crater framed by the rocky cliffs in the foreground to the left and right of the image. The rim of the crater is composed of alternating promontories, rocky points towering approximately 70 meters (230 feet) above the crater floor, and recessed alcoves. The bottom of the crater is covered by sand that has been shaped into ripples by the Martian wind.

    The position at the end of the sol 951 drive is about six meters from the lip of an alcove called 'Duck Bay.' The rover team planned a drive for sol 952 that would move a few more meters forward, plus more imaging of the near and far walls of the crater.

    Victoria Crater is about five times wider than 'Endurance Crater,' which Opportunity spent six months examining in 2004, and about 40 times wider than 'Eagle Crater,' where Opportunity first landed.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  20. STEREO database of interplanetary Langmuir electric waveforms

    NASA Astrophysics Data System (ADS)

    Briand, C.; Henri, P.; Génot, V.; Lormant, N.; Dufourg, N.; Cecconi, B.; Nguyen, Q. N.; Goetz, K.

    2016-02-01

    This paper describes a database of electric waveforms that is available at the Centre de Données de la Physique des Plasmas (CDPP, http://cdpp.eu/). The database is specifically dedicated to waveforms of Langmuir/Z-mode waves. These waves occur in numerous kinetic processes involving electrons in space plasmas. Statistical analysis of a large data set of such waves is therefore of interest, e.g., for studying the relaxation of high-velocity electron beams generated at interplanetary shock fronts, in current sheets, and in magnetic reconnection regions; the transfer of energy between high and low frequencies; and the generation of electromagnetic waves. The Langmuir waveforms were recorded by the Time Domain Sampler (TDS) of the WAVES radio instrument on board the STEREO mission. In this paper, we detail the criteria used to identify the Langmuir/Z-mode waves among the whole set of waveforms from the STEREO spacecraft. A database covering the period from November 2006 to August 2014 is provided. It includes the electric waveforms expressed in the normalized frame (B, B × Vsw, B × (B × Vsw)), with B and Vsw the local magnetic field and solar wind velocity vectors, and the local magnetic field in the variance frame, in an interval of ±1.5 min around the time of each Langmuir event. Quicklooks are also provided that display the three components of the electric waveforms together with the spectrum of E∥, as well as the magnitude and components of the magnetic field in the variance frame over the 3 min interval. Finally, the distribution of the peak amplitudes of the Langmuir/Z-mode waves is also analyzed.
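    The normalized frame quoted above is an orthogonal basis built from the local magnetic field B and solar wind velocity Vsw. A minimal sketch of its construction follows; the numeric vectors are illustrative placeholders, not STEREO data:

```python
import numpy as np

def normalized_frame(B, Vsw):
    """Return unit vectors e1 || B, e2 || B x Vsw, e3 || B x (B x Vsw).
    The three directions are mutually orthogonal by construction."""
    e1 = B / np.linalg.norm(B)
    e2 = np.cross(B, Vsw)
    e2 /= np.linalg.norm(e2)
    e3 = np.cross(B, np.cross(B, Vsw))
    e3 /= np.linalg.norm(e3)
    return np.stack([e1, e2, e3])

B = np.array([4.0, 1.0, -2.0])        # nT, illustrative
Vsw = np.array([-400.0, 20.0, 0.0])   # km/s, illustrative
R = normalized_frame(B, Vsw)

# R is a rotation-like matrix: waveform vectors measured in spacecraft
# coordinates are projected into the frame via R @ E.
print(np.round(R @ R.T, 6))
```

Because e2 and e3 are both perpendicular to B, this frame separates the parallel component E∥ (along e1) from the transverse components, which is what the quicklook spectra of E∥ rely on.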