Science.gov

Sample records for active stereo vision

  1. Active stereo vision routines using PRISM-3

    NASA Astrophysics Data System (ADS)

    Antonisse, Hendrick J.

    1992-11-01

This paper describes work in progress on a set of visual routines and supporting capabilities implemented on the PRISM-3 real-time vision system. The routines are used in an outdoor robot retrieval task. The task requires the robot to locate a donor agent -- a Hero2000 -- which holds the object to be retrieved, to navigate to the donor, to accept the object from the donor, and to return to its original location. The routines described here will form an integral part of the navigation and wide-area search tasks. Active perception is exploited to locate the donor using real-time stereo ranging directed by a pan/tilt/verge mechanism. A framework for orchestrating visual search has been implemented and is briefly described.

  2. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  3. Stereo vision and strabismus

    PubMed Central

    Read, J C A

    2015-01-01

    Binocular stereopsis, or stereo vision, is the ability to derive information about how far away objects are, based solely on the relative positions of the object in the two eyes. It depends on both sensory and motor abilities. In this review, I briefly outline some of the neuronal mechanisms supporting stereo vision, and discuss how these are disrupted in strabismus. I explain, in some detail, current methods of assessing stereo vision and their pros and cons. Finally, I review the evidence supporting the clinical importance of such measurements. PMID:25475234

  4. Stereo vision and strabismus.

    PubMed

    Read, J C A

    2015-02-01

    Binocular stereopsis, or stereo vision, is the ability to derive information about how far away objects are, based solely on the relative positions of the object in the two eyes. It depends on both sensory and motor abilities. In this review, I briefly outline some of the neuronal mechanisms supporting stereo vision, and discuss how these are disrupted in strabismus. I explain, in some detail, current methods of assessing stereo vision and their pros and cons. Finally, I review the evidence supporting the clinical importance of such measurements. PMID:25475234

  5. Panoramic stereo sphere vision

    NASA Astrophysics Data System (ADS)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost in the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a specially combined fish-eye lens module, capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model, and parameter calibration method in this paper. Video surveillance, autonomous robot navigation, virtual reality, driving assistance, multiple maneuvering-target tracking, automatic environment mapping, and attitude estimation are some of the applications which will benefit from PSSV.

  6. Neural architectures for stereo vision.

    PubMed

    Parker, Andrew J; Smith, Jackson E T; Krug, Kristine

    2016-06-19

Stereoscopic vision delivers a sense of depth based on binocular information but additionally acts as a mechanism for achieving correspondence between patterns arriving at the left and right eyes. We analyse quantitatively the cortical architecture for stereoscopic vision in two areas of macaque visual cortex. For primary visual cortex V1, the result is consistent with a module that is isotropic in cortical space with a diameter of at least 3 mm in surface extent. This implies that the module for stereo is larger than the repeat distance between ocular dominance columns in V1. By contrast, in the extrastriate cortical area V5/MT, which has a specialized architecture for stereo depth, the module for representation of stereo is about 1 mm in surface extent, so the representation of stereo in V5/MT is more compressed than V1 in terms of neural wiring of the neocortex. The surface extent estimated for stereo in V5/MT is consistent with measurements of its specialized domains for binocular disparity. Within V1, we suggest that long-range horizontal, anatomical connections form functional modules that serve both binocular and monocular pattern recognition: this common function may explain the distortion and disruption of monocular pattern vision observed in amblyopia. This article is part of the themed issue 'Vision in our three-dimensional world'. PMID:27269604

  7. Neural architectures for stereo vision

    PubMed Central

    2016-01-01

    Stereoscopic vision delivers a sense of depth based on binocular information but additionally acts as a mechanism for achieving correspondence between patterns arriving at the left and right eyes. We analyse quantitatively the cortical architecture for stereoscopic vision in two areas of macaque visual cortex. For primary visual cortex V1, the result is consistent with a module that is isotropic in cortical space with a diameter of at least 3 mm in surface extent. This implies that the module for stereo is larger than the repeat distance between ocular dominance columns in V1. By contrast, in the extrastriate cortical area V5/MT, which has a specialized architecture for stereo depth, the module for representation of stereo is about 1 mm in surface extent, so the representation of stereo in V5/MT is more compressed than V1 in terms of neural wiring of the neocortex. The surface extent estimated for stereo in V5/MT is consistent with measurements of its specialized domains for binocular disparity. Within V1, we suggest that long-range horizontal, anatomical connections form functional modules that serve both binocular and monocular pattern recognition: this common function may explain the distortion and disruption of monocular pattern vision observed in amblyopia. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269604

  8. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  9. Stereo 3-D Vision in Teaching Physics

    NASA Astrophysics Data System (ADS)

    Zabunov, Svetoslav

    2012-03-01

Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The current paper describes the modern stereo 3-D technologies that are applicable to various tasks in teaching physics in schools, colleges, and universities. Examples of stereo 3-D simulations developed by the author can be observed online.

  10. Stereo vision with distance and gradient recognition

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

Robot vision technology is needed for stable walking, object recognition, and movement to a target spot. With sensors that use infrared rays or ultrasound, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot much more powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed for it to proceed without failure. This study developed an algorithm for recognizing the distance and gradient of the environment through a stereo matching process.
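
    The distance-and-gradient recognition described above rests on the standard rectified-stereo relation Z = f·B/d. A minimal sketch of that step (the focal length, baseline, and disparity values below are illustrative assumptions, not the paper's calibration):

```python
import numpy as np

# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# f (focal length in pixels) and B (baseline in metres) are assumed values.

def depth_from_disparity(disparity, focal_px=700.0, baseline_m=0.12):
    """Convert a disparity map (pixels) into a depth map (metres)."""
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)   # zero disparity -> infinitely far
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

def surface_gradient(depth):
    """Finite-difference depth gradient, e.g. to judge the slope of an incline."""
    gy, gx = np.gradient(depth)
    return gx, gy

d = np.array([[42.0, 42.0],
              [28.0, 28.0]])
Z = depth_from_disparity(d)       # 700 * 0.12 / 42 = 2.0 m, / 28 = 3.0 m
gx, gy = surface_gradient(Z)      # gy reflects the depth change between rows
```

    A row-wise gradient of this kind is one way a biped robot could distinguish a flat floor from an inclined plane or a step edge.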

  11. Near real-time stereo vision system

    NASA Astrophysics Data System (ADS)

    Matthies, Larry H.; Anderson, Charles H.

    1991-12-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.

  12. Near real-time stereo vision system

    NASA Astrophysics Data System (ADS)

    Anderson, Charles H.; Matthies, Larry H.

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
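
    The pipeline these two records describe, a bandpass (Laplacian) pyramid level followed by least-squares correlation over a disparity search range, can be sketched as below. A box-blur difference stands in for a true Laplacian pyramid level, and the window and search sizes are arbitrary assumptions:

```python
import numpy as np

def bandpass(img):
    # One crude Laplacian-pyramid-style band: the image minus its 3x3 box blur.
    img = np.asarray(img, dtype=float)
    p = np.pad(img, 1, mode="edge")
    blur = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return img - blur

def disparity_map(left, right, max_disp=6, win=2):
    # Least-squares (SSD) correlation along the rows of a rectified pair.
    L, R = bandpass(left), bandpass(right)
    h, w = L.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = L[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.sum((patch - R[y - win:y + win + 1,
                                       x - d - win:x - d + win + 1]) ** 2)
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

rng = np.random.default_rng(0)
left = rng.random((16, 40))
right = np.roll(left, -3, axis=1)   # synthetic pair with a uniform 3-pixel disparity
dmap = disparity_map(left, right)   # interior disparities recover the shift
```

    Running the matcher on a coarse pyramid level first, then refining at finer levels, is what makes the near-real-time rates quoted above feasible.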

  13. Stereo Vision By Pyramidal Bli Graph Matching

    NASA Astrophysics Data System (ADS)

    Shen, Jun; Castan, Serge; Zhao, Jian

    1988-04-01

We propose the pyramidal BLI (Binary Laplacian Image) graph matching method for stereo vision, which uses local as well as global similarities to ensure good precision of the matching results and to eliminate ambiguities. Because the BLI is detected by the DRF method, which has a fast implementation, and matching between graphs is fast, a pseudo-real-time system is possible.

  14. A New Fast Algorithm of Stereo Vision

    NASA Astrophysics Data System (ADS)

    Shen, Jun; Castan, Serge

    1986-06-01

In this paper, the DRF (Difference of Recursive Filters) method is proposed for stereo vision. One obtains the BLIs (Binary Laplacian Images) of the stereo-pair images by the DRF method, and the disparities are found by correlation between the BLIs. Some experimental results are also presented.
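
    A sketch of the idea: reduce each image to the sign of a Laplacian-like response (here an image-minus-box-blur approximation standing in for the authors' recursive filter pair), then pick the disparity that maximizes binary agreement between patches:

```python
import numpy as np

def binary_laplacian(img):
    # BLI stand-in: sign of (image - 3x3 box blur). The paper's DRF filters
    # are approximated here by a simple box blur.
    img = np.asarray(img, dtype=float)
    p = np.pad(img, 1, mode="edge")
    blur = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return img > blur

def bli_disparity(bli_l, bli_r, x, y, max_disp=6, win=2):
    # Disparity that maximizes agreement between binary patches.
    best_d, best_agree = 0, -1
    patch = bli_l[y - win:y + win + 1, x - win:x + win + 1]
    for d in range(max_disp + 1):
        agree = np.sum(patch == bli_r[y - win:y + win + 1,
                                      x - d - win:x - d + win + 1])
        if agree > best_agree:
            best_d, best_agree = d, agree
    return best_d

rng = np.random.default_rng(1)
left = rng.random((16, 40))
right = np.roll(left, -2, axis=1)   # synthetic pair, true disparity 2
bl, br = binary_laplacian(left), binary_laplacian(right)
d = bli_disparity(bl, br, x=20, y=8)
```

    Counting agreements between single-bit images is far cheaper than correlating grey levels, which is what makes the pseudo-real-time claim plausible.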

  15. Forward Obstacle Detection System by Stereo Vision

    NASA Astrophysics Data System (ADS)

    Iwata, Hiroaki; Saneyoshi, Keiji

Forward obstacle detection is needed to prevent car accidents. We have developed a forward obstacle detection system that achieves good detectability and distance accuracy using stereo vision alone. The system runs in real time on a stereo processing system based on a Field-Programmable Gate Array (FPGA). Road surfaces are detected so that the drivable space can be delimited, and a smoothing filter is also used; owing to these, the accuracy of distance is improved. In experiments, the system could detect forward obstacles 100 m away, its distance error up to 80 m was less than 1.5 m, and it could immediately detect cutting-in objects.

  16. Connectionist model-based stereo vision for telerobotics

    NASA Technical Reports Server (NTRS)

    Hoff, William; Mathis, Donald

    1989-01-01

Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
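
    The parameter-network idea can be sketched as a Hough-style accumulator: one unit per discretized (h, m_x, m_y) combination, with each reconstructed 3D point raising the activity of every unit whose plane z = h + m_x·x + m_y·y passes near it. The grid values and tolerance below are arbitrary assumptions (the paper's networks are trained by gradient descent rather than filled by voting):

```python
import numpy as np

def plane_parameter_network(points, h_vals, mx_vals, my_vals, tol=0.05):
    # Activity of unit (i, j, k) = number of 3D points consistent with the
    # plane hypothesis z = h_vals[i] + mx_vals[j] * x + my_vals[k] * y.
    act = np.zeros((len(h_vals), len(mx_vals), len(my_vals)))
    for (x, y, z) in points:
        for i, h in enumerate(h_vals):
            for j, mx in enumerate(mx_vals):
                for k, my in enumerate(my_vals):
                    if abs(h + mx * x + my * y - z) < tol:
                        act[i, j, k] += 1.0
    return act

# Points sampled from the plane z = 1.0 + 0.5 * x (so m_y = 0).
pts = [(0.1 * i, 0.1 * j, 1.0 + 0.05 * i) for i in range(5) for j in range(5)]
act = plane_parameter_network(pts, [0.5, 1.0, 1.5], [0.0, 0.5, 1.0], [-0.5, 0.0, 0.5])
best = np.unravel_index(np.argmax(act), act.shape)   # most active unit
```

    The most active unit plays the role of the highest-confidence surface hypothesis; stereo matches inconsistent with it can then be rejected as ambiguities.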

  17. Stereo Vision: The Haves and Have-Nots

    PubMed Central

    To, Long; Zhou, Jiawei; Wang, Guangyu; Cooperstock, Jeremy R.

    2015-01-01

Animals with front-facing eyes benefit from a substantial overlap in the visual fields of the two eyes, and devote specialized brain processes to using the horizontal spatial disparities produced by viewing the same object with two laterally placed eyes to derive depth, or 3-D stereo, information. This provides the advantage of breaking the camouflage of objects in front of a similarly textured background, and it improves hand-eye coordination for grasping objects close at hand. It is widely thought that about 5% of the population have a lazy eye and lack stereo vision, so it is often supposed that most of the population (95%) have good stereo abilities. We show that this is not the case: 68% have good to excellent stereo (the haves) and 32% have moderate to poor stereo (the have-nots). Why so many people lack good 3-D stereo vision is unclear, but the cause is likely to be neural and reversible. PMID:27433314

  18. Stereo Vision: The Haves and Have-Nots.

    PubMed

    Hess, Robert F; To, Long; Zhou, Jiawei; Wang, Guangyu; Cooperstock, Jeremy R

    2015-06-01

Animals with front-facing eyes benefit from a substantial overlap in the visual fields of the two eyes, and devote specialized brain processes to using the horizontal spatial disparities produced by viewing the same object with two laterally placed eyes to derive depth, or 3-D stereo, information. This provides the advantage of breaking the camouflage of objects in front of a similarly textured background, and it improves hand-eye coordination for grasping objects close at hand. It is widely thought that about 5% of the population have a lazy eye and lack stereo vision, so it is often supposed that most of the population (95%) have good stereo abilities. We show that this is not the case: 68% have good to excellent stereo (the haves) and 32% have moderate to poor stereo (the have-nots). Why so many people lack good 3-D stereo vision is unclear, but the cause is likely to be neural and reversible. PMID:27433314

  19. The study of stereo vision technique for the autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Li, Pei; Wang, Xi; Wang, Jiang-feng

    2015-08-01

Stereo vision technology using two or more cameras can recover 3D information within the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle to judge the pavement conditions within the field of view and to measure the obstacles on the road. In this paper, stereo vision technology for obstacle-avoidance measurement on an autonomous vehicle is studied, and the key techniques are analyzed and discussed. The system hardware is built and the software debugged, and the measurement performance is then illustrated with measured data. Experiments show that 3D reconstruction within the field of view can be performed effectively by stereo vision technology, providing a basis for judging pavement conditions. Compared with the navigation radar used in unmanned vehicle measuring systems, the stereo vision system has advantages such as low cost and measuring distance, so it has good application prospects.

  20. Classification error analysis in stereo vision

    NASA Astrophysics Data System (ADS)

    Gross, Eitan

    2015-07-01

Depth perception in humans is obtained by comparing images generated by the two eyes to each other. Given the highly stochastic nature of neurons in the brain, this comparison requires maximizing the mutual information (MI) between the neuronal responses in the two eyes by distributing the coding information across a large number of neurons. Unfortunately, MI is not an extensive quantity, making it very difficult to predict how the accuracy of depth perception will vary with the number of neurons (N) in each eye. To address this question we present a two-arm, distributed, decentralized sensor detection model. We demonstrate how the system can extract depth information from a pair of discrete-valued stimuli, represented here by a pair of random dot-matrix stereograms. Using the theory of large deviations, we calculated how the global average error probability of our detector and the MI between the two arms' outputs vary with N. We found that MI saturates exponentially with N at a rate which decays as 1/N. The rate function approaches the Chernoff distance between the two probability distributions asymptotically. Our results may have implications for computer stereo vision that uses Hebbian-based algorithms for terrestrial navigation.
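
    The asymptotic rate invoked above, the Chernoff distance between two class-conditional distributions, can be computed for discrete distributions by a grid search over the exponent. This is a generic illustration of the quantity, not the paper's detector:

```python
import numpy as np

def chernoff_distance(p, q, n_grid=999):
    # C(p, q) = max over 0 < s < 1 of  -log( sum_i p_i^s * q_i^(1-s) ).
    p, q = np.asarray(p, float), np.asarray(q, float)
    s_grid = np.linspace(0.001, 0.999, n_grid)
    vals = [-np.log(np.sum(p ** s * q ** (1.0 - s))) for s in s_grid]
    return max(vals)

# Symmetric example: the optimum sits at s = 1/2,
# giving -log(2 * sqrt(0.9 * 0.1)) = -log(0.6).
c = chernoff_distance([0.9, 0.1], [0.1, 0.9])
```

    Identical distributions give a distance of zero, and the distance grows as the two arms' response distributions separate, which is why it bounds the decay rate of the detection error.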

  1. Binocular stereo vision system design for lunar rover

    NASA Astrophysics Data System (ADS)

    Chu, Jun; Jiao, Chunlin; Guo, Hang; Zhang, Xiaoyu

    2007-11-01

In this paper, we integrate a pair of CCD cameras and a digital pan/tilt with two degrees of freedom into a binocular stereo vision system, which simulates the panoramic camera system of a lunar rover. Constraints on the placement and parameter choice of the stereo camera pair are proposed based on the science objectives of the Chang'e-II mission. These constraints are then applied to our binocular stereo vision system, and its localization precision is analyzed. Simulation and experimental results confirm the proposed constraints and the analysis of localization precision.

  2. A stereo model based upon mechanisms of human binocular vision

    NASA Technical Reports Server (NTRS)

    Griswold, N. C.; Yeh, C. P.

    1986-01-01

    A model for stereo vision, which is based on the human-binocular vision system, is proposed. Data collected from studies of neurophysiology of the human binocular system are discussed. An algorithm for the implementation of this stereo vision model is derived. The algorithm is tested on computer-generated and real scene images. Examples of a computer-generated image and a grey-level image are presented. It is noted that the proposed method is computationally efficient for depth perception, and the results indicate accuracies that are noise tolerant.

  3. Static stereo vision depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, D. B.; Von Sydow, M.

    1988-01-01

    A major problem in high-precision teleoperation is the high-resolution presentation of depth information. Stereo television has so far proved to be only a partial solution, due to an inherent trade-off among depth resolution, depth distortion and the alignment of the stereo image pair. Converged cameras can guarantee image alignment but suffer significant depth distortion when configured for high depth resolution. Moving the stereo camera rig to scan the work space further distorts depth. The 'dynamic' (camera-motion induced) depth distortion problem was solved by Diner and Von Sydow (1987), who have quantified the 'static' (camera-configuration induced) depth distortion. In this paper, a stereo image presentation technique which yields aligned images, high depth resolution and low depth distortion is demonstrated, thus solving the trade-off problem.

  4. Passive Night Vision Sensor Comparison for Unmanned Ground Vehicle Stereo Vision Navigation

    NASA Technical Reports Server (NTRS)

    Owens, Ken; Matthies, Larry

    2000-01-01

    One goal of the "Demo III" unmanned ground vehicle program is to enable autonomous nighttime navigation at speeds of up to 10 m.p.h. To perform obstacle detection at night with stereo vision will require night vision cameras that produce adequate image quality for the driving speeds, vehicle dynamics, obstacle sizes, and scene conditions that will be encountered. This paper analyzes the suitability of four classes of night vision cameras (3-5 micrometer cooled FLIR, 8-12 micrometer cooled FLIR, 8-12 micrometer uncooled FLIR, and image intensifiers) for night stereo vision, using criteria based on stereo matching quality, image signal to noise ratio, motion blur and synchronization capability. We find that only cooled FLIRs will enable stereo vision performance that meets the goals of the Demo III program for nighttime autonomous mobility.

  5. The advantages of stereo vision in a face recognition system

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2014-06-01

Humans can recognize a face with binocular vision, while computers typically use a single face image. It is known that the performance of face recognition (by a computer) can be improved using the score fusion of multimodal images and multiple algorithms. A question is: Can we apply stereo vision to a face recognition system? We know that human binocular vision has many advantages such as stereopsis (3D vision), binocular summation, and singleness of vision including fusion of binocular images (cyclopean image). For face recognition, a 3D face or 3D facial features are typically computed from a pair of stereo images. In human visual processes, the binocular summation and singleness of vision are similar to image fusion processes. In this paper, we propose an advanced face recognition system with stereo imaging capability, which comprises two 2-in-1 multispectral (visible and thermal) cameras and three recognition algorithms (circular Gaussian filter, face pattern byte, and linear discriminant analysis [LDA]). Specifically, we present and compare stereo fusion at three levels (images, features, and scores) by using stereo images (from left camera and right camera). Image fusion is achieved with three methods (Laplacian pyramid, wavelet transform, average); feature fusion is done with three logical operations (AND, OR, XOR); and score fusion is implemented with four classifiers (LDA, k-nearest neighbor, support vector machine, binomial logical regression). The system performance is measured by probability of correct classification (PCC) rate (reported as accuracy rate in this paper) and false accept rate (FAR). The proposed approaches were validated with a multispectral stereo face dataset from 105 subjects. Experimental results show that any type of stereo fusion can improve the PCC while reducing the FAR. It seems that stereo image/feature fusion is superior to stereo score fusion in terms of recognition performance. Further score fusion after image
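
    As one concrete illustration of score-level fusion (a deliberately simple rule; the paper itself fuses scores with LDA, k-nearest neighbor, SVM, and binomial logistic regression), the left- and right-camera match scores can be min-max normalized and averaged:

```python
import numpy as np

def fuse_scores(scores_left, scores_right):
    # Min-max normalize each camera's gallery match scores, then average.
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min())
    return 0.5 * (norm(scores_left) + norm(scores_right))

# Hypothetical match scores against a three-subject gallery.
fused = fuse_scores([0.0, 5.0, 10.0], [10.0, 0.0, 5.0])
best_match = int(np.argmax(fused))
```

    Normalizing before averaging keeps one camera's score range from dominating the other's, which is the usual motivation for trained fusers such as the four classifiers the paper evaluates.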

  6. Continuous motion using task-directed stereo vision

    NASA Technical Reports Server (NTRS)

    Gat, Erann; Loch, John L.

    1992-01-01

    The performance of autonomous mobile robots performing complex navigation tasks can be dramatically improved by directing expensive sensing and planning in service of the task. The task-direction algorithms can be quite simple. In this paper we describe a simple task-directed vision system which has been implemented on a real outdoor robot which navigates using stereo vision. While the performance of this particular robot was improved by task-directed vision, the performance of task-directed vision in general is influenced in complex ways by many factors. We briefly discuss some of these, and present some initial simulated results.

  7. Self-supervised learning in cooperative stereo vision correspondence.

    PubMed

    Decoux, B

    1997-02-01

This paper presents a neural network model of stereoscopic vision, in which a process of fusion seeks the correspondence between points of stereo inputs. Stereo fusion is obtained after a self-supervised learning phase, so called because the learning rule is a supervised-learning rule in which the supervisory information is autonomously extracted from the visual inputs by the model. This supervisory information arises from a global property of the potential matches between the points. The proposed neural network, which is of the cooperative type, and the learning procedure are tested with random-dot stereograms (RDS) and feature points extracted from real-world images. Those feature points are extracted by a technique based on the use of sigma-pi units. The matching performance and the generalization ability of the model are quantified. The relationship between what the network has learned and the constraints used in previous cooperative models of stereo vision is discussed. PMID:9228582
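
    The cooperative dynamics such models build on can be sketched with a classic Marr-Poggio-style iteration (a generic stand-in for the cooperative family, not Decoux's learned network): candidate matches in a (row, column, disparity) array receive support from same-disparity neighbours and inhibition from rival disparities at the same position. The weights and threshold below are arbitrary assumptions:

```python
import numpy as np

def cooperative_iterate(C, excit=2.0, inhib=1.0, theta=0.5, iters=5):
    # C: (H, W, D) binary array of candidate matches.
    C = C.astype(float)
    for _ in range(iters):
        support = np.zeros_like(C)
        for dy in (-1, 0, 1):              # 8-neighbour support, same disparity
            for dx in (-1, 0, 1):
                if dy or dx:
                    support += np.roll(np.roll(C, dy, axis=0), dx, axis=1)
        rivals = C.sum(axis=2, keepdims=True) - C   # matches at other disparities
        C = ((excit / 8.0) * support - inhib * rivals + C > theta).astype(float)
    return C

C0 = np.zeros((8, 8, 3))
C0[:, :, 1] = 1.0        # a coherent fronto-parallel surface at disparity 1
C0[3, 3, 0] = 1.0        # one spurious match
C = cooperative_iterate(C0)
```

    The spurious match, unsupported by its neighbours and inhibited by the true surface, is suppressed; the point of a learned variant is to acquire such continuity and uniqueness constraints from the data rather than hand-set weights.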

  8. Extracting depth by binocular stereo in a robot vision system

    SciTech Connect

    Marapane, S.B.; Trivedi, M.M.

    1988-01-01

A new generation of robotic systems will operate in complex, unstructured environments utilizing sophisticated sensory mechanisms. Vision and range will be two of the most important sensory modalities such systems will utilize to sense their operating environment. Measurement of depth is critical for the success of many robotic tasks, such as object recognition and location, obstacle avoidance and navigation, and object inspection. In this paper we consider the development of a binocular stereo technique for extracting depth information in a robot vision system for inspection and manipulation tasks. The ability to produce precise depth measurements over a wide range of distances and the passivity of the approach make binocular stereo techniques attractive and appropriate for range finding in a robotic environment. This paper describes work in progress towards the development of a region-based binocular stereo technique for a robot vision system designed for inspection and manipulation, and presents preliminary experiments designed to evaluate the performance of the approach. Results of these studies show promise for the region-based stereo matching approach. 16 refs., 1 fig.

  9. Problem-oriented stereo vision quality evaluation complex

    NASA Astrophysics Data System (ADS)

    Sidorchuk, D.; Gusamutdinova, N.; Konovalenko, I.; Ershov, E.

    2015-12-01

We describe an original low-cost hardware setup for efficient testing of stereo vision algorithms. The method uses a combination of a special hardware setup and a mathematical model, and it is easy to construct and precise in the applications of interest to us. For a known scene we derive its analytical representation, called the virtual scene. Using a four-point correspondence between the scene and the virtual one, we compute the extrinsic camera parameters and project the virtual scene onto the image plane, which serves as the ground truth for the depth map. Another result presented in this paper is a new depth map quality metric. Its main purpose is to tune stereo algorithms for a particular problem, e.g. obstacle avoidance.
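
    Common ingredients of such a metric, the bad-pixel rate at a threshold and the RMS error against the projected ground truth, can be sketched as follows (a generic illustration; the paper's metric is tuned to a specific task such as obstacle avoidance):

```python
import numpy as np

def depth_map_quality(est, gt, tau=1.0):
    # Returns (bad-pixel rate at threshold tau, RMS error),
    # computed over pixels with finite ground-truth depth.
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    valid = np.isfinite(gt)
    err = np.abs(est - gt)[valid]
    return float(np.mean(err > tau)), float(np.sqrt(np.mean(err ** 2)))

gt = np.zeros((4, 4))
est = np.zeros((4, 4))
est[0, 0] = 2.0                      # one gross outlier among 16 pixels
bad, rmse = depth_map_quality(est, gt)
```

    A task-oriented metric would additionally weight errors by where they matter, e.g. penalizing missed near-field obstacles more heavily than distant background error.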

  10. Stereo vision for planetary rovers - Stochastic modeling to near real-time implementation

    NASA Technical Reports Server (NTRS)

    Matthies, Larry

    1991-01-01

    JPL has achieved the first autonomous cross-country robotic traverses to use stereo vision, with all computing onboard the vehicle. This paper describes the stereo vision system, including the underlying statistical model and the details of the implementation. It is argued that the overall approach provides a unifying paradigm for practical domain-independent stereo ranging.

  11. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

A virtual-reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, flight navigation information is linked to the database. The flight approach area view can be displayed dynamically according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew can get a vivid 3D view of the approach area of the flight destination. Used in pilots' preflight preparation, the system gives aircrew more vivid information about the approach area. It can improve an aviator's self-confidence before a flight mission and, accordingly, improve flight safety. The system is also useful for validating visual flight procedure designs, and it assists flight procedure design.

  12. Stereo vision based automated grasp planning

    SciTech Connect

    Wilhelmsen, K.; Huber, L.; Silva, D.; Grasz, E.; Cadapan, L.

    1995-02-01

The Department of Energy has a need to treat existing nuclear waste. Hazardous waste stored in old warehouses needs to be sorted and treated to meet environmental regulations. Lawrence Livermore National Laboratory is currently experimenting with automated manipulation of unknown objects for sorting, treating, and detailed inspection. To accomplish these tasks, three existing technologies were expanded to meet the increasing requirements. First, a binocular vision range sensor was combined with a surface modeling system to make virtual images of unknown objects. Then, using the surface model information, stable grasps of objects of unknown shape were planned algorithmically, utilizing a limited set of robotic grippers. This paper is an expansion of previous work and will discuss the grasp planning algorithm.

  13. Object tracking in a stereo and infrared vision system

    NASA Astrophysics Data System (ADS)

    Colantonio, S.; Benvenuti, M.; Di Bono, M. G.; Pieri, G.; Salvetti, O.

    2007-01-01

    In this paper, we deal with the problem of real-time detection, recognition and tracking of moving objects in open and unknown environments using an infrared (IR) and visible vision system. A thermo-camera and two synchronized stereo visible-light cameras are used to acquire multi-source information: three-dimensional data about the target geometry and its thermal information are combined to improve the robustness of the tracking procedure. First, target detection is performed by extracting the target's characteristic features from the images and storing the computed parameters in a specific database; second, the tracking task is carried out using two different computational approaches. A Hierarchical Artificial Neural Network (HANN) is used during active tracking for recognition of the actual target, while, when partial occlusions or masking occur, a database retrieval method is used to support the search for the correct target. A prototype has been tested on case studies regarding the identification and tracking of animals moving at night in an open environment, and the surveillance of known scenes for unauthorized access control.

  14. A Portable Stereo Vision System for Whole Body Surface Imaging

    PubMed Central

    Yu, Wurong; Xu, Bugao

    2009-01-01

    This paper presents a whole body surface imaging system based on stereo vision technology. We have adopted a compact and economical configuration which involves only four stereo units to image the frontal and rear sides of the body. The success of the system depends on a stereo matching process that can effectively segment the body from the background in addition to recovering sufficient geometric details. For this purpose, we have developed a novel sub-pixel, dense stereo matching algorithm which includes two major phases. In the first phase, the foreground is accurately segmented with the help of a predefined virtual interface in the disparity space image, and a coarse disparity map is generated with block matching. In the second phase, local least squares matching is performed in combination with global optimization within a regularization framework, so as to ensure both accuracy and reliability. Our experimental results show that the system can realistically capture smooth and natural whole body shapes with high accuracy. PMID:20161620
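    The coarse block-matching phase described in the first stereo matching stage can be sketched as a sum-of-absolute-differences (SAD) search over candidate disparities. This is a generic minimal illustration, not the authors' implementation; the function name, window size and search range are hypothetical:

```python
import numpy as np

def block_match(left, right, max_disp, block=5):
    # brute-force SAD block matching: for each left-image pixel, test
    # every disparity d and keep the one with the lowest window cost
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

In a full pipeline such a coarse map would then be refined at sub-pixel precision, e.g. by the local least squares matching the abstract describes.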

  15. Dynamic Trajectory Extraction from Stereo Vision Using Fuzzy Clustering

    NASA Astrophysics Data System (ADS)

    Onishi, Masaki; Yoda, Ikushi

    In recent years, many human-tracking methods have been proposed in order to analyze human dynamic trajectories. These are general technologies applicable to various fields, such as customer purchase analysis in a shopping environment and safety control at a (railroad) crossing. In this paper, we present a new approach for tracking human positions from stereo images. We use a framework of two-step clustering, with the k-means method and fuzzy clustering, to detect human regions. In the initial clustering, the k-means method quickly forms intermediate clusters from object features extracted by stereo vision. In the final clustering, the fuzzy c-means method clusters these intermediate clusters into human regions based on their attributes. Our proposed method clusters correctly, by expressing ambiguity through fuzzy clustering, even when many people are close to each other. The validity of our technique was evaluated in an experiment extracting the trajectories of doctors and nurses in a hospital emergency room.
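    The two-step scheme (a fast k-means pass to form intermediate clusters, then fuzzy memberships to express ambiguity) can be sketched as follows. This is a minimal illustration with hypothetical names and deterministic seeding, not the authors' code:

```python
import numpy as np

def kmeans(points, k, iters=20):
    # plain k-means: the fast first step that forms intermediate clusters;
    # centers are seeded with evenly spaced input points for determinism
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

def fuzzy_memberships(points, centers, m=2.0):
    # fuzzy c-means membership of each point in each cluster, for fixed
    # centers; rows sum to 1, so ambiguity between nearby people is kept
    d = np.sqrt(((points[:, None] - centers[None]) ** 2).sum(-1)) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)
```

In the paper's setting the "points" would be features extracted from the stereo data, and the fuzzy step would merge the intermediate clusters rather than raw points; the membership formula is the standard fuzzy c-means one.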

  16. GPU-based real-time trinocular stereo vision

    NASA Astrophysics Data System (ADS)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereo vision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereo vision with a three-camera array has been shown to provide higher accuracy in stereo matching, which benefits applications like distance finding, object recognition, and detection. This paper presents a real-time stereo vision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereo camera array. The algorithm applies a winner-take-all method to fuse disparities computed in different directions, following various image processing steps, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed on the GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.

  17. Three-dimensional motion estimation using genetic algorithms from image sequence in an active stereo vision system

    NASA Astrophysics Data System (ADS)

    Dipanda, Albert; Ajot, Jerome; Woo, Sanghyuk

    2003-06-01

    This paper proposes a method for estimating 3D rigid motion parameters from an image sequence of a moving object. The 3D surface measurement is achieved using an active stereovision system composed of a camera and a light projector, which illuminates objects to be analyzed by a pyramid-shaped laser beam. By associating the laser rays and the spots in the 2D image, the 3D points corresponding to these spots are reconstructed. Each image of the sequence provides a set of 3D points, which is modeled by a B-spline surface. Therefore, estimating the motion between two images of the sequence boils down to matching two B-spline surfaces. We consider the matching environment as an optimization problem and find the optimal solution using Genetic Algorithms. A chromosome is encoded by concatenating six binary coded parameters, the three angles of rotation and the x-axis, y-axis and z-axis translations. We have defined an original fitness function to calculate the similarity measure between two surfaces. The matching process is performed iteratively: the number of points to be matched grows as the process advances and results are refined until convergence. Experimental results with a real image sequence are presented to show the effectiveness of the method.
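    The chromosome layout the abstract describes (six binary-coded parameters concatenated: three rotation angles and the x-, y- and z-axis translations) might be encoded and decoded like this. The bit width, parameter ranges and function names are hypothetical:

```python
BITS = 10  # hypothetical resolution per parameter

def encode(params, lows, highs, bits=BITS):
    # quantize each of the six motion parameters to `bits` bits and
    # concatenate the binary strings into one chromosome (list of 0/1)
    chrom = []
    for p, lo, hi in zip(params, lows, highs):
        q = int(round((p - lo) / (hi - lo) * (2 ** bits - 1)))
        chrom.extend(int(b) for b in format(q, f"0{bits}b"))
    return chrom

def decode(chrom, lows, highs, bits=BITS):
    # inverse mapping: slice the chromosome and rescale to each range
    params = []
    for i, (lo, hi) in enumerate(zip(lows, highs)):
        q = int("".join(str(b) for b in chrom[i * bits:(i + 1) * bits]), 2)
        params.append(lo + q / (2 ** bits - 1) * (hi - lo))
    return params
```

A GA would then apply crossover and mutation to such chromosomes and score each decoded pose with the surface-similarity fitness function the paper defines.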

  18. Stereo vision controlled bilateral telerobotic remote assembly station

    NASA Astrophysics Data System (ADS)

    Dewitt, Robert L.

    1992-05-01

    The objective of this project was to develop a bilateral six degree-of-freedom telerobotic component assembly station utilizing remote stereo vision assisted control. The component assembly station consists of two Unimation Puma 260 robot arms and their associated controls, two Panasonic miniature camera systems, and an air compressor. The operator controls the assembly station remotely via kinematically similar master controllers. A Zenith 386 personal computer acts as an interface and system control between the human operator's controls and the Val II computer controlling the arms. A series of tasks, ranging in complexity and difficulty, was utilized to assess and demonstrate the performance of the complete system.

  19. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system, which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.

  20. Vision-based stereo ranging as an optimal control problem

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Sridhar, B.; Chatterji, G. B.

    1992-01-01

    The recent interest in the use of machine vision for flight vehicle guidance is motivated by the need to automate the nap-of-the-earth flight regime of helicopters. The vision-based stereo ranging problem is cast as an optimal control problem in this paper. A quadratic performance index, consisting of the integral of the error between observed image irradiances and those predicted by a Pade approximation of the correspondence hypothesis, is used to define an optimization problem. The necessary conditions for optimality yield a set of linear two-point boundary-value problems. These two-point boundary-value problems are solved in feedback form using a version of the backward sweep method. Application of the ranging algorithm is illustrated using a laboratory image pair.
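    As a hedged sketch (the notation is assumed, not taken from the paper), a quadratic performance index of the kind described above has the form of an integrated squared error between the observed left-image irradiance and the right-image irradiance predicted under the correspondence hypothesis:

```latex
J(r) = \frac{1}{2} \int \left[ I_L(x) - \hat{I}_R\!\left(x;\, r(x)\right) \right]^2 \, dx
```

    where \(I_L\) and \(I_R\) are the observed image irradiances, \(r(x)\) is the range profile to be estimated, and \(\hat{I}_R\) predicts the right image through a Pade approximation of the disparity induced by \(r\). Minimizing \(J\) then yields, via the necessary conditions for optimality, the linear two-point boundary-value problems mentioned in the abstract.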

  1. Trinocular stereo vision method based on mesh candidates

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Xu, Gang; Li, Haibin

    2010-10-01

    One of the most interesting goals of machine vision is 3D structure recovery of scenes. This recovery has many applications, such as object recognition, reverse engineering, automatic cartography, and autonomous robot navigation. To meet the demand of measuring complex prototypes in reverse engineering, a trinocular stereo vision method based on mesh candidates is proposed. After calibration of the cameras, the joint field of view can be defined in the world coordinate system. A mesh grid is established along the coordinate axes, and the mesh nodes are considered potential depth data of the object surface. By similarity measures of the correspondence pairs projected from a given group of candidates, the depth data can be obtained readily. With mesh node optimization, the interval between neighboring nodes in the depth direction can be designed reasonably. Potential ambiguity in correspondence matching can be eliminated efficiently with the constraint of a third camera. The cameras are treated as two independent pairs, left-right and left-centre. Because the correlation values may have multiple peaks, the binocular method alone may not satisfy the measurement accuracy, so the second image pair is involved if the confidence coefficient is less than a preset threshold. The depth is determined by the highest sum of correlations over both camera pairs. The measurement system was simulated using 3DS MAX and Matlab software to reconstruct the object surface. The experimental results showed that the trinocular vision system has good performance in depth measurement.
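    The candidate-selection rule sketched above (consult the second camera pair only when the best single-pair confidence is low, then pick the candidate with the highest summed correlation) can be written compactly. The threshold value and function name are hypothetical:

```python
import numpy as np

def best_candidate(scores_lr, scores_lc, threshold=0.9):
    # scores_lr / scores_lc: correlation score per depth candidate for the
    # left-right and left-centre pairs; if the left-right pair is already
    # confident, use it alone, otherwise disambiguate with the summed score
    if scores_lr.max() >= threshold:
        return int(np.argmax(scores_lr))
    return int(np.argmax(scores_lr + scores_lc))
```

This mirrors how a third camera resolves the multiple correlation peaks the abstract mentions: a candidate that peaks in only one pair loses to one supported by both.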

  2. MARVEL: A system that recognizes world locations with stereo vision

    SciTech Connect

    Braunegg, D.J. (Artificial Intelligence Lab.)

    1993-06-01

    MARVEL is a system that supports autonomous navigation by building and maintaining its own models of world locations and using these models and stereo vision input to recognize its location in the world and its position and orientation within that location. The system emphasizes the use of simple, easily derivable features for recognition, whose aggregate identifies a location, instead of complex features that also require recognition. MARVEL is designed to be robust with respect to input errors and to respond to a gradually changing world by updating its world location models. In over 1,000 recognition tests using real-world data, MARVEL yielded a false negative rate under 10% with zero false positives.

  3. Visual tracking in stereo. [by computer vision system

    NASA Technical Reports Server (NTRS)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
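    The update step described above (the 2D prediction error multiplied by a generalized inverse Jacobian to correct the internal 3D model) can be sketched as follows, with hypothetical names and a toy linear measurement model:

```python
import numpy as np

def update_state(state, predicted_2d, observed_2d, J):
    # one tracker iteration: the image-plane error is mapped back into
    # state space through the Moore-Penrose pseudo-inverse of the Jacobian
    error_2d = observed_2d - predicted_2d
    return state + np.linalg.pinv(J) @ error_2d
```

In the actual system the state holds location, orientation and velocity, the measurements are stereo image features, and J is the Jacobian of the projection; the procedure repeats each frame to keep the model current.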

  4. Movement Simulation for Wheeled Mobile Robot Based on Stereo Vision

    NASA Astrophysics Data System (ADS)

    Gao, Hongwei; Chen, Fuguo; Li, Dong; Yu, Yang

    For planetary exploration applications, the kinematic model of a six-wheel rocker-bogie robot was studied in this paper, and a corresponding robot movement simulation system based on stereo vision and virtual reality was developed. The system provides motion parameters of the virtual robot to the real robot by means of off-line teaching, which ensures the robot's safety. The visual orientation theory and the realization of the simulation system based on OpenGL are introduced in detail. Simulations were carried out on both synthetic and real terrain, and the results show that the system possesses good interactive characteristics and will provide key techniques for virtual navigation and teleoperation of planetary exploration robots.

  5. Shape determination for large flexible satellites via stereo vision

    NASA Astrophysics Data System (ADS)

    Tse, D. N. C.; Heppler, G. R.

    1992-02-01

    The use of stereo vision to determine the deformed shape of an elastic plate is investigated. The quantization error associated with using discrete charge coupled device camera images for this purpose is examined. An upper bound on the error is derived in terms of the stationary configuration parameters. An expression for the average (root mean square) error is also developed. The issue of interpolating the shape of the plate through erroneous data is addressed. The vibratory mode shapes are used as interpolation functions and two cases are considered: the case when the number of interpolation points (targets) is the same as the number of modes used in the interpolation, and the case when the number of targets exceeds the number of the modes used. Error criteria are established for both cases and they provide a means of establishing the best fit to the measured data.

  6. A stereo vision-based obstacle detection system in vehicles

    NASA Astrophysics Data System (ADS)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. The system utilizes feature matching, the epipolar constraint, and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle, and a vehicle cutting into the lane, and the position parameters of the obstacles and leading vehicles can then be obtained. The proposed obstacle detection system was implemented on a passenger car and its performance verified experimentally.
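    The epipolar constraint used to prune candidate feature pairs can be illustrated with the fundamental matrix: for a correct correspondence, x_r^T F x_l = 0 in homogeneous pixel coordinates. A minimal sketch (names hypothetical; the rectified-stereo F used in the test is the standard one, not the paper's calibration):

```python
import numpy as np

def epipolar_residual(F, x_left, x_right):
    # residual of the epipolar constraint x_r^T F x_l for a candidate
    # pair; near-zero residuals indicate geometrically consistent matches
    xl = np.append(np.asarray(x_left, dtype=float), 1.0)
    xr = np.append(np.asarray(x_right, dtype=float), 1.0)
    return float(xr @ F @ xl)
```

In a matcher, candidate pairs whose residual exceeds a tolerance would be rejected before the feature aggregation step.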

  7. Stereo Vision Tracking of Multiple Objects in Complex Indoor Environments

    PubMed Central

    Marrón-Romera, Marta; García, Juan C.; Sotelo, Miguel A.; Pizarro, Daniel; Mazo, Manuel; Cañas, José M.; Losada, Cristina; Marcos, Álvaro

    2010-01-01

    This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information for each object in the robot's environment; it then achieves a classification between building elements (ceiling, walls, columns and so on) and the rest of the items in the robot's surroundings. All objects in the robot's surroundings, both dynamic and static, are considered obstacles, except the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of the speed and position of detected obstacles. Performance of the final system has been tested against state-of-the-art proposals; the test results validate the authors' proposal. The designed algorithms and procedures provide a solution for applications where similar multimodal data structures are found. PMID:22163385

  8. A stereo-vision hazard-detection algorithm to increase planetary lander autonomy

    NASA Astrophysics Data System (ADS)

    Woicke, Svenja; Mooij, Erwin

    2016-05-01

    For future landings on any celestial body, increasing the lander autonomy as well as decreasing risk are primary objectives. Both risk reduction and an increase in autonomy can be achieved by including hazard detection and avoidance in the guidance, navigation, and control loop. One of the main challenges in hazard detection and avoidance is the reconstruction of accurate elevation models, as well as slope and roughness maps. Multiple methods for acquiring the inputs for hazard maps are available; the main distinction is between active and passive methods. Passive methods (cameras) have budgetary advantages compared to active sensors (radar, light detection and ranging), but it is necessary to prove that these methods deliver sufficiently good maps. Therefore, this paper discusses hazard detection using stereo vision. To facilitate a successful landing, no more than 1% missed detections (hazards that are not identified) are allowed. Based on a sensitivity analysis, it was found that a stereo set-up with a baseline of ≤ 2 m is feasible at altitudes of ≤ 200 m, with less than 1% false positives. It was thus shown that stereo-based hazard detection is an effective means to decrease the landing risk and increase the lander autonomy. In conclusion, the proposed algorithm is a promising candidate for future landers.
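    Deriving slope and roughness maps from a reconstructed elevation model and thresholding them into a hazard map might look like the sketch below. The thresholds and the roughness proxy (deviation from a local 3x3 mean) are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def hazard_map(elevation, cell=1.0, max_slope_deg=10.0, max_rough=0.1):
    # slope from the elevation gradient, in degrees
    gy, gx = np.gradient(elevation, cell)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    # roughness proxy: absolute deviation from a 3x3 local mean surface
    pad = np.pad(elevation, 1, mode="edge")
    h, w = elevation.shape
    local_mean = sum(pad[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)) / 9.0
    rough = np.abs(elevation - local_mean)
    # a cell is hazardous if either criterion is violated
    return (slope > max_slope_deg) | (rough > max_rough)
```

A hazard-avoidance loop would then steer the lander toward a connected region of safe (False) cells large enough for the footprint.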

  9. The study of calibration and epipolar geometry for the stereo vision system built by fisheye lenses

    NASA Astrophysics Data System (ADS)

    Zhang, Baofeng; Lu, Chunfang; Röning, Juha; Feng, Weijia

    2015-01-01

    A fish-eye lens is a kind of short-focal-length (f = 6~16 mm) camera whose field of view (FOV) approaches or even exceeds 180×180 degrees. The literature shows that a multiple-view geometry system built with fish-eye lenses obtains a larger stereo field than a traditional stereo vision system based on a pair of perspective projection images. Since a fish-eye camera usually has a wider-than-hemispherical FOV, most image processing approaches based on the pinhole camera model for conventional stereo vision systems are not suitable for this category of stereo vision built with fish-eye lenses. This paper focuses on the calibration and epipolar rectification method for a novel machine vision system set up with four fish-eye lenses, called the Special Stereo Vision System (SSVS). The characteristic of SSVS is that it can produce 3D coordinate information over the whole global observation space while simultaneously acquiring a 360º×360º panoramic image with no blind area, using a single vision device with one static shot. Parameter calibration and epipolar rectification are the basis for SSVS to realize 3D reconstruction and panoramic image generation.

  10. Non-probabilistic cellular automata-enhanced stereo vision simultaneous localization and mapping

    NASA Astrophysics Data System (ADS)

    Nalpantidis, Lazaros; Sirakoulis, Georgios Ch; Gasteratos, Antonios

    2011-11-01

    In this paper, a visual non-probabilistic simultaneous localization and mapping (SLAM) algorithm suitable for area measurement applications is proposed. The algorithm uses stereo vision images as its only input and processes them by calculating the depth of the scenery, detecting occupied areas and progressively building a map of the environment. The stereo vision-based SLAM algorithm embodies a stereo correspondence algorithm that is tolerant of illumination differences, the robust scale- and rotation-invariant speeded-up robust features (SURF) method for feature detection and matching, a computationally effective v-disparity image calculation scheme, a novel map-merging module, as well as a sophisticated cellular automata-based enhancement stage. A moving robot equipped with a stereo camera has been used to gather image sequences, and the system has autonomously mapped and measured two different indoor areas.
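    The v-disparity image mentioned above can be computed cheaply as a per-row histogram of the disparity map: row r of the output counts how often each disparity value occurs in row r of the input. A minimal sketch (names hypothetical):

```python
import numpy as np

def v_disparity(disp, max_disp):
    # per-row disparity histogram; in a v-disparity image the ground
    # plane projects to a slanted line and obstacles to vertical segments
    h = disp.shape[0]
    out = np.zeros((h, max_disp + 1), dtype=np.int32)
    for r in range(h):
        vals, counts = np.unique(disp[r], return_counts=True)
        out[r, vals] = counts
    return out
```

Detecting the dominant line in this image (e.g. with a Hough transform) is the usual way to separate the ground plane from occupied areas.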

  11. Application of Stereo Vision to the Reconnection Scaling Experiment

    SciTech Connect

    Klarenbeek, Johnny; Sears, Jason A.; Gao, Kevin W.; Intrator, Thomas P.; Weber, Thomas

    2012-08-14

    The measurement and simulation of the three-dimensional structure of magnetic reconnection in astrophysical and lab plasmas is a challenging problem. At Los Alamos National Laboratory we use the Reconnection Scaling Experiment (RSX) to model 3D magnetohydrodynamic (MHD) relaxation of plasma filled tubes. These magnetic flux tubes are called flux ropes. In RSX, the 3D structure of the flux ropes is explored with insertable probes. Stereo triangulation can be used to compute the 3D position of a probe from point correspondences in images from two calibrated cameras. While common applications of stereo triangulation include 3D scene reconstruction and robotics navigation, we will investigate the novel application of stereo triangulation in plasma physics to aid reconstruction of 3D data for RSX plasmas. Several challenges will be explored and addressed, such as minimizing 3D reconstruction errors in stereo camera systems and dealing with point correspondence problems.

  12. The wavelet transform and the suppression theory of binocular vision for stereo image compression

    SciTech Connect

    Reynolds, W.D. Jr; Kenyon, R.V.

    1996-08-01

    In this paper a method for the compression of stereo images is presented. The proposed scheme is a frequency-domain approach based on the suppression theory of binocular vision. By using the information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, whereby the subbands convey the necessary frequency-domain information.
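    One level of the 2D wavelet decomposition into LL/LH/HL/HH subbands can be sketched with the simple Haar basis as a stand-in (the abstract does not specify which wavelet is used, so this is purely illustrative):

```python
import numpy as np

def haar2d(img):
    # one Haar analysis level: average/difference along columns, then
    # along rows, yielding the four subbands LL, LH, HL, HH
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0  # horizontal average
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0  # horizontal difference
    ll = (a[0::2] + a[1::2]) / 2.0
    lh = (a[0::2] - a[1::2]) / 2.0
    hl = (d[0::2] + d[1::2]) / 2.0
    hh = (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh
```

In a suppression-theory codec, high-frequency subbands of one view can be coded coarsely because the binocular percept is dominated by the sharper view.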

  13. Research on the application of single camera stereo vision sensor in three-dimensional point measurement

    NASA Astrophysics Data System (ADS)

    Feng, Xiao-feng; Pan, Di-fu

    2015-09-01

    A single-camera stereo vision sensor model based on planar mirror imaging is proposed for measuring a three-dimensional point. The model consists of a CCD camera and a planar mirror. Using the planar mirror reflection of a scene, a picture with parallax is obtained by shooting the target object and its virtual image. This is equivalent to shooting the target object from different angles with the camera and its virtual camera in the planar mirror, so the sensor has the function of binocular stereo vision. In addition, the measurement theory of the three-dimensional point is discussed. The mathematical model of the single-camera stereo vision sensor is established, the intrinsic and extrinsic parameters are calibrated, and the corresponding experiments have been carried out. The experimental results show that the measuring method is convenient and effective; it also has the advantages of simple structure and convenient adjustment, and it is especially suitable for short-distance, high-precision measurement.
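    The virtual camera in such a mirror set-up is the reflection of the real camera center across the mirror plane, so the effective stereo baseline is twice the perpendicular camera-to-mirror distance. An illustrative sketch (names and geometry conventions assumed, not taken from the paper):

```python
import numpy as np

def virtual_camera_center(cam_center, mirror_point, mirror_normal):
    # reflect the camera center across the mirror plane (given by a point
    # on the plane and its unit normal) to obtain the virtual camera
    n = np.asarray(mirror_normal, dtype=float)
    n = n / np.linalg.norm(n)
    c = np.asarray(cam_center, dtype=float)
    d = np.dot(c - np.asarray(mirror_point, dtype=float), n)
    return c - 2.0 * d * n
```

The real/virtual pair can then be treated exactly like a calibrated binocular rig for triangulation.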

  14. Research on the feature set construction method for spherical stereo vision

    NASA Astrophysics Data System (ADS)

    Zhu, Junchao; Wan, Li; Röning, Juha; Feng, Weijia

    2015-01-01

    Spherical stereo vision is a kind of stereo vision system built with fish-eye lenses, and it requires stereo algorithms that conform to the spherical model. Epipolar geometry describes the relationship between the two imaging planes of the cameras in a stereo vision system based on the perspective projection model. However, an epipolar locus in an uncorrected fish-eye image is not a line but an arc intersecting at the poles: a polar curve. In this paper, the theory of nonlinear epipolar geometry is explored and a method of nonlinear epipolar rectification is proposed to eliminate the vertical parallax between two fish-eye images. The Maximally Stable Extremal Region (MSER) detector uses grayscale as the independent variable and takes local extrema of the area variation as detection results. It has been demonstrated in the literature that MSER depends only on the gray-level variations of an image, not on local structural characteristics or image resolution. Here, MSER is combined with the nonlinear epipolar rectification method proposed in this paper: the intersection of the rectified epipolar curve and the corresponding MSER region is taken as the feature set of spherical stereo vision. Experiments show that this study achieved the expected results.

  15. CCMC Support of Active Missions: STEREO, THEMIS

    NASA Technical Reports Server (NTRS)

    Szabo, A.

    2007-01-01

    The Community Coordinated Modeling Center has been providing custom support for current active missions, such as STEREO and THEMIS. Global heliospheric and magnetospheric MHD model results, presented along the actual spacecraft trajectories, are invaluable for rapid contextualization of the observations. User feedback will be provided from the point of view of a mission scientist, with suggestions for future improvements.

  16. Indoor and outdoor depth imaging of leaves with time-of-flight and stereo vision sensors: Analysis and comparison

    NASA Astrophysics Data System (ADS)

    Kazmi, Wajahat; Foix, Sergi; Alenyà, Guillem; Andersen, Hans Jørgen

    2014-02-01

    In this article we analyze the response of Time-of-Flight (ToF) cameras (active sensors) for close-range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. ToF cameras are sensitive to ambient light and have low resolution but deliver accurate, high-frame-rate depth data under suitable conditions. We introduce metrics for performance evaluation over a small region of interest. Based on these metrics, we analyze and compare depth imaging of a leaf under indoor (room) and outdoor (shadow and sunlight) conditions by varying the exposure times of the sensors. The performance of three different ToF cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo-correspondence algorithms (local correlation and graph cuts). PMD CamCube has the best cancelation of sunlight, followed by CamBoard, while SwissRanger SR4000 performs poorly under sunlight. Stereo vision is comparatively more robust to ambient illumination and provides high-resolution depth data but is constrained by the texture of the object and by computational efficiency. The graph-cut-based stereo correspondence algorithm can better retrieve the shape of the leaves but is computationally much more expensive than local correlation. Finally, we propose a method to increase the dynamic range of ToF cameras for a scene involving both shadow and sunlight exposures at the same time, by taking advantage of camera flags (PMD) or the confidence matrix (SwissRanger).

  17. Measurement error analysis of three dimensional coordinates of tomatoes acquired using the binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Xiang, Rong

    2014-09-01

    This study analyzes the measurement errors of the three-dimensional coordinates of tomatoes obtained by binocular stereo vision, based on three stereo matching methods, centroid-based matching, area-based matching, and combination matching, to improve the localization accuracy of the binocular stereo vision system of tomato harvesting robots. Centroid-based matching was realized through matching of the feature points at the centroids of tomato regions. Area-based matching was realized based on the gray similarity between two neighborhoods of the two pixels to be matched in the stereo images. Combination matching was realized using the rough disparity acquired through centroid-based matching as the center of the dynamic disparity range used in area-based matching. After stereo matching, the three-dimensional coordinates of the tomatoes were acquired using the triangle range finding principle. Test results based on 225 stereo images of 3 tomatoes captured at distances from 300 to 1000 mm showed that the measurement errors of the x coordinates were small and can meet the needs of harvesting robots. However, the measurement biases of the y coordinates and depth values were large, and the measurement variation of the depth values was also large. Therefore, the measurement biases of the y coordinates and depth values, and the measurement variation of the depth values, should be corrected in future research.
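    For a rectified binocular rig, the triangle range finding principle mentioned above reduces to the standard triangulation relation Z = f·B/d. A minimal illustration (the parameter values in the usage are hypothetical, not the paper's rig):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    # rectified-stereo triangulation: depth Z = f * B / d, where f is the
    # focal length in pixels, B the baseline, d the disparity in pixels;
    # Z comes out in the units of the baseline
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px
```

The relation also explains why depth errors grow with distance: a fixed matching error in d produces a depth error proportional to Z squared.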

  18. The Effects of Avatars, Stereo Vision and Display Size on Reaching and Motion Reproduction.

    PubMed

    Camporesi, Carlo; Kallmann, Marcelo

    2016-05-01

    Thanks to recent advances in motion capture devices and stereoscopic consumer displays, animated virtual characters can now realistically interact with users in a variety of applications. We investigate in this paper the effect of avatars, stereo vision and display size on task execution in immersive virtual environments. We report results obtained with three experiments in varied configurations that are commonly used in rehabilitation applications. The first experiment analyzes the accuracy of reaching tasks under different system configurations: with and without an avatar, with and without stereo vision, and employing a 2D desktop monitor versus a large multi-tile visualization display. The second experiment analyzes the use of avatars and user-perspective stereo vision on the ability to perceive and subsequently reproduce motions demonstrated by an autonomous virtual character. The third experiment evaluates the overall user experience with a complete immersive user interface for motion modeling by direct demonstration. Our experiments expose and quantify the benefits of using stereo vision and avatars, and show that the use of avatars improves the quality of produced motions and the resemblance of replicated motions; however, direct interaction in user-perspective leads to tasks executed in less time and to targets more accurately reached. These and additional tradeoffs are important for the effective design of avatar-based training systems. PMID:27045914

  19. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of the 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can meet the requirements of robot binocular stereo vision.
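    The radial and decentering (tangential) distortion terms referred to above follow the Brown-Conrady model that OpenCV also uses. A sketch of the forward model in normalized image coordinates, with the usual coefficients k1, k2 (radial) and p1, p2 (decentering); the numeric values in the test are arbitrary:

```python
def distort(x, y, k1, k2, p1, p2):
    # Brown-Conrady forward distortion in normalized image coordinates:
    # radial terms scale the point by (1 + k1*r^2 + k2*r^4), decentering
    # terms add the tangential displacement
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

Calibration estimates these coefficients (together with the intrinsic matrix) by minimizing the reprojection error over the detected checkerboard corners.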

  20. Stereo-vision-based perception capabilities developed during the Robotics Collaborative Technology Alliances program

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo; Bajracharya, Max; Huertas, Andres; Howard, Andrew; Moghaddam, Baback; Brennan, Shane; Ansar, Adnan; Tang, Benyang; Turmon, Michael; Matthies, Larry

    2010-04-01

    The Robotics Collaborative Technology Alliances (RCTA) program, which ran from 2001 to 2009, was funded by the U.S. Army Research Laboratory and managed by General Dynamics Robotic Systems. The alliance brought together a team of government, industrial, and academic institutions to address the research and development required to enable the deployment of future military unmanned ground vehicle systems ranging in size from man-portables to ground combat vehicles. Under RCTA, three technology areas critical to the development of future autonomous unmanned systems were addressed: advanced perception, intelligent control architectures and tactical behaviors, and human-robot interaction. The Jet Propulsion Laboratory (JPL) participated as a member for the entire program, working on four tasks in the advanced perception technology area: stereo improvements, terrain classification, pedestrian detection in dynamic environments, and long-range terrain classification. Under the stereo task, significant improvements were made to the quality of stereo range data used as a front end to the other three tasks. Under the terrain classification task, a multi-cue water detector was developed that fuses cues from color, texture, and stereo range data, and three standalone water detectors were developed based on sky reflections, object reflections (such as trees), and color variation. In addition, a multi-sensor mud detector was developed that fuses cues from color stereo and polarization sensors. Under the long-range terrain classification task, a classifier was implemented that uses unsupervised and self-supervised learning of traversability to extend the classification of terrain over which the vehicle drives to the far field. Under the pedestrian detection task, stereo vision was used to identify regions of interest in an image, classify those regions based on shape, and track detected pedestrians in three-dimensional world coordinates. To improve the detectability of partially occluded

  1. Lightweight camera head for robotic-based binocular stereo vision: an integrated engineering approach

    NASA Astrophysics Data System (ADS)

    Pretlove, John R. G.; Parker, Graham A.

    1992-03-01

    This paper presents the design and development of a real-time eye-in-hand stereo vision system to aid robot guidance in a manufacturing environment. The stereo vision head comprises a novel camera arrangement with servo-controlled vergence, focus, and aperture that continuously provides high-quality images to a dedicated image processing system and parallel processing array. The stereo head has four degrees of freedom, but it relies on the robot end-effector for all remaining movement. This provides the robot with exploratory sensing abilities, allowing it to undertake a wider variety of less constrained tasks. Unlike other stereo vision research heads, the overriding factor in the Surrey head has been a truly integrated engineering approach in an attempt to solve an extremely complex problem. The head is low cost and low weight, employs state-of-the-art motor technology, is highly controllable, and occupies a small size envelope. Its intended applications include high-accuracy metrology, 3-D path following, object recognition and tracking, parts manipulation, and component inspection for the manufacturing industry.

  2. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    PubMed Central

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM, and Flash, through IP reuse. The system hardware is implemented in a single FPGA chip containing a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes, up to a maximum of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency: with an FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
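The SAD disparity computation that the machine implements in hardware can be sketched in software. The following is a minimal, unoptimized reference implementation (an illustration of the algorithm, not the authors' FPGA design): for each pixel of the rectified left image it slides a square window along the same scanline of the right image and keeps the shift with the lowest sum of absolute differences:

```python
import numpy as np

def sad_disparity(left, right, max_disp, win=2):
    """Dense disparity by SAD block matching on a rectified pair.
    For each left-image pixel, compare a (2*win+1)^2 window against
    windows shifted 0..max_disp pixels leftward in the right image,
    and keep the shift with the lowest SAD cost."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            block = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(block - right[y - win:y + win + 1,
                                          x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

The hardware version pipelines these window comparisons; the winner-take-all `argmin` over candidate shifts is the same in both.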

  3. SAD-based stereo vision machine on a System-on-Programmable-Chip (SoPC).

    PubMed

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM, and Flash, through IP reuse. The system hardware is implemented in a single FPGA chip containing a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes, up to a maximum of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency: with an FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385

  4. Development and modeling of a stereo vision focusing system for a field programmable gate array robot

    NASA Astrophysics Data System (ADS)

    Tickle, Andrew J.; Buckle, James; Grindley, Josef E.; Smith, Jeremy S.

    2010-10-01

    Stereo vision is an arrangement in which an imaging system uses two or more cameras, making it more robust by mimicking the human visual system. By using two inputs, knowledge of the cameras' relative geometry can be exploited to derive depth information from the two views they receive: the 3D coordinates of an object in an observed scene can be computed from the intersection of the two sets of rays. Presented here is the development of a stereo vision system that focuses on an object at the centre of a baseline between two cameras at varying distances. It has been developed primarily for use on a Field Programmable Gate Array (FPGA), but an adaptation of the methodology is also presented for use with a PUMA 560 robotic manipulator with a single camera attachment. The two main vision systems considered here are a fixed baseline with an object moving at varying distances, and a system with a fixed distance and a varying baseline. These two situations provide enough data that the coefficients governing the system's operation can be calibrated automatically with only the baseline value needing to be entered; the system performs all the required calculations for the user for a baseline of any distance. The limits of the system with regard to focusing accuracy are also presented, along with how the PUMA 560 controls its joints for stereo vision and how it moves from one position to another to achieve stereo vision, compared with the two-camera FPGA system. The benefits of such a system for range finding in mobile robotics are discussed, and how this approach compares favourably with laser range finders or ultrasonic echolocation.
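For a rectified, parallel-axis rig, the depth recovered from the intersection of the two sets of rays reduces to the familiar relation Z = f·B/d. A minimal sketch of that relation (illustrative only; not the paper's FPGA implementation, and the parameter names are ours):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by a rectified, parallel-axis stereo pair:
    Z = f * B / d, with f in pixels, B in metres, d in pixels."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_m / disparity_px
```

The relation makes the trade-off in the abstract concrete: widening the baseline B increases the disparity for a given depth, improving range resolution at the cost of a harder matching problem.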

  5. The non-parametric Parzen's window in stereo vision matching.

    PubMed

    Pajares, G; de la Cruz, J

    2002-01-01

    This paper presents an approach to the local stereo vision matching problem using edge segments as features with four attributes. From these attributes we compute a matching probability between pairs of features of the stereo images. A correspondence is considered true when this probability is maximal. We introduce a non-parametric strategy based on Parzen's window (1962) to estimate the probability density function (PDF) that is used to obtain the matching probability. This is the main finding of the paper. A comparative analysis of other recent matching methods is included to show that this finding can be justified theoretically. A generalization of the proposed method is made in order to give guidelines about its use with the similarity constraint, and also in different environments where other features and attributes are more suitable. PMID:18238122
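Parzen's window estimates a PDF directly from samples, without assuming a parametric form. A one-dimensional sketch with a Gaussian kernel (illustrative only; the paper applies the idea to its four-attribute feature vectors):

```python
import math

def parzen_pdf(x, samples, h):
    """Parzen-window density estimate at x from 1-D samples, using a
    Gaussian kernel of bandwidth h:
        p(x) = 1/(n*h) * sum_i K((x - x_i) / h)."""
    gauss = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return sum(gauss((x - xi) / h) for xi in samples) / (len(samples) * h)
```

The bandwidth h controls the bias-variance trade-off: small h follows the samples closely, large h smooths the estimate.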

  6. Artificial-vision stereo system as a source of visual information for preventing the collision of vehicles

    SciTech Connect

    Machtovoi, I.A.

    1994-10-01

    This paper explains the principle of automatically determining the position of extended and point objects in 3-D space, and of recognizing them by means of an artificial-vision stereo system from the measured coordinates of conjugate points in stereo pairs; it also analyzes methods of identifying these points.

  7. Recovering stereo vision by squashing virtual bugs in a virtual reality environment.

    PubMed

    Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M

    2016-06-19

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity-the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task-a 'bug squashing' game-in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information.This article is part of the themed issue 'Vision in our three-dimensional world'. PMID:27269607

  8. A novel registration method for image-guided neurosurgery system based on stereo vision.

    PubMed

    An, Yong; Wang, Manning; Song, Zhijian

    2015-01-01

    This study presents a novel spatial registration method for image-guided neurosurgery systems (IGNS) based on stereo vision. Images of the patient's head are captured by a video camera, which is calibrated and tracked by an optical tracking system. A set of sparse facial data points is then reconstructed from them by stereo vision in the patient space. A surface matching method is utilized to register the reconstructed sparse points to the facial surface reconstructed from preoperative images of the patient. Simulation experiments verified the feasibility of the proposed method. The proposed method is a new low-cost and easy-to-use spatial registration method for IGNS, with good prospects for clinical application. PMID:26406100

  9. Augmented reality and stereo vision for remote scene characterization

    NASA Astrophysics Data System (ADS)

    Lawson, Shaun W.; Pretlove, John R. G.

    1999-11-01

    In this paper we present our progress in the research and development of an augmented reality (AR) system for the remote inspection of hazardous environments. It specifically addresses one particular application with which we are involved--that of improving the inspection of underground sewer pipes using robotic vehicles and 3D graphical overlays coupled with stereoscopic visual data. Traditional sewer inspection using a human operator and CCTV systems is a mature technology--though the task itself is difficult, subjective, and prone to error. The work described here proposes not to replace the expert human inspector but to enhance and increase the information that is available to him, and to augment that information with other previously stored data. We describe our current system components, which comprise a robotic stereo head device, a simulated sewer crawling vehicle, and our AR system. We then go on to discuss the lengthy calibration procedures that are necessary to align any graphical overlay information with the live video data. Some experiments in determining alignment errors under head motion, and some investigations into the use of a calibrated virtual cursor, are then described.

  10. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    SciTech Connect

    M. D. McKay; M. O. Anderson; R. A. Kinoshita; W. D. Willis

    1999-02-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  11. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    SciTech Connect

    Kinoshita, Robert Arthur; Anderson, Matthew Oley; Mckay, Mark D; Willis, Walter David

    1999-04-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  12. Novel method of calibration with restrictive constraints for stereo-vision system

    NASA Astrophysics Data System (ADS)

    Cui, Jiashan; Huo, Ju; Yang, Ming

    2016-05-01

    Regarding the calibration of a stereo vision measurement system, this paper puts forward a new bundle adjustment algorithm for stereo camera calibration. Multiple-view geometric constraints and a bundle adjustment algorithm are used to accurately optimize the inner and outer parameters of the cameras, and a fixed relative constraint between the two cameras is introduced. We have improved the normal-equation construction process of the traditional bundle adjustment method so that each iteration optimizes only the exterior parameters of the two images taken by the camera pair, effectively treating the two rigidly bound cameras as a single camera. The fixed relative constraint effectively increases the number of redundant observations in the adjustment system and achieves higher accuracy while reducing the dimension of the normal matrix, so each iteration requires less time. Simulation and actual experimental results show the superior performance of the proposed approach in terms of robustness and accuracy, and our approach can also be extended to stereo vision systems with more than two cameras.

  13. Single camera stereo vision coordinate measurement in parts pose recognization on CMM

    NASA Astrophysics Data System (ADS)

    Wang, Chun-mei; Huang, Feng-shan; Wang, Xue-sha; Chen, Li

    2014-11-01

    In order to recognize parts' pose on a Coordinate Measuring Machine (CMM) correctly and quickly, a single-camera stereo vision measurement method for the 3D coordinates of feature points on the measured parts is proposed, based on the translation of the CMM. Following the principle of two-camera stereo vision, an image of the part to be measured is captured by a CCD camera, driven by the CMM along its X or Y axis, at each of two different positions. Single-camera stereo measurement of the part is thus realized with the proposed image matching method, based on the centroid offset of image edges, applied to the two images of the same feature point, and each feature point's 3D coordinate in the camera coordinate system can be obtained. The measuring system was set up and experiments were conducted. The measuring time for a feature point's coordinates is 1.818 s, and the difference between the feature points' 3D coordinates calculated from the experimental results and those measured by the CMM in the machine coordinate system is less than 0.3 mm. This result meets the real-time pose recognition requirement for parts on an intelligent CMM, and shows that the method proposed in this paper is feasible.

  14. Implementation of the Canny Edge Detection algorithm for a stereo vision system

    SciTech Connect

    Wang, J.R.; Davis, T.A.; Lee, G.K.

    1996-12-31

    There exist many applications in which three-dimensional information is necessary. For example, in manufacturing systems, parts inspection may require the extraction of three-dimensional information from two-dimensional images through the use of a stereo vision system. In medical applications, one may wish to reconstruct a three-dimensional image of a human organ from two or more transducer images. An important component of three-dimensional reconstruction is edge detection, whereby an image boundary is separated from the background for further processing. In this paper, a modification of the Canny edge detection approach is suggested to extract an image from a cluttered background. The resulting cleaned image can then be sent to the image matching, interpolation, and inverse perspective transformation blocks to reconstruct the 3-D scene. A brief discussion of the stereo vision system that has been developed at the Mars Mission Research Center (MMRC) is also presented. Results from a version of the Canny edge detection algorithm show promise as an accurate edge extractor that may be used in the edge-pixel-based binocular stereo vision system.
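The Canny pipeline that the authors modify proceeds through gradient estimation, double thresholding, and edge linking. The following is a deliberately simplified sketch of those stages (Gaussian smoothing and non-maximum suppression are omitted); it illustrates the idea and is not the authors' modified algorithm:

```python
import numpy as np

def simple_edges(img, lo, hi):
    """Stripped-down sketch of the Canny stages: Sobel gradients,
    gradient magnitude, double threshold, and hysteresis linking."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            mag[y, x] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    strong = mag >= hi
    weak = (mag >= lo) & ~strong
    edges = strong.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Hysteresis: keep a weak edge only if it touches a strong one
            if weak[y, x] and strong[y - 1:y + 2, x - 1:x + 2].any():
                edges[y, x] = True
    return edges
```

The double-threshold-plus-hysteresis step is what makes Canny robust against both noise (single-threshold false positives) and broken contours, which matters when the edge pixels feed a stereo matcher.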

  15. Experimentation of structured light and stereo vision for underwater 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Bruno, F.; Bianco, G.; Muzzupappa, M.; Barone, S.; Razionale, A. V.

    Current research on underwater 3D imaging methods mainly addresses long-range applications such as seafloor mapping or surveys of archeological sites and shipwrecks. Recently, there has been an increasing need for more accessible and precise close-range 3D acquisition technologies in application fields such as monitoring the growth of coral reefs or reconstructing underwater archeological pieces that in most cases cannot be recovered from the seabed. This paper presents the first results of a research project that aims to investigate the possibility of using active optical techniques for whole-field 3D reconstruction in an underwater environment. In this work we have tested an optical technique, frequently used for in-air acquisition, based on the projection of structured lighting patterns acquired by a stereo vision system. We describe the experimental setup used for the underwater tests, which were conducted in a water tank under different turbidity conditions. The tests have shown that the quality of the 3D reconstruction is acceptable even at high turbidity values, despite the heavy presence of scattering and absorption effects.

  16. On-line measurement of ski-jumper trajectory: combining stereo vision and shape description

    NASA Astrophysics Data System (ADS)

    Nunner, T.; Sidla, O.; Paar, G.; Nauschnegg, B.

    2010-01-01

    Ski jumping has continuously attracted major public interest since the early 1970s, mainly in Europe and Japan. The sport undergoes high-level analysis and development based, among other things, on biodynamic measurements during the take-off and flight phases of the jump. We report on a vision-based solution for such measurements that provides a full 3D trajectory of unique points on the jumper's shape. During the jump, synchronized stereo images are taken by a calibrated camera system at video rate. Using methods stemming from video surveillance, the jumper is detected and localized in the individual stereo images, and learning-based deformable shape analysis identifies the jumper's silhouette. The 3D reconstruction of the trajectory relies on standard stereo forward intersection of distinct shape points, such as the helmet top or heel. In the reported study, the measurements are verified by an independent GPS unit mounted on top of the jumper's helmet, synchronized to the timing of the camera exposures. Preliminary estimates report an accuracy of ±20 cm at a 30 Hz imaging frequency over a 40 m trajectory. The system is ready for fully automatic on-line application at ski-jumping sites that allow stereo camera views with an approximate base-to-distance ratio of 1:3 within the entire area of investigation.

  17. Plant phenotyping using multi-view stereo vision with structured lights

    NASA Astrophysics Data System (ADS)

    Nguyen, Thuy Tuong; Slaughter, David C.; Maloof, Julin N.; Sinha, Neelima

    2016-05-01

    A multi-view stereo vision system for true 3D reconstruction, modeling, and phenotyping of plants was created that successfully resolves many of the shortcomings of traditional camera-based 3D plant phenotyping systems. This novel system incorporates computer algorithms for camera calibration, excess-green-based plant segmentation, semi-global stereo block matching, disparity bilateral filtering, 3D point cloud processing, and 3D feature extraction, together with hardware consisting of a hemispherical superstructure designed to hold five stereo pairs of cameras and a custom-designed structured-light pattern illumination system. The system is nondestructive and can extract 3D features of whole plants modeled from multiple pairs of stereo images taken at different view angles. The study characterizes the system's phenotyping performance for three 3D plant features: plant height, total leaf area, and total leaf shading area. For plants with the specified leaf spacing and size, the algorithms used in our system yielded satisfactory experimental results and demonstrated the ability to study plant development, with the same plants repeatedly imaged and phenotyped over time.
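The excess-green segmentation step mentioned above is commonly computed on chromaticity-normalized channels as ExG = 2g − r − b. A minimal sketch (the 0.1 threshold is an illustrative assumption, not a value from the paper):

```python
import numpy as np

def excess_green_mask(rgb, threshold=0.1):
    """Classify plant pixels by the excess-green index on
    chromaticity-normalized channels: ExG = 2g - r - b."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2)
    s[s == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = (rgb[..., i] / s for i in range(3))
    exg = 2.0 * g - r - b
    return exg > threshold
```

Normalizing by the channel sum makes the index largely insensitive to brightness, so shaded and sunlit foliage score similarly, which is why ExG is a popular pre-segmentation step before stereo matching on plants.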

  18. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregularly shaped objects with different three-dimensional (3D) appearances are difficult to machine into a customized uniform pattern with current laser machining approaches. A laser galvanometric scanning (LGS) system is a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowledge of the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is therefore desirable to have a versatile auxiliary tool capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of such irregularly shaped objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which combines the advantages of a stereo vision solution and a conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. To achieve precise visually servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of the SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  19. Calibration for stereo vision system based on phase matching and bundle adjustment algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Huijie; Wang, Zhen; Jiang, Hongzhi; Xu, Yang; Dong, Chao

    2015-05-01

    Calibration of a stereo vision system plays an important role in machine vision applications. Existing accurate calibration methods are usually carried out by capturing a high-accuracy calibration target of the same size as the measurement view. In in-situ 3D measurement and in large-field-of-view measurement, the extrinsic parameters of the system usually need to be calibrated in real time, and manufacturing a large high-accuracy calibration target for the field is a major challenge. An accurate and rapid in-situ calibration method is therefore needed. In this paper, a novel calibration method for stereo vision systems is proposed, based on phase-based matching and the bundle adjustment algorithm. As the cameras are usually mechanically locked once adjusted appropriately after laboratory calibration, the intrinsic parameters are usually stable; we therefore focus on calibrating the extrinsic parameters in the measurement field. First, a matching method based on the heterodyne multi-frequency phase-shifting technique is applied to find thousands of pairs of corresponding points between the images of the two cameras. The large number of corresponding point pairs helps improve the accuracy of the calibration. Then the bundle adjustment method from photogrammetry is used to optimize the extrinsic parameters and the 3D coordinates of the measured objects. Finally, quantity traceability is carried out to transform the optimized extrinsic parameters from the 3D metric coordinate system into a Euclidean coordinate system to obtain the ultimate optimal extrinsic parameters. Experimental results show that the calibration procedure takes less than 3 s, and that, based on a stereo vision system calibrated by the proposed method, the measurement RMS (root mean square) error can reach 0.025 mm when measuring a calibrated gauge with a nominal length of 999.576 mm.
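Phase-shifting matching rests on recovering a wrapped phase at each pixel from N fringe images with equally spaced shifts; pixels in the two cameras with the same (unwrapped) phase correspond. A minimal single-pixel sketch of the standard N-step formula (illustrative only; the paper's heterodyne multi-frequency unwrapping is not shown):

```python
import math

def wrapped_phase(intensities):
    """Recover the wrapped phase phi from N samples of
    I_n = A + B*cos(phi + 2*pi*n/N) via the N-step formula:
        phi = atan2(-sum I_n sin(2*pi*n/N), sum I_n cos(2*pi*n/N))."""
    n = len(intensities)
    s = sum(I * math.sin(2.0 * math.pi * k / n) for k, I in enumerate(intensities))
    c = sum(I * math.cos(2.0 * math.pi * k / n) for k, I in enumerate(intensities))
    return math.atan2(-s, c)
```

Because the background A and modulation B cancel out of the ratio, the recovered phase is insensitive to ambient light and surface reflectance, which is what makes phase values such dense and reliable matching primitives.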

  20. Study on clear stereo image pair acquisition method for small objects with big vertical size in SLM vision system.

    PubMed

    Wang, Yuezong; Jin, Yan; Wang, Lika; Geng, Benliang

    2016-05-01

    Microscopic vision systems with a stereo light microscope (SLM) have been applied to surface profile measurement. If the vertical size of a small object exceeds the depth of field, its images will contain both clear and fuzzy regions. Hence, in order to obtain clear stereo images, we propose a microscopic sequence image fusion method suitable for SLM vision systems. First, a solution to capture and align an image sequence is designed, which outputs aligned stereo images. Second, we decompose the stereo image sequence by wavelet analysis and obtain a series of high- and low-frequency coefficients at different resolutions. Fused stereo images are then output based on the high- and low-frequency coefficient fusion rules proposed in this article. The results show that Δw1 (Δw2) and ΔZ of stereo images in a sequence have a linear relationship; hence, a procedure for image alignment is necessary before image fusion. In contrast with other image fusion methods, our method outputs clear fused stereo images with better performance, is suitable for SLM vision systems, and is very helpful for avoiding the image blur caused by the large vertical size of small objects. PMID:26970109

  1. 3D Vision on Mars: Stereo processing and visualizations for NASA and ESA rover missions

    NASA Astrophysics Data System (ADS)

    Huber, Ben

    2016-07-01

    Three-dimensional (3D) vision processing is an essential component of planetary rover mission planning and scientific data analysis. Standard ground vision processing products are digital terrain maps, panoramas, and virtual views of the environment. Such processing is currently being developed for the PanCam instrument of ESA's ExoMars rover mission by the PanCam 3D Vision Team under JOANNEUM RESEARCH coordination. Camera calibration, quality estimation of the expected results, and the interfaces to other mission elements such as operations planning, the rover navigation system, and global Mars mapping are a specific focus of the current work. The main goals of the 3D Vision Team in this context are: instrument design support and calibration processing; development of 3D vision functionality; development of a 3D visualization tool for scientific data analysis; 3D reconstructions from stereo image data during the mission; and support for 3D scientific exploitation to characterize the overall landscape geomorphology, processes, and the nature of the geologic record using the reconstructed 3D models. The developed processing framework, PRoViP, establishes an extensible framework for 3D vision processing in planetary robotic missions. Examples of processing products and capabilities are digital terrain models, ortho images, 3D meshes, and occlusion, solar illumination, slope, roughness, and hazard maps. Another important processing capability is the fusion of rover- and orbiter-based images, with support for multiple missions and sensors (e.g., MSL Mastcam stereo processing). For 3D visualization, a tool called PRo3D has been developed to analyze and directly interpret digital outcrop models. Stereo image products derived from Mars rover data can be rendered in PRo3D, enabling the user to zoom, rotate, and translate the generated 3D outcrop models. Interpretations can be digitized directly onto the 3D surface, and simple measurements of the outcrop and sedimentary features

  2. On the use of orientation filters for 3D reconstruction in event-driven stereo vision

    PubMed Central

    Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694

  3. Characterization of Stereo Vision Performance for Roving at the Lunar Poles

    NASA Technical Reports Server (NTRS)

    Wong, Uland; Nefian, Ara; Edwards, Larry; Furlong, Michael; Bouyssounouse, Xavier; To, Vinh; Deans, Matthew; Cannon, Howard; Fong, Terry

    2016-01-01

    Surface rover operations at the polar regions of airless bodies, particularly the Moon, are of great interest to future NASA science missions such as Resource Prospector (RP). Polar optical conditions present challenges to conventional imaging techniques, with repercussions for driving, safeguarding, and science. High dynamic range, long cast shadows, opposition effects, and whiteout conditions are all significant factors in appearance. RP is currently undertaking an effort to characterize stereo vision performance in polar conditions through physical laboratory experimentation with regolith simulants, obstacle distributions, and oblique lighting.

  4. A novel virtual four-ocular stereo vision system based on single camera for measuring insect motion parameters

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Zhang, Guangjun; Chen, Dazhi

    2005-11-01

    A novel virtual four-ocular stereo measurement system based on a single high-speed camera is proposed for measuring the double beating wings of a high-speed flapping insect. The principle of the virtual monocular system, consisting of a few planar mirrors and a single high-speed camera, is introduced. The stereo vision measurement principle based on optical triangulation is explained. The wing kinematics parameters are measured. Results show that this virtual stereo system not only greatly reduces system cost but is also effective for insect motion measurement.

  5. Accuracy Evaluation of Stereo Vision Aided Inertial Navigation for Indoor Environments

    NASA Astrophysics Data System (ADS)

    Griessbach, D.; Baumbach, D.; Boerner, A.; Zuev, S.

    2013-11-01

    Accurate knowledge of position and orientation is a prerequisite for many applications regarding unmanned navigation, mapping, or environmental modelling. GPS-aided inertial navigation is the preferred solution for outdoor applications. Nevertheless, a similar solution is needed for navigation tasks in difficult environments with erroneous or no GPS data. Therefore, a stereo vision aided inertial navigation system is presented which is capable of providing real-time local navigation for indoor applications. A method is described to reconstruct the ego motion of a stereo camera system aided by inertial data. This, in turn, is used to constrain the inertial sensor drift. The optical information is derived from natural landmarks, extracted and tracked over consecutive stereo image pairs. Using inertial data for feature tracking effectively reduces computational costs and at the same time increases reliability due to constrained search areas. Mismatched features, e.g. at the repetitive structures typical of indoor environments, are avoided. An Integrated Positioning System (IPS) was deployed and tested on an indoor navigation task. IPS was evaluated for accuracy, robustness, and repeatability in a common office environment. In combination with a dense disparity map derived from the navigation cameras, a high-density point cloud is generated to show the capability of the navigation algorithm.
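    The constrained-search idea can be sketched as follows: for a distant landmark, a gyro-measured rotation predicts where the feature will reappear, and only a small window around that prediction is searched. This is a simplified illustration (pinhole model, pure yaw rotation, illustrative focal length and window radius), not the IPS implementation:

```python
import math

def predict_pixel(u, v, f, cx, cy, yaw):
    """Predict where a distant feature reappears after a pure yaw rotation,
    i.e. the infinite homography K * R * K^-1 of a pinhole camera."""
    # back-project the pixel to a viewing ray, rotate it about the camera
    # y-axis, then re-project
    x, y, z = (u - cx) / f, (v - cy) / f, 1.0
    xr = math.cos(yaw) * x + math.sin(yaw) * z
    zr = -math.sin(yaw) * x + math.cos(yaw) * z
    return f * xr / zr + cx, f * y / zr + cy

def search_window(u, v, radius=8.0):
    """Small search area centred on the prediction instead of the whole image."""
    return (u - radius, v - radius, u + radius, v + radius)

# One degree of yaw moves a centred feature about f*tan(1 deg) ~ 8.7 px sideways.
u2, v2 = predict_pixel(320.0, 240.0, 500.0, 320.0, 240.0, math.radians(1.0))
window = search_window(u2, v2)
```

Restricting matching to such a window is what keeps the tracker away from the repetitive structures mentioned above.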

  6. Application of stereo vision to three-dimensional deformation analyses in fracture experiments

    SciTech Connect

    Luo, P.F. . Dept. of Mechanical Engineering); Chao, Y.J.; Sutton, M.A. . Dept. of Mechanical Engineering)

    1994-03-01

    Based on a pinhole camera model, camera model equations that account for radial lens distortion are used to map three-dimensional (3-D) world coordinates to two-dimensional (2-D) computer image coordinates. Using two cameras to form a stereo vision system, the 3-D information can be obtained. It is demonstrated that such stereo imaging systems can be used to measure the 3-D displacement field around the crack tip of a fracture specimen. To compare with the available 2-D theory of fracture mechanics, the measured displacement fields expressed in the world coordinates are converted, through coordinate transformations, to displacement fields expressed in specimen crack tip coordinates. By using a smoothing technique, the in-plane displacement components are smoothed and the total strains are obtained. Rigid body motion is eliminated from the smoothed in-plane displacement components and the unsmoothed out-of-plane displacement. Compared with the theoretical elastic-plastic field at a crack tip, the results appear consistent with expected trends, which indicates that the stereo imaging system is a viable tool for the 3-D deformation analysis of fracture specimens.
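    A minimal version of such a camera model (one radial distortion coefficient k1; the paper's full model and calibrated values are not reproduced here) maps a camera-frame 3D point to pixel coordinates:

```python
def project(point, f, cx, cy, k1):
    """Pinhole projection with a single radial distortion term k1:
    camera-frame 3D point -> pixel coordinates."""
    X, Y, Z = point
    x, y = X / Z, Y / Z                # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                  # radial distortion factor
    return f * x * d + cx, f * y * d + cy

p = (0.1, 0.05, 1.0)
u0, v0 = project(p, 800.0, 320.0, 240.0, 0.0)    # ideal pinhole
u1, v1 = project(p, 800.0, 320.0, 240.0, -0.2)   # barrel distortion pulls inward
```

In the stereo setting, two such models (one per camera) are inverted jointly to recover the 3-D point from a matched pair of pixels.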

  7. Stereo and regioselectivity in ''Activated'' tritium reactions

    SciTech Connect

    Ehrenkaufer, R.L.E.; Hembree, W.C.; Wolf, A.P.

    1988-01-01

    To investigate the stereo and positional selectivity of the microwave discharge activation (MDA) method, the tritium labeling of several amino acids was undertaken. The labeling of L-valine and the diastereomeric pair L-isoleucine and L-alloisoleucine showed less than statistical labeling at the α-amino C-H position, mostly with retention of configuration. Labeling predominated at the single β C-H tertiary (methine) position. The labeling of L-valine and L-proline with and without positive charge on the α-amino group resulted in large increases in specific activity (greater than 10-fold) when positive charge was removed by labeling them as their sodium carboxylate salts. Tritium NMR of L-proline labeled both as its zwitterion and as its sodium salt also showed large differences in the tritium distribution within the molecule. The distribution preferences in each of the charge states are suggestive of labeling by an electrophilic-like tritium species. 16 refs., 5 tabs.

  8. Performance Evaluation and Software Design for EVA Robotic Assistant Stereo Vision Heads

    NASA Technical Reports Server (NTRS)

    DiPaolo, Daniel

    2003-01-01

    The purpose of this project was to aid the EVA Robotic Assistant project by evaluating and designing the necessary interfaces for two stereo vision heads: the TracLabs Biclops pan-tilt-verge head and the Helpmate Zebra pan-tilt-verge head. The first half of the project consisted of designing the necessary software interface so that the other modules of the EVA Robotic Assistant had proper access to all of the functionality offered by each of the stereo vision heads. This half took most of the project time, due to a lack of ready-made CORBA drivers for either of the heads. Once this was overcome, the evaluation stage of the project began. The second half of the project was to take these interfaces and evaluate each of the stereo vision heads in terms of usefulness to the project. In the key project areas of stability and reliability, the Zebra pan-tilt-verge head came out on top. However, the Biclops had many advantages over the Zebra, such as lower power consumption, faster communications, and a simpler, cleaner API. Overall, the Biclops pan-tilt-verge head outperformed the Zebra pan-tilt-verge head.

  9. Stereo vision-based pedestrian detection using multiple features for automotive application

    NASA Astrophysics Data System (ADS)

    Lee, Chung-Hee; Kim, Dongyoung

    2015-12-01

    In this paper, we propose stereo vision-based pedestrian detection using multiple features for automotive applications. The disparity map from the stereo vision system and multiple features are utilized to enhance pedestrian detection performance. The disparity map offers 3D information, which enables obstacles to be detected easily and reduces the overall detection time by removing unnecessary background. The road feature is extracted from the v-disparity map calculated from the disparity map. The road feature is a decision criterion to determine the presence or absence of obstacles on the road. Obstacle detection is performed by comparing the road feature with all columns in the disparity map. The result of obstacle detection is segmented by bird's-eye-view mapping to separate an obstacle area containing multiple objects into single-obstacle areas. Histogram-based clustering is performed in the bird's-eye-view map. Each segmented result is verified by the classifier with the training model. To enhance pedestrian recognition performance, multiple features such as HOG, CSS, and symmetry features are utilized. In particular, the symmetry feature is well suited to representing a standing or walking pedestrian. The block-based symmetry feature is utilized to minimize dependence on the image type, and the best among the three symmetry features of the H, S, and V images is selected as the symmetry feature for each pixel. The ETH database is utilized to verify our pedestrian detection algorithm.
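    The v-disparity map mentioned above is a row-wise histogram of the disparity image; on a flat road the road surface appears as a slanted line in it. A minimal sketch (integer disparities, toy image size) follows:

```python
def v_disparity(disp, max_d):
    """Row-wise disparity histogram: hist[v][d] counts pixels in image row v
    whose integer disparity equals d."""
    hist = [[0] * (max_d + 1) for _ in disp]
    for v, row in enumerate(disp):
        for d in row:
            if 0 <= d <= max_d:
                hist[v][d] += 1
    return hist

# On a flat road, disparity grows toward the bottom image rows, so the road
# traces a diagonal line through the v-disparity map.
disp = [[1, 1, 1, 1],
        [2, 2, 2, 2],
        [3, 3, 3, 3]]
h = v_disparity(disp, 4)
```

Columns whose disparity profile departs from that road line are the obstacle candidates the abstract describes.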

  10. Extrinsic parameter calibration of stereo vision sensors using spot laser projector.

    PubMed

    Liu, Zhen; Yin, Yang; Liu, Shaopeng; Chen, Xu

    2016-09-01

    The on-site calibration of stereo vision sensors plays an important role in the measurement field. Image coordinate extraction of feature points of existing targets is difficult under complex light conditions in outdoor environments, such as strong light and backlight. This paper proposes an on-site calibration method for stereo vision sensors based on a spot laser projector for solving the above-mentioned problem. The proposed method is used to mediate the laser spots on the parallel planes for the purpose of calibrating the coordinate transformation matrix between two cameras. The optimal solution of the coordinate transformation matrix is then solved by nonlinear optimization. Simulation experiments and physical experiments are conducted to validate the performance of the proposed method. Under the condition that the field of view is approximately 400 mm × 300 mm, the proposed method can reach a calibration accuracy of 0.02 mm. This accuracy value is comparable to that of the method using a planar target. PMID:27607287
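    The core step, recovering the rotation and translation between the two camera frames from matched laser-spot coordinates, can be illustrated with a closed-form least-squares (Kabsch) alignment. This is a stand-in for the paper's nonlinear optimization, shown on synthetic data:

```python
import numpy as np

def rigid_transform(P, Q):
    """Closed-form least-squares R, t such that Q ~ (R @ P.T).T + t (Kabsch).
    P, Q are N x 3 arrays of corresponding 3D points."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: recover a known 90-degree rotation about z plus a translation.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_transform(P, Q)
```

In practice such a closed-form estimate would serve as the starting point that a nonlinear optimizer then refines against reprojection error.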

  11. Stereo-vision-based terrain mapping for off-road autonomous navigation

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
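    One way to realize the temporal filtering of merged single-frame maps is an exponential filter on per-cell traversability cost, with obstacle cells latched as no-go. This is a generic sketch, not JPL's implementation; the dictionary map, blending factor alpha, and infinite-cost convention are illustrative:

```python
NO_GO = float('inf')

def merge_frame(world, frame, alpha=0.3):
    """Fuse one single-frame cost map (dict: (row, col) -> cost) into the
    world map with an exponential temporal filter; obstacle cells stay no-go."""
    for key, cost in frame.items():
        if cost == NO_GO or world.get(key) == NO_GO:
            world[key] = NO_GO          # obstacles are latched
        elif key in world:
            world[key] = (1.0 - alpha) * world[key] + alpha * cost
        else:
            world[key] = cost           # first observation of this cell
    return world

world = {}
merge_frame(world, {(0, 0): 1.0, (0, 1): NO_GO})
merge_frame(world, {(0, 0): 2.0})       # cell (0, 0): 0.7*1.0 + 0.3*2.0
```

The filter damps single-frame noise in the cost estimates while never un-flagging a detected obstacle.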

  12. Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-01-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  13. Error Analysis in a Stereo Vision-Based Pedestrian Detection Sensor for Collision Avoidance Applications

    PubMed Central

    Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M.

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real-time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for use in automotive applications such as autonomous pedestrian collision avoidance. PMID:22319323
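    The stereo quantization error analyzed above follows from the depth-disparity relation Z = f*b/d: a disparity uncertainty of Δd pixels produces a depth uncertainty of roughly Z²·Δd/(f·b), so the error grows quadratically with range. A quick numeric check (the focal length and baseline values are illustrative, not the paper's):

```python
def depth(f_px, baseline_m, disparity_px):
    """Stereo depth from disparity: Z = f * b / d."""
    return f_px * baseline_m / disparity_px

def depth_quantization_error(f_px, baseline_m, z_m, disp_err_px=0.5):
    """First-order depth uncertainty from a disparity error of disp_err_px:
    dZ ~ Z^2 * dd / (f * b)."""
    return z_m * z_m * disp_err_px / (f_px * baseline_m)

# f = 1000 px, baseline 0.3 m: the same half-pixel disparity error costs
# 0.375 m of depth at 15 m range but 1.5 m at 30 m range.
err_near = depth_quantization_error(1000.0, 0.3, 15.0)
err_far = depth_quantization_error(1000.0, 0.3, 30.0)
```

This quadratic growth is why baseline and focal length must be chosen against the maximum pedestrian range the application requires.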

  14. Error analysis in a stereo vision-based pedestrian detection sensor for collision avoidance applications.

    PubMed

    Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real-time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for use in automotive applications such as autonomous pedestrian collision avoidance. PMID:22319323

  15. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We consolidate and simplify all the input errors into five parameters by rotation transformation. We then use a fast midpoint-method algorithm to deduce the mathematical relationship between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we trace the propagation of the primitive input errors in the stereo system through the whole analysis process, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
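    The midpoint method referred to above triangulates a 3D point as the midpoint of the shortest segment between the two viewing rays. A minimal sketch (ray origins and directions as plain tuples; no covariance propagation):

```python
def midpoint_triangulate(p1, d1, p2, d2):
    """Midpoint method: the 3D point minimizing distance to the two viewing
    rays p1 + t*d1 and p2 + s*d2 (directions need not be unit length)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b              # zero only for parallel rays
    t = (b * e - c * d) / denom        # closest-approach parameter on ray 1
    s = (a * e - b * d) / denom        # closest-approach parameter on ray 2
    q1 = tuple(p + t * v for p, v in zip(p1, d1))
    q2 = tuple(p + s * v for p, v in zip(p2, d2))
    return tuple((u + v) / 2.0 for u, v in zip(q1, q2))

# Two cameras 0.3 m apart, both rays aimed exactly at (0.15, 0.0, 5.0).
point = midpoint_triangulate((0.0, 0.0, 0.0), (0.15, 0.0, 5.0),
                             (0.3, 0.0, 0.0), (-0.15, 0.0, 5.0))
```

Because the midpoint is a smooth function of the ray parameters, first-order error propagation through this formula is what yields the expectation and covariance the abstract describes.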

  16. Fast-camera calibration of stereo vision system using BP neural networks

    NASA Astrophysics Data System (ADS)

    Cai, Huimin; Li, Kejie; Liu, Meilian; Song, Ping

    2010-10-01

    In position measurements by far-range photogrammetry, the scale between object and image has to be calibrated; that is, the parameters of the perspective projection matrix must be determined. Because the image sensor of a fast camera is a CMOS device, there are many uncertain distortion factors, and it is hard to describe the scale between object and image with traditional calibration based on a mathematical model. In this paper, a new method for calibrating stereo vision systems with neural networks is described. A linear method is used for 3D position estimation and its error is corrected by neural networks. Compared with DLT (Direct Linear Transformation) and direct mapping by neural networks, the accuracy is improved. We have successfully used this method in the drop-point measurement of a high-speed object.

  17. Three-dimensional measurement of small mechanical parts under a complicated background based on stereo vision

    SciTech Connect

    Ren Zhiguo; Liao Jiarui; Cai Lilong

    2010-04-01

    We present an effective method for the accurate three-dimensional (3D) measurement of small industrial parts under a complicated noisy background, based on stereo vision. To effectively extract the nonlinear features of the desired curves of the measured parts in the images, a coarse-to-fine extraction strategy is employed, based on a virtual motion control system. By using the multiscale decomposition of gray images and virtual beam chains, the nonlinear features can be accurately extracted. By analyzing the generation of geometric errors, the refined feature points of the desired curves are extracted. The 3D structure of the measured parts can then be accurately reconstructed and measured with least-squares errors. Experimental results show that the presented method can accurately measure industrial parts that are represented by various line segments and curves.

  18. Omnidirectional stereo vision sensor based on single camera and catoptric system.

    PubMed

    Zhou, Fuqiang; Chai, Xinghua; Chen, Xin; Song, Ya

    2016-09-01

    An omnidirectional stereo vision sensor based on a single camera and a catoptric system is proposed. As crucial components, one camera and two pyramid mirrors are used for imaging. Omnidirectional measurement in different directions of the horizontal field can be performed by four pairs of virtual cameras, with perfect synchronization and improved compactness. Moreover, perspective projection invariance is ensured in the imaging process, which avoids the imaging distortion introduced by curved mirrors. In this paper, the structure model of the sensor was established and a sensor prototype was designed. The influences of the structural parameters on the field of view and the measurement accuracy were also discussed. In addition, real experiments and analyses were performed to evaluate the performance of the proposed sensor in the measurement application. The results proved the feasibility of the sensor and exhibited considerable accuracy in 3D coordinate reconstruction. PMID:27607253

  19. Occupancy grid mapping in urban environments from a moving on-board stereo-vision system.

    PubMed

    Li, You; Ruichek, Yassine

    2014-01-01

    The occupancy grid map is a popular tool for representing the surrounding environment of mobile robots/intelligent vehicles. Its applications date back to the 1980s, when researchers utilized sonar or LiDAR to illustrate environments by occupancy grids. However, in the literature, research on vision-based occupancy grid mapping is scant. Furthermore, when moving in a real dynamic world, traditional occupancy grid mapping is required not only to detect occupied areas, but also to understand dynamic environments. The paper addresses this issue by presenting a stereo-vision-based framework to create a dynamic occupancy grid map, which is applied in an intelligent vehicle driving in an urban scenario. Besides representing the surroundings as occupancy grids, dynamic occupancy grid mapping can provide the motion information of the grids. The proposed framework consists of two components. The first is motion estimation for the moving vehicle itself and independent moving objects. The second is dynamic occupancy grid mapping, which is based on the estimated motion information and the dense disparity map. The main benefit of the proposed framework is the ability to map occupied areas and moving objects at the same time, which is very practical in real applications. The proposed method is evaluated using real data acquired by our intelligent vehicle platform "SeTCar" in urban environments. PMID:24932866
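    A standard building block for such maps, though the paper's own update rule is not reproduced here, is the per-cell log-odds occupancy update; the inverse sensor model probabilities below are illustrative:

```python
import math

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def update_cell(l, occupied, l_occ=logodds(0.7), l_free=logodds(0.3)):
    """Per-cell log-odds occupancy update with a simple inverse sensor model
    (p = 0.7 if the cell was measured occupied, 0.3 if measured free)."""
    return l + (l_occ if occupied else l_free)

def prob(l):
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

l = 0.0                      # prior: p = 0.5
for _ in range(3):           # three consistent 'occupied' measurements
    l = update_cell(l, True)
p_occ = prob(l)              # approaches 1 as evidence accumulates
```

The additive log-odds form is what makes fusing many disparity-derived measurements per cell cheap; a dynamic grid additionally attaches a motion estimate to each cell, as the abstract describes.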

  20. Occupancy Grid Mapping in Urban Environments from a Moving On-Board Stereo-Vision System

    PubMed Central

    Li, You; Ruichek, Yassine

    2014-01-01

    The occupancy grid map is a popular tool for representing the surrounding environment of mobile robots/intelligent vehicles. Its applications date back to the 1980s, when researchers utilized sonar or LiDAR to illustrate environments by occupancy grids. However, in the literature, research on vision-based occupancy grid mapping is scant. Furthermore, when moving in a real dynamic world, traditional occupancy grid mapping is required not only to detect occupied areas, but also to understand dynamic environments. The paper addresses this issue by presenting a stereo-vision-based framework to create a dynamic occupancy grid map, which is applied in an intelligent vehicle driving in an urban scenario. Besides representing the surroundings as occupancy grids, dynamic occupancy grid mapping can provide the motion information of the grids. The proposed framework consists of two components. The first is motion estimation for the moving vehicle itself and independent moving objects. The second is dynamic occupancy grid mapping, which is based on the estimated motion information and the dense disparity map. The main benefit of the proposed framework is the ability to map occupied areas and moving objects at the same time, which is very practical in real applications. The proposed method is evaluated using real data acquired by our intelligent vehicle platform “SeTCar” in urban environments. PMID:24932866

  1. Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks

    NASA Astrophysics Data System (ADS)

    Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min

    2015-10-01

    Vehicle positioning has been the subject of extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the position of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information, such as speed, lane changes, and driver's condition, through optical wireless links with neighboring vehicles. Thus, the position of a target vehicle too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation for CVP. The proposed neural-network-based technique can estimate the target vehicle position from only two image points of the target vehicle using stereo vision. For this, we use the rear LEDs on target vehicles as image points. We show from simulation results that our neural-network-based method achieves better accuracy than the computer-vision method.

  2. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model, and a residual compensation model. First, the method of image distortion correction is proposed. The image data required for image distortion correction come from stereo images of a calibration sample. The geometric features of the image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images. Linear and polynomial fitting methods are applied to correct the image distortions. Second, the shape deformation features of the disparity distribution are discussed, and a method of disparity distortion correction is proposed; a polynomial fitting method is applied to correct the disparity distortion. Third, a microscopic vision model is derived, which consists of two parts: an initial vision model and a residual compensation model. We derive the initial vision model by analyzing the direct mapping relationship between object and image points. The residual compensation model is derived from the residual analysis of the initial vision model. The results show that, with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction, and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates; however, the traditional pinhole camera model has lower precision than our model for Y and Z coordinates. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision. PMID:26924646
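    The polynomial residual-compensation idea (fit the systematic error of the initial model as a polynomial of the measurement, then subtract it) can be sketched with numpy on synthetic data; the residual shape and magnitudes below are invented for illustration:

```python
import numpy as np

# Hypothetical systematic residual of an initial vision model along one axis:
# the raw output z_meas deviates slightly and smoothly from ground truth z_true.
z_true = np.linspace(0.0, 2.25, 10)
z_meas = z_true + 0.004 * z_true**2 - 0.002 * z_true

# Fit the residual as a polynomial of the *measurement*, then compensate.
coeffs = np.polyfit(z_meas, z_true - z_meas, 2)
z_corr = z_meas + np.polyval(coeffs, z_meas)

err_before = np.max(np.abs(z_meas - z_true))
err_after = np.max(np.abs(z_corr - z_true))   # far smaller after compensation
```

The same pattern, a simple base model plus a low-order polynomial correction learned from calibration data, is what lets the method above outperform a plain pinhole model on the Y and Z axes.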

  3. Stereo-vision based 3D modeling for unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Jasiobedzki, Piotr

    2007-04-01

    Instant Scene Modeler (iSM) is a vision system for quickly generating calibrated photo-realistic 3D models of unknown environments using stereo image sequences. Equipped with iSM, Unmanned Ground Vehicles (UGVs) can capture stereo images and create 3D models to be sent back to the base station while they explore unknown environments. Rapid access to 3D models will increase operator situational awareness and allow better mission planning and execution, as the models can be visualized from different views and used for relative measurements. Current military operations of UGVs under urban warfare threats involve the operator hand-sketching the environment from a live video feed. iSM eliminates the need for an additional operator, as the 3D model is generated automatically. The photo-realism of the models enhances the situational awareness of the mission, and the models can also be used for change detection. iSM has been tested on our autonomous vehicle to create photo-realistic 3D models while the rover traverses unknown environments. Moreover, a proof-of-concept iSM payload has been mounted on an iRobot PackBot with Wayfarer technology, which is equipped with autonomous urban reconnaissance capabilities. The Wayfarer PackBot UGV uses wheel odometry for localization and builds 2D occupancy grid maps from a laser sensor. While the UGV is following walls and avoiding obstacles, iSM captures and processes images to create photo-realistic 3D models. Experimental results show that iSM can complement the Wayfarer PackBot's autonomous navigation in two ways. First, the photo-realistic 3D models provide better situational awareness than 2D grid maps. Second, iSM also recovers the camera motion, known as visual odometry. As wheel odometry error grows over time, this can help improve the wheel odometry for better localization.

  4. Design issues for stereo vision systems used on tele-operated robotic platforms

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, Jim; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-02-01

    The use of tele-operated Unmanned Ground Vehicles (UGVs) for military purposes has grown significantly in recent years, with operations in both Iraq and Afghanistan. In both cases the safety of the Soldier or technician performing the mission is improved by the large standoff distances afforded by the UGV, but the full performance capability of the robotic system is not utilized: the standard two-dimensional video system provides insufficient depth perception, forcing the operator to slow the mission to ensure the safety of the UGV given the uncertainty of the perceived scene. To address this, Polaris Sensor Technologies has developed, in a series of developments funded by the Leonard Wood Institute at Ft. Leonard Wood, MO, a prototype Stereo Vision Upgrade (SVU) Kit for the Foster-Miller TALON IV robot which provides the operator with improved depth perception and situational awareness, allowing for shorter mission times and higher success rates. Because multiple 2D cameras are replaced by stereo camera systems in the SVU Kit, and because the needs of the camera systems vary for each phase of a mission, a number of tradeoffs and design choices must be made in developing such a system for robotic tele-operation. Additionally, human factors design criteria drive optical parameters of the camera systems, which must be matched to the display system being used. The problem space for such an upgrade kit is defined, and the choices made in the development of this particular SVU Kit are discussed.

  5. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    NASA Technical Reports Server (NTRS)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find correspondences of dominant feature points in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges oriented in the vertical, horizontal, and two diagonal directions; it incorrectly detected points on edges not aligned with those four main directions. The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on the edges to exclude simple edges and leave interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under concern, is then applied on the interesting points to exclude the redundant ones and leave the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. The matching starts with dominant points in the left image and does a local search, looking for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and by the maximum disparity of the application environment. If exactly one dominant point in the right image lies in the search area, it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area.
The correlation is used as
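
    As a rough illustration (not the authors' code), the geometrically constrained matching step described above can be sketched in Python; the point lists, GAV values, and tolerance parameters below are hypothetical:

```python
def match_dominant_points(left_pts, right_pts, left_gav, right_gav,
                          max_disparity, row_tol=1, gav_tol=0.2):
    """For each left-image dominant point (x, y), search the right image
    under two constraints: same scanline (epipolar line of parallel-axes
    geometry, within row_tol) and disparity x - x' in [0, max_disparity].
    A single candidate is accepted directly; ties are broken by the GAV
    similarity measure."""
    matches = []
    for i, (x, y) in enumerate(left_pts):
        cand = [j for j, (xr, yr) in enumerate(right_pts)
                if abs(yr - y) <= row_tol and 0 <= x - xr <= max_disparity]
        if len(cand) == 1:
            matches.append((i, cand[0]))
        elif len(cand) > 1:
            # Ambiguity: pick the candidate whose GAV value is closest.
            best = min(cand, key=lambda j: abs(left_gav[i] - right_gav[j]))
            if abs(left_gav[i] - right_gav[best]) < gav_tol:
                matches.append((i, best))
    return matches
```

    For example, a left point at (80, 20) with two right-image candidates inside the search band would be resolved by the GAV tie-breaker.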

  6. Accuracy aspects of stereo side-looking radar. [analysis of its visual perception and binocular vision

    NASA Technical Reports Server (NTRS)

    Leberl, F. W.

    1979-01-01

    The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.

  7. Estimation of 3D reconstruction errors in a stereo-vision system

    NASA Astrophysics Data System (ADS)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
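
    The basic idea of propagating a localization error into a 3D position uncertainty can be illustrated with a first-order sketch using the generic rectified-stereo depth formula Z = fB/d (a textbook illustration, not the paper's SGT-based pipeline; all parameter values below are hypothetical):

```python
def depth_uncertainty(f_px, baseline_m, disparity_px, sigma_d_px):
    """First-order propagation of a disparity (matching/segmentation)
    error into depth for a rectified stereo pair.
    Z = f*B/d, so |dZ/dd| = f*B/d**2 = Z**2/(f*B) and
    sigma_Z ~= (Z**2 / (f*B)) * sigma_d."""
    z = f_px * baseline_m / disparity_px
    sigma_z = (z ** 2) / (f_px * baseline_m) * sigma_d_px
    return z, sigma_z
```

    Note the quadratic growth with depth: the same half-pixel disparity error hurts far points much more than near ones.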

  8. Accurate calibration of a stereo-vision system in image-guided radiotherapy.

    PubMed

    Liu, Dezhi; Li, Shidong

    2006-11-01

    Image-guided radiotherapy using a three-dimensional (3D) camera as the on-board surface imaging system requires precise and accurate registration of the 3D surface images in the treatment machine coordinate system. Two simple calibration methods, an analytical three-point matching solution and a least-squares multipoint registration, were introduced to correlate the stereo-vision surface imaging frame with the machine coordinate system. Both calibrations utilized 3D surface images of a calibration template placed on top of the treatment couch. Image transformation parameters were derived from corresponding 3D marked points on the surface images and their given coordinates in the treatment room coordinate system. Our experimental results demonstrated that both methods provided the desired calibration accuracy of 0.5 mm. The multipoint registration method is more robust, particularly for noisy 3D surface images. Both calibration methods have been used as our weekly QA tools for a 3D image-guided radiotherapy system. PMID:17153416
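
    A least-squares multipoint registration of the kind described is commonly solved in closed form with the Kabsch/Umeyama SVD method; the sketch below illustrates that general technique (not necessarily the authors' implementation):

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    points, solved in closed form: center both sets, take the SVD of the
    cross-covariance, and correct for a possible reflection."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # proper rotation (det = +1)
    t = cd - R @ cs
    return R, t
```

    With three or more non-collinear marked points this recovers the camera-to-machine transform; extra points average out marker noise, which is why the multipoint variant is more robust.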

  9. Accurate calibration of a stereo-vision system in image-guided radiotherapy

    SciTech Connect

    Liu Dezhi; Li Shidong

    2006-11-15

    Image-guided radiotherapy using a three-dimensional (3D) camera as the on-board surface imaging system requires precise and accurate registration of the 3D surface images in the treatment machine coordinate system. Two simple calibration methods, an analytical three-point matching solution and a least-squares multipoint registration, were introduced to correlate the stereo-vision surface imaging frame with the machine coordinate system. Both calibrations utilized 3D surface images of a calibration template placed on top of the treatment couch. Image transformation parameters were derived from corresponding 3D marked points on the surface images and their given coordinates in the treatment room coordinate system. Our experimental results demonstrated that both methods provided the desired calibration accuracy of 0.5 mm. The multipoint registration method is more robust, particularly for noisy 3D surface images. Both calibration methods have been used as our weekly QA tools for a 3D image-guided radiotherapy system.

  10. Structural Parameters Calibration for Binocular Stereo Vision Sensors Using a Double-Sphere Target.

    PubMed

    Wei, Zhenzhong; Zhao, Kai

    2016-01-01

    Structural parameter calibration for the binocular stereo vision sensor (BSVS) is an important guarantee for high-precision measurements. We propose a method to calibrate the structural parameters of BSVS based on a double-sphere target. The target, consisting of two identical spheres with a known fixed distance, is freely placed in different positions and orientations. Any three non-collinear sphere centres determine a spatial plane whose normal vector under the two camera-coordinate-frames is obtained by means of an intermediate parallel plane calculated by the image points of sphere centres and the depth-scale factors. Hence, the rotation matrix R is solved. The translation vector T is determined using a linear method derived from the epipolar geometry. Furthermore, R and T are refined by nonlinear optimization. We also provide theoretical analysis on the error propagation related to the positional deviation of the sphere image and an approach to mitigate its effect. Computer simulations are conducted to test the performance of the proposed method with respect to the image noise level, target placement times and the depth-scale factor. Experimental results on real data show that the accuracy of measurement is higher than 0.9‰, with a distance of 800 mm and a view field of 250 × 200 mm². PMID:27420063

  11. Automated stereo vision instrument tracking for intraoperative OCT guided anterior segment ophthalmic surgical maneuvers

    PubMed Central

    El-Haddad, Mohamed T.; Tao, Yuankai K.

    2015-01-01

    Microscope-integrated intraoperative OCT (iOCT) enables imaging of tissue cross-sections concurrent with ophthalmic surgical maneuvers. However, limited acquisition rates and complex three-dimensional visualization methods preclude real-time surgical guidance using iOCT. We present an automated stereo vision surgical instrument tracking system integrated with a prototype iOCT system. We demonstrate, for the first time, automatically tracked video-rate cross-sectional iOCT imaging of instrument-tissue interactions during ophthalmic surgical maneuvers. The iOCT scan-field is automatically centered on the surgical instrument tip, ensuring continuous visualization of instrument positions relative to the underlying tissue over a 2500 mm² field with sub-millimeter positional resolution and <1° angular resolution. Automated instrument tracking has the added advantage of providing feedback on surgical dynamics during precision tissue manipulations because it makes it possible to use only two cross-sectional iOCT images, aligned parallel and perpendicular to the surgical instrument, which also reduces both system complexity and data throughput requirements. Our current implementation is suitable for anterior segment surgery. Further system modifications are proposed for applications in posterior segment surgery. Finally, the instrument tracking system described is modular and system agnostic, making it compatible with different commercial and research OCT and surgical microscopy systems and surgical instrumentation. These advances address critical barriers to the development of iOCT-guided surgical maneuvers and may also be translatable to applications in microsurgery outside of ophthalmology. PMID:26309764

  12. Stereo-vision framework for autonomous vehicle guidance and collision avoidance

    NASA Astrophysics Data System (ADS)

    Scott, Douglas A.

    2003-08-01

    During a pre-programmed course to a particular destination, an autonomous vehicle may potentially encounter environments that are unknown at the time of operation. Some regions may contain objects or vehicles that were not anticipated during the mission-planning phase. Often user-intervention is not possible or desirable under these circumstances. Thus it is required for the onboard navigation system to automatically make short-term adjustments to the flight plan and to apply the necessary course corrections. A suitable path is visually navigated through the environment to reliably avoid obstacles without significant deviations from the original course. This paper describes a general low-cost stereo-vision sensor framework, for passively estimating the range-map between a forward-looking autonomous vehicle and its environment. Typical vehicles may be either unmanned ground or airborne vehicles. The range-map image describes a relative distance from the vehicle to the observed environment and contains information that could be used to compute a navigable flight plan, and also visual and geometric detail about the environment for other onboard processes or future missions. Aspects relating to information flow through the framework are discussed, along with issues such as robustness, implementation and other advantages and disadvantages of the framework. An outline of the physical structure of the system is presented and an overview of the algorithms and applications of the framework are given.
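
    The core output of such a framework, a metric range map from a rectified stereo pair, can be sketched generically (the Z = fB/d conversion plus a hypothetical obstacle threshold; this is an illustration, not the framework's actual code):

```python
import numpy as np

def disparity_to_range(disparity, f_px, baseline_m):
    """Metric range map from a rectified stereo disparity map: Z = f*B/d.
    Non-positive disparities (no match / point at infinity) map to inf."""
    d = np.asarray(disparity, float)
    return np.where(d > 0, f_px * baseline_m / np.maximum(d, 1e-9), np.inf)

def obstacle_mask(disparity, f_px, baseline_m, max_range_m):
    """Flag pixels closer than max_range_m as potential obstacles for the
    short-term course-correction logic."""
    return disparity_to_range(disparity, f_px, baseline_m) < max_range_m
```

    A navigation layer could then plan around the flagged regions while the rest of the range map feeds mapping or future-mission processes.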

  13. Comparative morphometry of facial surface models obtained from a stereo vision system in a healthy population

    NASA Astrophysics Data System (ADS)

    López, Leticia; Gastélum, Alfonso; Chan, Yuk Hin; Delmas, Patrice; Escorcia, Lilia; Márquez, Jorge

    2014-11-01

    Our goal is to obtain three-dimensional measurements of craniofacial morphology in a healthy population, using standard landmarks established by a physical-anthropology specialist and picked from computer reconstructions of each subject's face. To do this, we designed a multi-stereo vision system that will be used to create a database of human face surfaces from a healthy population, for eventual applications in medicine, forensic sciences and anthropology. The acquisition process consists of obtaining depth map information from three points of view; each depth map is obtained from a calibrated pair of cameras. The depth maps are used to build a complete, frontal, triangular-surface representation of the subject's face. The triangular surface is used to locate the landmarks, and the measurements are analyzed with a MATLAB script. The classification of the subjects was done with the aid of a specialist anthropologist who defined specific subject indices according to the lengths, areas, ratios, etc., of the different structures and the relationships among facial features. We studied a healthy population, and the indices from this population will be used to obtain representative averages that later help with the study and classification of possible pathologies.

  14. Field calibration of binocular stereo vision based on fast reconstruction of 3D control field

    NASA Astrophysics Data System (ADS)

    Zhang, Haijun; Liu, Changjie; Fu, Luhua; Guo, Yin

    2015-08-01

    Construction of high-speed railway in China has entered a period of rapid growth. Accurately and quickly obtaining the dynamic envelope curve of a high-speed vehicle is an important guarantee for safe driving. The measuring system is based on binocular stereo vision. Considering the difficulties of field calibration, such as environmental changes and time limits, we developed a field calibration method based on fast reconstruction of a three-dimensional control field. After rapid assembly of the pre-calibrated three-dimensional control field, whose coordinate accuracy is guaranteed by manufacturing accuracy and calibrated by V-STARS, the two cameras take a quick shot of it at the same time. The field calibration parameters are then solved by a method combining a linear solution with nonlinear optimization. Experimental results showed that the measurement accuracy can reach ±0.5 mm and, more importantly, that while guaranteeing accuracy, the speed of the calibration and the portability of the devices have been improved considerably.

  15. Three-dimensional infrared imaging method based on binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Sun, Junhua; Ma, Haining; Zeng, Debing

    2015-10-01

    The infrared imaging technique is characterized as high-precision and noncontact and provides the temperature information of the object, leading to its broad application in civil and military fields. Currently, the research on infrared thermography is mainly focused on two-dimensional images, lacking the information in depth orientation. To extend the range of application and provide spatial information, a three-dimensional (3-D) infrared imaging system based on binocular stereo vision is presented. The system is composed of two visible-light cameras, an infrared camera, and a digital projector. The proposed system fuses the metric information and the infrared information to acquire the 3-D surface temperature distribution by combining the 3-D reconstruction technique with infrared thermography. The registration of the metric information and the infrared image is accomplished according to the properties of three-view geometry. Experiments have been undertaken with a storage box, a rudder model, and a person's stretching arm, respectively, and the results demonstrated the good performance of the proposed method.

  16. Structural Parameters Calibration for Binocular Stereo Vision Sensors Using a Double-Sphere Target

    PubMed Central

    Wei, Zhenzhong; Zhao, Kai

    2016-01-01

    Structural parameter calibration for the binocular stereo vision sensor (BSVS) is an important guarantee for high-precision measurements. We propose a method to calibrate the structural parameters of BSVS based on a double-sphere target. The target, consisting of two identical spheres with a known fixed distance, is freely placed in different positions and orientations. Any three non-collinear sphere centres determine a spatial plane whose normal vector under the two camera-coordinate-frames is obtained by means of an intermediate parallel plane calculated by the image points of sphere centres and the depth-scale factors. Hence, the rotation matrix R is solved. The translation vector T is determined using a linear method derived from the epipolar geometry. Furthermore, R and T are refined by nonlinear optimization. We also provide theoretical analysis on the error propagation related to the positional deviation of the sphere image and an approach to mitigate its effect. Computer simulations are conducted to test the performance of the proposed method with respect to the image noise level, target placement times and the depth-scale factor. Experimental results on real data show that the accuracy of measurement is higher than 0.9‰, with a distance of 800 mm and a view field of 250 × 200 mm². PMID:27420063

  17. Automated stereo vision instrument tracking for intraoperative OCT guided anterior segment ophthalmic surgical maneuvers.

    PubMed

    El-Haddad, Mohamed T; Tao, Yuankai K

    2015-08-01

    Microscope-integrated intraoperative OCT (iOCT) enables imaging of tissue cross-sections concurrent with ophthalmic surgical maneuvers. However, limited acquisition rates and complex three-dimensional visualization methods preclude real-time surgical guidance using iOCT. We present an automated stereo vision surgical instrument tracking system integrated with a prototype iOCT system. We demonstrate, for the first time, automatically tracked video-rate cross-sectional iOCT imaging of instrument-tissue interactions during ophthalmic surgical maneuvers. The iOCT scan-field is automatically centered on the surgical instrument tip, ensuring continuous visualization of instrument positions relative to the underlying tissue over a 2500 mm² field with sub-millimeter positional resolution and <1° angular resolution. Automated instrument tracking has the added advantage of providing feedback on surgical dynamics during precision tissue manipulations because it makes it possible to use only two cross-sectional iOCT images, aligned parallel and perpendicular to the surgical instrument, which also reduces both system complexity and data throughput requirements. Our current implementation is suitable for anterior segment surgery. Further system modifications are proposed for applications in posterior segment surgery. Finally, the instrument tracking system described is modular and system agnostic, making it compatible with different commercial and research OCT and surgical microscopy systems and surgical instrumentation. These advances address critical barriers to the development of iOCT-guided surgical maneuvers and may also be translatable to applications in microsurgery outside of ophthalmology. PMID:26309764

  18. ANN implementation of stereo vision using a multi-layer feedback architecture

    SciTech Connect

    Mousavi, M.S.; Schalkoff, R.J.

    1994-08-01

    An Artificial Neural Network (ANN), consisting of three interacting neural modules, is developed for stereo vision. The first module locates sharp intensity changes in each of the images. The edge detection process is basically a bottom-up, one-to-one input-output mapping process with a network structure which is time-invariant. In the second module, a multilayered connectionist network is used to extract the features or primitives for disparity analysis (matching). A similarity measure is defined and computed for each pair of primitive matches and is passed to the third module. The third module solves the difficult correspondence problem by mapping it into a constraint satisfaction problem. Intra- and inter-scanline constraints are used in order to restrict possible feature matches. The inter-scanline constraints are implemented via interconnections of a three-dimensional neural network. The overall process is iterative. At the end of each network iteration, the output of the third constraint satisfaction module feeds back updated information on matching pairs, as well as their corresponding locations in the left and right images, to the input of the second module. This iterative process continues until the output of the third module converges to a stable state. Once the matching process is completed, the disparity can be calculated, and camera calibration parameters can be used to find the three-dimensional locations of object points. Results using this computational architecture are shown. 26 refs.

  19. An Automatic 3d Reconstruction Method Based on Multi-View Stereo Vision for the Mogao Grottoes

    NASA Astrophysics Data System (ADS)

    Xiong, J.; Zhong, S.; Zheng, L.

    2015-05-01

    This paper presents an automatic three-dimensional reconstruction method based on multi-view stereo vision for the Mogao Grottoes. 3D digitization techniques have been used in cultural heritage conservation and replication over the past decade, especially methods based on binocular stereo vision. However, mismatched points are inevitable in traditional binocular stereo matching due to repeated or similar features in the binocular images. In order to greatly reduce the probability of mismatching and improve the measurement precision, a portable four-camera photographic measurement system is used for 3D modelling of a scene. The four cameras of the measurement system form six binocular systems with baselines of different lengths to add extra matching constraints and offer multiple measurements. A matching error based on the epipolar constraint is introduced to remove mismatched points. Finally, an accurate point cloud can be generated by multi-image matching and sub-pixel interpolation. Delaunay triangulation and texture mapping are performed to obtain the 3D model of a scene. The method has been tested on 3D reconstruction of several scenes of the Mogao Grottoes, and good results verify its effectiveness.
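
    An epipolar matching error of the kind used to reject mismatches is typically computed as the distance of a point from its match's epipolar line; a generic sketch, assuming a known fundamental matrix F between a camera pair (the rectified-pair F used in the example is an idealization, not the authors' calibration):

```python
import numpy as np

def epipolar_error(F, x_left, x_right):
    """Pixel distance of a right-image point from the epipolar line of
    its left-image match: l = F @ x, err = |x'^T l| / ||(l0, l1)||.
    Candidate matches with a large error are discarded as mismatches."""
    x = np.array([x_left[0], x_left[1], 1.0])
    xp = np.array([x_right[0], x_right[1], 1.0])
    l = F @ x
    return abs(xp @ l) / np.hypot(l[0], l[1])
```

    With six camera pairs, a candidate correspondence can be required to pass this check in every pair that observes the point.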

  20. Vision by Man and Machine.

    ERIC Educational Resources Information Center

    Poggio, Tomaso

    1984-01-01

    Studies of stereo vision guide research on how animals see and how computers might accomplish this human activity. Discusses a sequence of algorithms to first extract information from visual images and then to calculate the depths of objects in the three-dimensional world, concentrating on stereopsis (stereo vision). (JN)

  1. Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system.

    PubMed

    Jia, Zhenyuan; Yang, Jinghao; Liu, Wei; Wang, Fuji; Liu, Yang; Wang, Lingli; Fan, Chaonan; Zhao, Kai

    2015-06-15

    High-precision calibration of binocular vision systems plays an important role in accurate dimensional measurements. In this paper, an improved camera calibration method is proposed. First, an accurate intrinsic parameters calibration method based on active vision with perpendicularity compensation is developed. Compared to the previous work, this method eliminates the effect of non-perpendicularity of the camera motion on calibration accuracy. The principal point, scale factors, and distortion factors are calculated independently in this method, thereby allowing the strong coupling of these parameters to be eliminated. Second, an accurate global optimization method with only 5 images is presented. The results of calibration experiments show that the accuracy of the calibration method can reach 99.91%. PMID:26193503

  2. Stereo-vision system for finger tracking in breast self-examination

    NASA Astrophysics Data System (ADS)

    Zeng, Jianchao; Wang, Yue J.; Freedman, Matthew T.; Mun, Seong K.

    1997-05-01

    minor changes in illumination. Neighbor search is employed to ensure real-time performance, and a three-finger pattern topology is always checked for extracted features to avoid any possible false features. After detecting the features in the images, 3D position parameters of the colored fingers are calculated using the stereo vision principle. In the experiments, a performance of 15 frames/second is obtained using an image size of 160 X 120 on an SGI Indy MIPS R4000 workstation. The system is robust and accurate, which confirms the performance and effectiveness of the proposed approach. The system can be used to quantify the search strategy of the palpation and its documentation. With real-time visual feedback, it can be used to train both patients and new physicians to improve their performance of palpation and thus improve the rate of breast tumor detection.

  3. A verification and errors analysis of the model for object positioning based on binocular stereo vision for airport surface surveillance

    NASA Astrophysics Data System (ADS)

    Wang, Huan-huan; Wang, Jian; Liu, Feng; Cao, Hai-juan; Wang, Xiang-jun

    2014-12-01

    A test environment is established to obtain experimental data for verifying the positioning model derived previously from the pinhole imaging model and the theory of binocular stereo vision measurement. The model requires that the optical axes of the two cameras meet at one point, which is defined as the origin of the world coordinate system, thus simplifying and optimizing the positioning model. The experimental data are processed, and tables and charts are given comparing the positions of objects measured with DGPS (measurement accuracy of 10 centimeters), taken as the reference, with those measured with the positioning model. Error sources of the visual measurement model are analyzed, and the effects of errors in camera and system parameters on the accuracy of the positioning model are probed, based on error transfer and synthesis rules. It is concluded that the measurement accuracy of surface surveillance based on binocular stereo vision is better than that of surface movement radars, ADS-B (Automatic Dependent Surveillance-Broadcast) and MLAT (Multilateration).

  4. Stereo and motion parallax cues in human 3D vision: can they vanish without a trace?

    PubMed

    Rauschecker, Andreas M; Solomon, Samuel G; Glennerster, Andrew

    2006-01-01

    In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Because of feedback, observers were able to adjust responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues. PMID:17209749
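
    For context, the textbook maximum-likelihood cue-combination model against which such remapping results are usually interpreted weights each cue by its reliability (inverse variance); a minimal sketch of that standard model, not the authors' analysis code:

```python
def combine_cues(estimates, variances):
    """Maximum-likelihood combination of independent cues: each cue's
    weight is its inverse variance, normalized to sum to 1. The combined
    estimate is the reliability-weighted mean."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, estimates)) / total
```

    Under this model, feedback that down-weights one cue should pull the combined percept toward the other; the paradoxical shifts reported above are what argue that subjects do not simply re-weight independently accessible cues.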

  5. Research on dimensional measurement method of mechanical parts based on stereo vision

    NASA Astrophysics Data System (ADS)

    Zhou, Zhuoyun; Zhang, Xuewu; Shen, Haodong; Zhang, Zhuo; Fan, Xinnan

    2015-10-01

    This paper investigates the key and difficult issues in stereo measurement, including camera calibration, feature extraction, stereo matching and depth computation, and then puts forward a novel matching method combining seed region growing and SIFT feature matching. It first uses SIFT characteristics as the matching criterion for feature point matching, and then takes the feature points as seed points for region growing to get better depth information. Experiments are conducted to validate the efficiency of the proposed method using standard matching graphs, and the method is then applied to dimensional measurement of mechanical parts. The results show that the measurement error is less than 0.5 mm for medium-sized mechanical parts, which can meet the demands of precision measurement.
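
    The seed region growing step can be sketched generically (4-connected growth under an intensity tolerance around the seed's value; the image and tolerance are hypothetical, and this is not the authors' implementation):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Breadth-first 4-connected region growing from a seed pixel:
    a neighbour joins the region while its value stays within tol of
    the seed's value. Returns a boolean mask of the grown region."""
    img = np.asarray(image, float)
    h, w = img.shape
    ref = img[seed]
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] \
               and abs(img[rr, cc] - ref) <= tol:
                mask[rr, cc] = True
                q.append((rr, cc))
    return mask
```

    In the paper's scheme the seeds would be SIFT-matched feature points, so each grown region inherits a reliable disparity anchor.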

  6. Stereo vision-based depth of field rendering on a mobile device

    NASA Astrophysics Data System (ADS)

    Wang, Qiaosong; Yu, Zhan; Rasmussen, Christopher; Yu, Jingyi

    2014-03-01

    The depth of field (DoF) effect is a useful tool in photography and cinematography because of its aesthetic value. However, capturing and displaying a dynamic DoF effect was until recently unique to expensive and bulky movie cameras. A computational approach to generate realistic DoF effects for mobile devices such as tablets is proposed. We first calibrate the rear-facing stereo cameras and rectify the stereo image pairs through the FCam API, then generate a low-resolution disparity map using graph-cuts stereo matching and subsequently upsample it via joint bilateral upsampling. Next, we generate a synthetic light field by warping the raw color image to nearby viewpoints, according to the corresponding values in the upsampled high-resolution disparity map. Finally, we render the dynamic DoF effect on the tablet screen with light field rendering. The user can easily capture and generate desired DoF effects with arbitrary aperture sizes or focal depths using the tablet only, with no additional hardware or software required. The system has been examined in a variety of environments with satisfactory results, according to subjective evaluation tests.
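    The disparity-dependent blur at the heart of such a pipeline can be illustrated with a toy renderer. A per-pixel box filter stands in for the paper's synthetic light-field rendering, and the linear mapping from disparity offset to blur radius is an assumption made for the example.

```python
import numpy as np

def render_dof(image, disparity, focus_d, max_radius=3):
    """Toy depth-of-field: blur each pixel with a box kernel whose radius
    grows with |disparity - focus_d|. Pixels at the in-focus disparity
    are copied unchanged; far-from-focus pixels get the largest kernel."""
    h, w = image.shape
    radius = np.clip(np.abs(disparity - focus_d), 0, max_radius).astype(int)
    padded = np.pad(np.asarray(image, float), max_radius, mode='edge')
    out = np.empty((h, w))
    for r in range(h):
        for c in range(w):
            k = radius[r, c]
            rr, cc = r + max_radius, c + max_radius  # offset into padded image
            out[r, c] = padded[rr - k:rr + k + 1, cc - k:cc + k + 1].mean()
    return out
```

    A real renderer would instead accumulate warped views of the light field, but the depth-varying kernel size is the same underlying idea.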

  7. Comparison on testability of visual acuity, stereo acuity and colour vision tests between children with learning disabilities and children without learning disabilities in government primary schools

    PubMed Central

    Abu Bakar, Nurul Farhana; Chen, Ai-Hong

    2014-01-01

    Context: Children with learning disabilities might have difficulties communicating effectively and giving reliable responses as required in various visual function testing procedures. Aims: The purpose of this study was to compare the testability of visual acuity using the modified Early Treatment Diabetic Retinopathy Study (ETDRS) and Cambridge Crowding Cards, stereo acuity using the Lang Stereo test II and Butterfly stereo tests, and colour perception using the Colour Vision Test Made Easy (CVTME) and Ishihara's Test for Colour Deficiency (Ishihara Test) between children in mainstream classes and children with learning disabilities in special education classes in government primary schools. Materials and Methods: A total of 100 primary school children (50 children from mainstream classes and 50 children from special education classes) matched in age were recruited in this cross-sectional comparative study. Testability was determined by the percentage of children who were able to give reliable responses as required by the respective tests. ‘Unable to test’ was defined as an inappropriate response or uncooperativeness despite the best efforts of the screener. Results: The testability of the modified ETDRS, Butterfly stereo test and Ishihara test was found to be lower among children in special education classes (P < 0.001), but not that of the Cambridge Crowding Cards, Lang Stereo test II and CVTME. Conclusion: Non-verbal or “matching” approaches were found to be superior for testing visual functions in children with learning disabilities. Modifications of vision testing procedures are essential for children with learning disabilities. PMID:24008790

  8. A Skeleton-Based 3D Shape Reconstruction of Free-Form Objects with Stereo Vision

    NASA Astrophysics Data System (ADS)

    Saini, Deepika; Kumar, Sanjeev

    2015-12-01

    In this paper, an efficient approach is proposed for recovering the 3D shape of a free-form object from its arbitrary pair of stereo images. In particular, the reconstruction problem is treated as the reconstruction of the skeleton and the external boundary of the object. The reconstructed skeleton is termed as the line-like representation or curve-skeleton of the 3D object. The proposed solution for object reconstruction is based on this evolved curve-skeleton. It is used as a seed for recovering shape of the 3D object, and the extracted boundary is used for terminating the growing process of the object. NURBS-skeleton is used to extract the skeleton of both views. Affine invariant property of the convex hulls is used to establish the correspondence between the skeletons and boundaries in the stereo images. In the growing process, a distance field is defined for each skeleton point as the smallest distance from that point to the boundary of the object. A sphere centered at a skeleton point of radius equal to the minimum distance to the boundary is tangential to the boundary. Filling in the spheres centered at each skeleton point reconstructs the object. Several results are presented in order to check the applicability and validity of the proposed algorithm.
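    The growing process described above can be sketched in 2-D, with disks standing in for the paper's 3-D spheres. The skeleton and boundary point sets are assumed to be given (by the NURBS-skeleton extraction and stereo correspondence steps); everything else here is illustrative.

```python
import numpy as np

def fill_from_skeleton(shape, skeleton, boundary):
    """2-D sketch of the sphere-filling step: each skeleton point gets a
    radius equal to its smallest distance to the boundary (the distance
    field), and the union of the resulting disks fills the object."""
    grid = np.zeros(shape, dtype=bool)
    rows, cols = np.indices(shape)
    b = np.asarray(boundary, dtype=float)        # (N, 2) boundary points
    for r, c in skeleton:
        radius = np.hypot(b[:, 0] - r, b[:, 1] - c).min()
        grid |= (rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2
    return grid
```

    Because each disk is tangential to the boundary by construction, the union stays inside the object while the boundary terminates the growth, as in the paper.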

  9. People Detection by a Mobile Robot Using Stereo Vision in Dynamic Indoor Environments

    NASA Astrophysics Data System (ADS)

    Méndez-Polanco, José Alberto; Muñoz-Meléndez, Angélica; Morales, Eduardo F.

    People detection and tracking is a key issue for social robot design and effective human robot interaction. This paper addresses the problem of detecting people with a mobile robot using a stereo camera. People detection using mobile robots is a difficult task because in real world scenarios it is common to find: unpredictable motion of people, dynamic environments, and different degrees of human body occlusion. Additionally, we cannot expect people to cooperate with the robot to perform its task. In our people detection method, first, an object segmentation method that uses the distance information provided by a stereo camera is used to separate people from the background. The segmentation method proposed in this work takes into account human body proportions to segment people and provides a first estimation of people location. After segmentation, an adaptive contour people model based on people distance to the robot is used to calculate a probability of detecting people. Finally, people are detected merging the probabilities of the contour people model and by evaluating evidence over time by applying a Bayesian scheme. We present experiments on detection of standing and sitting people, as well as people in frontal and side view with a mobile robot in real world scenarios.
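    The final fusion step, evaluating evidence over time with a Bayesian scheme, can be sketched as a recursive update of the belief that a person is present. The symmetric likelihood model below (probability 1 - p under "no person") is an assumption for illustration, not the paper's exact formulation.

```python
def update_belief(prior, p_detect):
    """One recursive Bayes step for the hypothesis 'a person is present'.
    p_detect is the current frame's detection probability from the
    contour model; its complement is taken as the likelihood of the
    same observation when no person is present."""
    num = p_detect * prior
    den = num + (1.0 - p_detect) * (1.0 - prior)
    return num / den if den > 0 else prior

belief = 0.5                               # uninformative prior
for p in (0.7, 0.8, 0.75):                 # per-frame contour-model outputs
    belief = update_belief(belief, p)      # consistent evidence raises belief
```

    Moderately confident but consistent frame-level detections accumulate into a strong belief, while a single contradictory frame pulls the belief back down, which is the point of integrating evidence over time.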

  10. Interactive stereo games to improve vision in children with amblyopia using dichoptic stimulation

    NASA Astrophysics Data System (ADS)

    Herbison, Nicola; Ash, Isabel M.; MacKeith, Daisy; Vivian, Anthony; Purdy, Jonathan H.; Fakis, Apostolos; Cobb, Sue V.; Hepburn, Trish; Eastgate, Richard M.; Gregson, Richard M.; Foss, Alexander J. E.

    2015-03-01

    Amblyopia is a common condition affecting 2% of all children, and traditional treatment consists of either wearing a patch or penalisation. We have developed a treatment using stereo technology, not to provide a 3D image but to allow dichoptic stimulation. This involves presenting an image with the same background to both eyes but with features of interest removed from the image presented to the normal eye, with the aim of preferentially stimulating visual development in the amblyopic, or lazy, eye. Our system, called I-BiT, can use either a game or a video (DVD) source as input. Pilot studies show that this treatment is effective with short treatment times, and it has proceeded to a randomised controlled clinical trial. The early indications are that the treatment has a high degree of acceptability and corresponding good compliance.
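    The dichoptic presentation described above can be sketched with a hypothetical helper: both eyes share the background, but the feature of interest is blanked in the fellow-eye image. The rectangular region of interest and the blank value are simplifications for illustration, not details of the I-BiT system.

```python
import numpy as np

def dichoptic_pair(scene, roi, background=0):
    """Build a dichoptic stimulus pair: the amblyopic eye sees the full
    scene; the fellow eye sees the same scene with the feature inside
    roi = (r0, r1, c0, c1) replaced by the background value."""
    amblyopic = np.array(scene, copy=True)
    fellow = amblyopic.copy()
    r0, r1, c0, c1 = roi
    fellow[r0:r1, c0:c1] = background
    return amblyopic, fellow
```

    On a stereo display, each image of the pair is routed to one eye, so only the amblyopic eye receives the feature while both eyes share the fusible background.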

  11. Vibration studies of simply supported beam based on binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Liang, Chaojia; Deng, Huaxia; Zhang, Jin; Yu, Liandong

    2015-02-01

    Collecting experimental data with contact sensors is still the prevalent approach: to analyse the vibration modes of a structure, many sensors may be required on the structure to ensure complete data. If the same data can be obtained by machine vision, however, the mass-loading effect of the sensors on the beam is removed. This paper presents a non-contact method for measuring the vibration modes of a simply supported beam. The basic theory of the simply supported beam and of machine vision measurement is introduced. Unlike the traditional approach, which relies on a large number of sensors, two cameras record the beam while it is excited by a shaker. After the recorded images are processed, including calibration and registration, the vision-based results are compared with mode shapes reconstructed from sensor data by the traditional modal test method, and the errors introduced by the reconstruction process are analysed. The first-order mode shapes obtained by the two methods are also compared with finite element results.
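    The vision-based measurement idea can be illustrated with a minimal sketch: track the intensity-weighted centroid of a bright marker on the beam in each frame. This is an assumption-laden stand-in; a real setup would add camera calibration, registration and stereo triangulation as described above.

```python
import numpy as np

def marker_centroids(frames, rel_threshold=0.5):
    """Return the (row, col) intensity-weighted centroid of the brightest
    blob in each frame. Tracking this centroid over the recorded sequence
    gives the marker's image-plane deflection history."""
    centroids = []
    for f in frames:
        f = np.asarray(f, float)
        mask = f > rel_threshold * f.max()       # keep the bright marker
        ys, xs = np.nonzero(mask)
        w = f[ys, xs]
        centroids.append((np.average(ys, weights=w),
                          np.average(xs, weights=w)))
    return centroids
```

    With markers at several positions along the beam, the per-marker displacement time series play the role of the sensor channels in a traditional modal test.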

  12. Instant Stereoscopic Tomography of Active Regions with STEREO/EUVI

    NASA Astrophysics Data System (ADS)

    Aschwanden, M. J.; Wuelser, J.; Nitta, N.; Lemen, J.; Sandman, A.

    2008-12-01

    We develop a novel 3D reconstruction method of the coronal plasma of an active region by combining stereoscopic triangulation of loops with density and temperature modeling of coronal loops with a filling factor equivalent to tomographic volume rendering. Because this method requires only a stereoscopic image pair in multiple temperature filters, which are sampled within ~1 minute with the recent STEREO/EUVI instrument, it is about 4 orders of magnitude faster than conventional solar-rotation-based tomography. We reconstruct the 3D density and temperature distribution of active region NOAA 10955 by stereoscopic triangulation of 70 loops, which are used as a skeleton for a 3D field interpolation of some 7000 loop components, leading to a 3D model that reproduces the observed fluxes in each stereoscopic image pair with an accuracy of a few percent (of the average flux) in each pixel. With the stereoscopic tomography we also infer a differential emission measure (DEM) distribution over the entire temperature range of T~0.01-10 MK, with predictions for the transition region and hotter corona in soft X-rays. The tomographic 3D model also provides large statistics of physical parameters. We find that the EUV loops with apex temperatures of T = 1-3 MK tend to be super-hydrostatic, while hotter loops with T = 4-7 MK are near-hydrostatic. The new 3D reconstruction model is fully independent of any magnetic field data and is promising for future tests of theoretical magnetic field models and coronal heating models.
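    The stereoscopic triangulation underlying the method can be illustrated with the generic linear (DLT) two-view triangulation of a single point. This is the textbook form, not the authors' specific EUVI geometry; the projection matrices in the test are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are the (u, v) image
    coordinates of the same point in the two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # the homogeneous 3-D point spans the null space of A
    _, _, vh = np.linalg.svd(A)
    X = vh[-1]
    return X[:3] / X[3]
```

    Triangulating many points sampled along a traced loop yields the 3-D loop skeleton that seeds the tomographic model.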

  13. Real-time stereo generation for surgical vision during minimal invasive robotic surgery

    NASA Astrophysics Data System (ADS)

    Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod

    2016-03-01

    This paper proposes a framework for 3D surgical vision during minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the colour profiles of the two images. Polarized projection of the two interlaced images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at full HD resolution.
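    The interlacing step for polarized projection can be sketched as follows; which eye is assigned to which row parity is display-specific and assumed here.

```python
import numpy as np

def interlace_rows(left, right):
    """Row-interlace a rectified stereo pair for a passive polarized
    display: even rows from the left-eye image, odd rows from the
    right-eye image."""
    assert left.shape == right.shape, "pair must be rectified to equal size"
    out = np.array(left, copy=True)
    out[1::2] = right[1::2]
    return out
```

    A line-polarized screen then routes alternating rows to the correct eye through the viewer's polarized glasses.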

  14. Quality inspection guided laser processing of irregular shape objects by stereo vision measurement: application in badminton shuttle manufacturing

    NASA Astrophysics Data System (ADS)

    Qi, Li; Wang, Shun; Zhang, Yixin; Sun, Yingying; Zhang, Xuping

    2015-11-01

    The quality inspection process is usually carried out after first processing of the raw materials such as cutting and milling. This is because the parts of the materials to be used are unidentified until they have been trimmed. If the quality of the material is assessed before the laser process, then the energy and efforts wasted on defected materials can be saved. We proposed a new production scheme that can achieve quantitative quality inspection prior to primitive laser cutting by means of three-dimensional (3-D) vision measurement. First, the 3-D model of the object is reconstructed by the stereo cameras, from which the spatial cutting path is derived. Second, collaborating with another rear camera, the 3-D cutting path is reprojected to both the frontal and rear views of the object and thus generates the regions-of-interest (ROIs) for surface defect analysis. An accurate visual guided laser process and reprojection-based ROI segmentation are enabled by a global-optimization-based trinocular calibration method. The prototype system was built and tested with the processing of raw duck feathers for high-quality badminton shuttle manufacture. Incorporating with a two-dimensional wavelet-decomposition-based defect analysis algorithm, both the geometrical and appearance features of the raw feathers are quantified before they are cut into small patches, which result in fully automatic feather cutting and sorting.

  15. Vision Loss With Sexual Activity.

    PubMed

    Lee, Michele D; Odel, Jeffrey G; Rudich, Danielle S; Ritch, Robert

    2016-01-01

    A 51-year-old white man presented with multiple episodes of transient painless unilateral vision loss precipitated by sexual intercourse. Examination was significant for closed angles bilaterally. His visual symptoms completely resolved following treatment with laser peripheral iridotomies. PMID:25265010

  16. Stream Interactions in STEREO and THEMIS Data and Resulting Geomagnetic Activity

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; St Cyr, O. C.; Sibeck, D. G.; Zhang, H.; Jian, L.; Russell, C. T.; Luhmann, J. G.

    2009-12-01

    During this unusual solar minimum the decrease in solar activity has resulted in less geomagnetic activity. The observed activity, which ultimately arises from changes in the solar wind, has been from stream interaction regions (SIRs), shocks, and a few interplanetary coronal mass ejections (ICMEs). Stream interactions and shocks are identified in STEREO PLASTIC and ACE data and CMEs are identified in STEREO SECCHI. These events are studied in THEMIS data when the spacecraft are in dayside configuration. The propagation of these structures to the magnetopause, the resulting magnetospheric response, and any storm and substorm activity is discussed.

  17. Hardware Implementation of a Spline-Based Genetic Algorithm for Embedded Stereo Vision Sensor Providing Real-Time Visual Guidance to the Visually Impaired

    NASA Astrophysics Data System (ADS)

    Lee, Dah-Jye; Anderson, Jonathan D.; Archibald, James K.

    2008-12-01

    Many image and signal processing techniques have been applied to medical and health care applications in recent years. In this paper, we present a robust signal processing approach that can be used to solve the correspondence problem for an embedded stereo vision sensor to provide real-time visual guidance to the visually impaired. This approach is based on our new one-dimensional (1D) spline-based genetic algorithm to match signals. The algorithm processes image data lines as 1D signals to generate a dense disparity map, from which 3D information can be extracted. With recent advances in electronics technology, this 1D signal matching technique can be implemented and executed in parallel in hardware such as field-programmable gate arrays (FPGAs) to provide real-time feedback about the environment to the user. In order to complement (not replace) traditional aids for the visually impaired such as canes and Seeing Eye dogs, vision systems that provide guidance to the visually impaired must be affordable, easy to use, compact, and free from attributes that are awkward or embarrassing to the user. "Seeing Eye Glasses," an embedded stereo vision system utilizing our new algorithm, meets all these requirements.
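    Treating image lines as 1-D signals can be illustrated with a brute-force windowed sum-of-absolute-differences (SAD) search per scanline. This simple stand-in replaces the paper's spline-based genetic algorithm; the window size and disparity range are invented for the example.

```python
import numpy as np

def scanline_disparity(left_row, right_row, max_disp=8, win=2):
    """Per-scanline 1-D matching: for every pixel of the left row, search
    the disparity that minimises the windowed SAD against the right row.
    Repeating this for every image line yields a dense disparity map."""
    L = np.pad(np.asarray(left_row, float), win, mode='edge')
    R = np.pad(np.asarray(right_row, float), win, mode='edge')
    n = len(left_row)
    disp = np.zeros(n, dtype=int)
    for c in range(n):
        best_cost, best_d = np.inf, 0
        for d in range(min(max_disp, c) + 1):     # match lies to the left
            cost = np.abs(L[c:c + 2 * win + 1]
                          - R[c - d:c - d + 2 * win + 1]).sum()
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[c] = best_d
    return disp
```

    The per-pixel searches are independent, which is exactly what makes line-wise matching attractive for parallel FPGA implementations.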

  18. Generic motion platform for active vision

    NASA Astrophysics Data System (ADS)

    Weiman, Carl F. R.; Vincze, Markus

    1996-10-01

    The term 'active vision' was first used by Bajcsy at a NATO workshop in 1982 to describe an emerging field of robot vision which departed sharply from traditional paradigms of image understanding and machine vision. The new approach embeds a moving camera platform as an in-the-loop component of robotic navigation or hand-eye coordination. Visually servoed steering of the focus of attention supersedes the traditional functions of recognition and gauging. Custom active vision platforms soon proliferated in research laboratories in Europe and North America. In 1990 the National Science Foundation funded the design of a common platform to promote cooperation and reduce cost in active vision research. This paper describes the resulting platform. The design was driven by payload requirements for binocular motorized C-mount lenses on a platform whose performance and articulation emulate those of the human eye-head system. The result was a 4-DOF mechanism driven by servo-controlled DC brush motors. A crossbeam supports two independent worm-gear driven camera vergence mounts at speeds up to 1,000 degrees per second over a range of +/- 90 degrees from dead ahead. This crossbeam is supported by a pan-tilt mount whose horizontal axis intersects the vergence axes for translation-free camera rotation about these axes at speeds up to 500 degrees per second.

  19. Coevolution of active vision and feature selection.

    PubMed

    Floreano, Dario; Kato, Toshifumi; Marocco, Davide; Sauser, Eric

    2004-03-01

    We show that complex visual tasks, such as position- and size-invariant shape recognition and navigation in the environment, can be tackled with simple architectures generated by a coevolutionary process of active vision and feature selection. Behavioral machines equipped with primitive vision systems and direct pathways between visual and motor neurons are evolved while they freely interact with their environments. We describe the application of this methodology in three sets of experiments, namely, shape discrimination, car driving, and robot navigation. We show that these systems develop sensitivity to a number of oriented, retinotopic visual features (edges, corners, height) and a behavioral repertoire to locate, bring, and keep these features in sensitive regions of the vision system, resembling strategies observed in simple insects. PMID:15052484

  20. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    PubMed

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is first designed and analyzed for realizing a 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the positions of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so that errors exist between the actual position and the calculated 3D position of the end-effector. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCD cameras, serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to
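    The Denavit-Hartenberg formulation mentioned above composes one homogeneous transform per link. A generic sketch follows; the link parameters used in the example are illustrative, not the actual robot's.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform: rotate theta about z,
    translate d along z, translate a along x, rotate alpha about x."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.,       sa,       ca,      d],
        [0.,       0.,       0.,     1.],
    ])

def forward_kinematics(dh_rows):
    """Chain the per-link transforms; returns the end-effector pose as a
    4x4 homogeneous matrix."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T
```

    For a two-link planar arm with unit link lengths and the first joint at 90 degrees, the end-effector lands at (0, 2, 0), which is a convenient sanity check on the convention.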

  1. Identification of active faults in enlarged stereo models of Skylab S-190B photographs

    NASA Technical Reports Server (NTRS)

    Merifield, P. M.

    1983-01-01

    Most of the physiographic indicators of recent movement known to be present along the Indio Hills segment of the San Andreas fault zone can be identified on enlarged Skylab S-190B stereo photographs. These include offset streams, beheaded streams, offset fans, shutter ridges, linear valleys, scarps and vegetation anomalies. Where physiographic indicators of recent movement are present, the S-190B system affords the necessary resolution and stereoscopy for distinguishing active from inactive faults.

  2. Constraint-based stereo matching

    NASA Technical Reports Server (NTRS)

    Kuan, D. T.

    1987-01-01

    The major difficulty in stereo vision is the correspondence problem that requires matching features in two stereo images. Researchers describe a constraint-based stereo matching technique using local geometric constraints among edge segments to limit the search space and to resolve matching ambiguity. Edge segments are used as image features for stereo matching. Epipolar constraint and individual edge properties are used to determine possible initial matches between edge segments in a stereo image pair. Local edge geometric attributes such as continuity, junction structure, and edge neighborhood relations are used as constraints to guide the stereo matching process. The result is a locally consistent set of edge segment correspondences between stereo images. These locally consistent matches are used to generate higher-level hypotheses on extended edge segments and junctions to form more global contexts to achieve global consistency.
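    The first stage described above, combining the epipolar constraint with individual edge properties, can be sketched for rectified images, where the epipolar constraint reduces to "same scanline". The (row, col, orientation) edge tuples and both tolerances below are assumptions made for illustration.

```python
def epipolar_candidates(left_edges, right_edges, row_tol=1.0, orient_tol=15.0):
    """Initial match hypotheses for edge points in a rectified stereo pair.
    For each left edge, keep only right edges on (nearly) the same
    scanline, with similar orientation (degrees) and non-negative
    disparity. Later stages would prune these with local geometric
    constraints such as continuity and junction structure."""
    matches = {}
    for i, (rl, cl, ol) in enumerate(left_edges):
        matches[i] = [
            j for j, (rr, cr, orr) in enumerate(right_edges)
            if abs(rl - rr) <= row_tol        # epipolar constraint
            and abs(ol - orr) <= orient_tol   # similar edge orientation
            and cr <= cl                      # non-negative disparity
        ]
    return matches
```

    The point of this stage is only to shrink the search space; resolving the remaining ambiguity is left to the local consistency constraints the abstract describes.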

  3. Active vision in satellite scene analysis

    NASA Technical Reports Server (NTRS)

    Naillon, Martine

    1994-01-01

    In earth observation or planetary exploration it is necessary to have more and more autonomous systems, able to adapt to unpredictable situations. This imposes the use, in artificial systems, of new concepts in cognition, based on the fact that perception should not be separated from recognition and decision making. This means that low-level signal processing (the perception level) should interact with symbolic and high-level processing (the decision level). This paper describes the new concept of active vision, implemented in Distributed Artificial Intelligence by Dassault Aviation following a 'structuralist' principle. An application to spatial image interpretation is given, oriented toward flexible robotics.

  4. Robust active binocular vision through intrinsically motivated learning

    PubMed Central

    Lonini, Luca; Forestier, Sébastien; Teulière, Céline; Zhao, Yu; Shi, Bertram E.; Triesch, Jochen

    2013-01-01

    The efficient coding hypothesis posits that sensory systems of animals strive to encode sensory signals efficiently by taking into account the redundancies in them. This principle has been very successful in explaining response properties of visual sensory neurons as adaptations to the statistics of natural images. Recently, we have begun to extend the efficient coding hypothesis to active perception through a form of intrinsically motivated learning: a sensory model learns an efficient code for the sensory signals while a reinforcement learner generates movements of the sense organs to improve the encoding of the signals. To this end, it receives an intrinsically generated reinforcement signal indicating how well the sensory model encodes the data. This approach has been tested in the context of binocular vision, leading to the autonomous development of disparity tuning and vergence control. Here we systematically investigate the robustness of the new approach in the context of a binocular vision system implemented on a robot. Robustness is an important aspect that reflects the ability of the system to deal with unmodeled disturbances or events, such as insults to the system that displace the stereo cameras. To demonstrate the robustness of our method and its ability to self-calibrate, we introduce various perturbations and test if and how the system recovers from them. We find that (1) the system can fully recover from a perturbation that can be compensated through the system's motor degrees of freedom, (2) performance degrades gracefully if the system cannot use its motor degrees of freedom to compensate for the perturbation, and (3) recovery from a perturbation is improved if both the sensory encoding and the behavior policy can adapt to the perturbation. Overall, this work demonstrates that our intrinsically motivated learning approach for efficient coding in active perception gives rise to a self-calibrating perceptual system of high robustness.
PMID:24223552

  6. Stream interactions and CMEs in STEREO and THEMIS data and resulting geomagnetic activity

    NASA Astrophysics Data System (ADS)

    Mays, Leila; St. Cyr, Chris; Sibeck, David

    During this solar minimum the decrease in solar activity has resulted in less geomagnetic activity. The observed activity, which ultimately arises from changes in the solar wind, has been from stream interaction regions (SIRs), shocks, and some interplanetary coronal mass ejections (ICMEs). A statistical study of stream interactions and CME events from January 2007 to December 2009 which result in storm and substorm activity is conducted. Stream interactions and shocks are identified in STEREO PLASTIC, ACE, and WIND data and CMEs are identified in the STEREO SECCHI coronagraphs. CME evolution in the lower corona and properties such as acceleration, speed and width are determined along with in-situ plasma data for ICMEs. The propagation of these structures to the magnetopause is studied using THEMIS data when the spacecraft are in dayside configuration. Aspects include the timing to the magnetopause boundary, magnetopause motion, magnetosheath properties, and the strength and duration of geomagnetic activity. The interplanetary propagation of CME events that were predicted to be Earth-directed but did not produce geomagnetic activity are also considered.

  7. Improving realtime predictions of magnetospheric activities using STEREO Space Weather Beacon

    NASA Astrophysics Data System (ADS)

    Bala, R.; Reiff, P. H.

    2011-12-01

    The Rice neural network models of the geomagnetic activity indices Kp, Dst and AE (available from \url{http://mms.rice.edu/realtime/forecast.html}), driven by ACE solar wind data, have been running in near-realtime mode to provide short-term predictions of magnetospheric activity; subscribers to our ``spacalrt'' system receive email alerts and notices of space weather based on key discriminator levels. Active structures on the Sun that are likely to erupt, producing solar flares and/or coronal mass ejections (CMEs), are now well imaged by instruments aboard STEREO, which also provides multipoint, realtime and continuous measurements of the solar wind, interplanetary magnetic field and solar energetic particles through its Space Weather Beacon instruments IMPACT and PLASTIC. The spacecraft lagging Earth (STEREO-B), being ahead in the Parker spiral, is well suited to provide longer lead times for predicting common measures of geoeffectiveness arising from solar wind-magnetosphere interactions, such as the Kp, Dst and AE indices. As our models are constantly evolving, we aim to drive them with these advanced instruments to obtain longer lead times. This paper also investigates the prediction of CME-driven storms.

  8. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    NASA Astrophysics Data System (ADS)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building the fine 3D model from outdoor to indoor is becoming a necessity for protecting the cultural tourism resources. However, the existing 3D modelling technologies mainly focus on outdoor areas. Actually, a 3D model should contain detailed descriptions of both its appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which can provide a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes of an object or a scene, as well as physical property information such as the materials and textures. On the basis of the information, 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture existing in two typical cultural tourism zones, that is, Tibetan and Qiang ethnic minority villages in Sichuan Jiuzhaigou Scenic Area and Tujia ethnic minority villages in Hubei Shennongjia Nature Reserve, providing a new method and platform for protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  9. Deep vision: an in-trawl stereo camera makes a step forward in monitoring the pelagic community.

    PubMed

    Underwood, Melanie J; Rosen, Shale; Engås, Arill; Eriksen, Elena

    2014-01-01

    Ecosystem surveys are carried out annually in the Barents Sea by Russia and Norway to monitor the spatial distribution of ecosystem components and to study population dynamics. One component of the survey is mapping the upper pelagic zone using a trawl towed at several depths. However, the current technique with a single codend does not provide the fine-scale spatial data needed to directly study species overlaps. An in-trawl camera system, Deep Vision, was mounted in front of the codend in order to acquire continuous images of all organisms passing. It was possible to identify and quantify most young-of-the-year fish (e.g. Gadus morhua, Boreogadus saida and Reinhardtius hippoglossoides) and zooplankton, including Ctenophora, which are usually damaged in the codend. The system showed potential for measuring the length of small organisms and also recorded the vertical and horizontal positions where individuals were imaged. Young-of-the-year fish were difficult to identify when passing the camera at maximum range and to quantify during high densities. In addition, a large number of fish with damaged opercula were observed passing the Deep Vision camera during heaving, suggesting individuals had become entangled in meshes farther forward in the trawl. This indicates that unknown numbers of fish are probably lost in forward sections of the trawl and that the heaving procedure may influence the number of fish entering the codend, with implications for abundance indices and understanding population dynamics. This study suggests modifications to the Deep Vision and the trawl to increase our understanding of the population dynamics. PMID:25393121

  10. Deep Vision: An In-Trawl Stereo Camera Makes a Step Forward in Monitoring the Pelagic Community

    PubMed Central

    Underwood, Melanie J.; Rosen, Shale; Engås, Arill; Eriksen, Elena

    2014-01-01

    Ecosystem surveys are carried out annually in the Barents Sea by Russia and Norway to monitor the spatial distribution of ecosystem components and to study population dynamics. One component of the survey is mapping the upper pelagic zone using a trawl towed at several depths. However, the current technique with a single codend does not provide the fine-scale spatial data needed to directly study species overlaps. An in-trawl camera system, Deep Vision, was mounted in front of the codend in order to acquire continuous images of all organisms passing. It was possible to identify and quantify most young-of-the-year fish (e.g. Gadus morhua, Boreogadus saida and Reinhardtius hippoglossoides) and zooplankton, including Ctenophora, which are usually damaged in the codend. The system showed potential for measuring the length of small organisms and also recorded the vertical and horizontal positions where individuals were imaged. Young-of-the-year fish were difficult to identify when passing the camera at maximum range and to quantify during high densities. In addition, a large number of fish with damaged opercula were observed passing the Deep Vision camera during heaving, suggesting individuals had become entangled in meshes farther forward in the trawl. This indicates that unknown numbers of fish are probably lost in forward sections of the trawl and that the heaving procedure may influence the number of fish entering the codend, with implications for abundance indices and understanding population dynamics. This study suggests modifications to the Deep Vision and the trawl to increase our understanding of the population dynamics. PMID:25393121

  11. DECONSTRUCTING ACTIVE REGION AR10961 USING STEREO, HINODE, TRACE, AND SOHO

    SciTech Connect

    Noglik, Jane B.; Walsh, Robert W.; Marsh, M. S.; Maclean, Rhona C.

    2009-10-01

    Active region 10961 was observed over a five-day period (2007 July 2-6) by instrumentation on-board STEREO, Hinode, TRACE, and SOHO. As the region progressed from the Sun's center to the solar limb, a comprehensive analysis of the extreme ultraviolet, X-ray, and magnetic field data reveals clearly observable changes in its global nature. Temperature analyses undertaken using STEREO Extreme Ultraviolet Imager double filter ratios and X-ray imaging telescope single and combined filter ratios demonstrate an overall cooling of the region from between 1.6-3.0 MK to 1.0-2.0 MK over the five days. Similarly, Hinode Extreme Ultraviolet Imaging Spectrograph density measurements show a corresponding increase in density of 27%. Moss, cool (1 MK) outer loop areas, and hotter core loop regions were examined and compared with potential magnetic field extrapolations from SOHO Michelson Doppler Imager magnetogram data. In particular, it was found that the potential field model was able to predict the structure of the hotter X-ray loops and that the larger cool loops seen in 171 A images appeared to follow the separatrix surfaces. The reasons behind the high-density moss regions observed on only one side of the active region are examined further.

  12. ROVER: A prototype active vision system

    NASA Astrophysics Data System (ADS)

    Coombs, David J.; Marsh, Brian D.

    1987-08-01

    The Roving Eyes project is an experiment in active vision. We present the design and implementation of a prototype that tracks colored balls in images from an on-line charge coupled device (CCD) camera. Rover is designed to keep up with its rapidly changing environment by handling best and average case conditions and ignoring the worst case. This allows Rover's techniques to be less sophisticated and consequently faster. Each of Rover's major functional units is relatively isolated from the others, and an executive which knows all the functional units directs the computation by deciding which jobs would be most effective to run. This organization is realized with a priority queue of jobs and their arguments. Rover's structure not only allows it to adapt its strategy to the environment, but also makes the system extensible. A capability can be added to the system by adding a functional module with a well defined interface and by modifying the executive to make use of the new module. The current implementation is discussed in the appendices.

  13. Sharpening coarse-to-fine stereo vision by perceptual learning: asymmetric transfer across the spatial frequency spectrum.

    PubMed

    Li, Roger W; Tran, Truyet T; Craven, Ashley P; Leung, Tsz-Wing; Chat, Sandy W; Levi, Dennis M

    2016-01-01

    Neurons in the early visual cortex are finely tuned to different low-level visual features, forming a multi-channel system analysing the visual image formed on the retina in a parallel manner. However, little is known about the potential 'cross-talk' among these channels. Here, we systematically investigated whether stereoacuity, over a large range of target spatial frequencies, can be enhanced by perceptual learning. Using narrow-band visual stimuli, we found that practice with coarse (low spatial frequency) targets substantially improves performance, and that the improvement spreads from coarse to fine (high spatial frequency) three-dimensional perception, generalizing broadly across untrained spatial frequencies and orientations. Notably, we observed an asymmetric transfer of learning across the spatial frequency spectrum. The bandwidth of transfer was broader when training was at a high spatial frequency than at a low spatial frequency. Stereoacuity training is most beneficial when trained with fine targets. This broad transfer of stereoacuity learning contrasts with the highly specific learning reported for other basic visual functions. We also revealed strategies to boost learning outcomes 'beyond-the-plateau'. Our investigations contribute to understanding the functional properties of the network subserving stereovision. The ability to generalize may provide a key principle for restoring impaired binocular vision in clinical situations. PMID:26909178

  14. Sharpening coarse-to-fine stereo vision by perceptual learning: asymmetric transfer across the spatial frequency spectrum

    PubMed Central

    Tran, Truyet T.; Craven, Ashley P.; Leung, Tsz-Wing; Chat, Sandy W.; Levi, Dennis M.

    2016-01-01

    Neurons in the early visual cortex are finely tuned to different low-level visual features, forming a multi-channel system analysing the visual image formed on the retina in a parallel manner. However, little is known about the potential ‘cross-talk’ among these channels. Here, we systematically investigated whether stereoacuity, over a large range of target spatial frequencies, can be enhanced by perceptual learning. Using narrow-band visual stimuli, we found that practice with coarse (low spatial frequency) targets substantially improves performance, and that the improvement spreads from coarse to fine (high spatial frequency) three-dimensional perception, generalizing broadly across untrained spatial frequencies and orientations. Notably, we observed an asymmetric transfer of learning across the spatial frequency spectrum. The bandwidth of transfer was broader when training was at a high spatial frequency than at a low spatial frequency. Stereoacuity training is most beneficial when trained with fine targets. This broad transfer of stereoacuity learning contrasts with the highly specific learning reported for other basic visual functions. We also revealed strategies to boost learning outcomes ‘beyond-the-plateau’. Our investigations contribute to understanding the functional properties of the network subserving stereovision. The ability to generalize may provide a key principle for restoring impaired binocular vision in clinical situations. PMID:26909178

  15. Topography from shading and stereo

    NASA Technical Reports Server (NTRS)

    Horn, Berthold K. P.

    1994-01-01

    Methods exploiting photometric information in images that have been developed in machine vision can be applied to planetary imagery. Integrating shape from shading, binocular stereo, and photometric stereo yields a robust system for recovering detailed surface shape and surface reflectance information. Such a system is useful in producing quantitative information from the vast volume of imagery being received, as well as in helping visualize the underlying surface.

  16. Stereo imaging with spaceborne radars

    NASA Technical Reports Server (NTRS)

    Leberl, F.; Kobrick, M.

    1983-01-01

    Stereo viewing is a valuable tool in photointerpretation and is used for the quantitative reconstruction of the three-dimensional shape of a topographical surface. Stereo viewing refers to a visual perception of space created by presenting an overlapping image pair to an observer so that a three-dimensional model is formed in the brain. Some of the observer's function can be performed by machine correlation of the overlapping images - so-called automated stereo correlation. The direct perception of space with two eyes is often called natural binocular vision; techniques for generating three-dimensional models of the surface from two sets of monocular image measurements are the topic of stereology.

  17. Field-sequential stereo television

    NASA Technical Reports Server (NTRS)

    Perry, W. E.

    1974-01-01

    System includes viewing devices that provide low interference to normal vision. It provides a stereo display observable from a broader area. Left and right video cameras are focused on the object. Output signals from the cameras are time-multiplexed so that alternate fields are provided by each camera. The multiplexed signal, fed to a standard television monitor, displays left and right images of the object.

  18. Stereo images from space

    NASA Astrophysics Data System (ADS)

    Sabbatini, Massimo; Collon, Maximilien J.; Visentin, Gianfranco

    2008-02-01

    The Erasmus Recording Binocular (ERB1) was the first fully digital stereo camera used on the International Space Station. One year after its first utilisation, the results and feedback collected with various audiences have convinced us to continue exploiting the outreach potential of such media, with its unique capability to bring space down to earth, to share the feeling of weightlessness and confinement with the viewers on earth. The production of stereo is progressing quickly but it still poses problems for the distribution of the media. The Erasmus Centre of the European Space Agency has experienced how difficult it is to master the full production and distribution chain of a stereo system. Efforts are also under way to standardize the satellite broadcasting part of the distribution. A new stereo camera is being built, ERB2, to be launched to the International Space Station (ISS) in September 2008: it shall have 720p resolution, it shall be able to transmit its images to the ground in real-time, allowing the production of live programs, and it could possibly be used also outside the ISS, in support of Extra Vehicular Activities of the astronauts. These new features are quite challenging to achieve within the reduced power and mass budget available to space projects, and we hope to inspire more designers to come up with ingenious ideas to build cameras capable of operating in the harsh Low Earth Orbit environment: radiation, temperature, power consumption and thermal design are the challenges to be met. The intent of this paper is to share with the readers the experience collected so far in all aspects of the 3D video production chain and to increase awareness of the unique content that we are collecting: nice stereo images from space can be used by all actors in the stereo arena to gain consensus on this powerful medium. With respect to last year we shall present the progress made in the following areas: a) the satellite broadcasting live of stereo content to D

  19. STEREO Education and Public Outreach Efforts

    NASA Technical Reports Server (NTRS)

    Kucera, Therese

    2007-01-01

    STEREO has had a big year this year with its launch and the start of data collection. STEREO has mostly focused on informal educational venues, most notably with STEREO 3D images made available to museums through the NASA Museum Alliance. Other activities have involved making STEREO imagery available through the AMNH network and Viewspace, continued partnership with the Christa McAuliffe Planetarium, data sonification projects, preservice teacher training, and learning activity development.

  20. STEREO Observing AR903

    NASA Technical Reports Server (NTRS)

    2006-01-01

    A close up of loops in a magnetic active region. These loops, observed by STEREO's SECCHI/EUVI telescope, are at a million degrees K. This powerful active region, AR903, observed here on Dec. 4, 2006, produced a series of intense flares, particle storms, and coronal mass ejections over the next few days.

  1. The Sun in STEREO

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Parallax gives depth to life. Simultaneous viewing from slightly different vantage points makes binocular humans superior to monocular cyclopes, and fixes us in the third dimension of the Universe. We've been stunned by 3-d images of Venus and Mars (along with more familiar views of earth). Now astronomers plan to give us the best view of all, 3-d images of the dynamic Sun. That's one of the prime goals of NASA's Solar Terrestrial Relations Observatories, also known as STEREO. STEREO is a pair of spacecraft observatories, one placed in orbit in front of earth, and one to be placed in an earth-trailing orbit. Simultaneous observations of the Sun with the two STEREO spacecraft will provide extraordinary 3-d views of all types of solar activity, especially the dramatic events called coronal mass ejections which send high energy particles from the outer solar atmosphere hurtling towards earth. The image above is the first image of the Sun by the two STEREO spacecraft, an extreme ultraviolet shot of the Sun's million-degree corona, taken by the Extreme Ultraviolet Imager on the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) instrument package. STEREO's first 3-d solar images should be available in April if all goes well. Put on your red and blue glasses!

  2. Stereo depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Vonsydow, Marika

    1988-01-01

    In teleoperation, a typical application of stereo vision is to view a work space located short distances (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors are measured on the order of 2 cm. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing. The results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration which gave high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions, but cause greater depth distortions. Thus with larger intercamera distances, operators will make greater depth errors (because of the greater distortions), but will be more certain that they are not errors (because of the higher resolution).
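    The baseline/resolution trade-off described in this abstract can be illustrated with the standard first-order error model for a simplified parallel-camera rig (the paper itself analyzes the more involved converged geometry): depth follows Z = f·b/d, so a fixed disparity uncertainty produces a depth error growing with Z² and shrinking with baseline b. The focal length and disparity-error values below are invented for illustration.

    ```python
    # Sketch of stereo depth resolution vs. baseline (parallel-camera model,
    # not the converged geometry analyzed in the paper; values are hypothetical).

    def depth_from_disparity(f_px, baseline_m, disparity_px):
        """Depth (m) from disparity (px): Z = f * b / d."""
        return f_px * baseline_m / disparity_px

    def depth_error(f_px, baseline_m, z_m, disparity_err_px=1.0):
        """First-order depth uncertainty: dZ ~ Z^2 * dd / (f * b)."""
        return z_m ** 2 * disparity_err_px / (f_px * baseline_m)

    f = 1000.0   # focal length in pixels (assumed)
    z = 1.4      # camera-to-object distance from the paper's setup (m)
    for b in (0.05, 0.10, 0.20):   # candidate intercamera distances (m)
        print(f"baseline {b:.2f} m -> depth error {depth_error(f, b, z) * 100:.2f} cm")
    ```

    Doubling the baseline halves the depth error at a given distance, which is the resolution side of the trade-off; the distortion side (greater depth warping with wider baselines) requires the converged-camera analysis the paper carries out.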

  3. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    NASA Astrophysics Data System (ADS)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.

  4. Teacher Activism: Enacting a Vision for Social Justice

    ERIC Educational Resources Information Center

    Picower, Bree

    2012-01-01

    This qualitative study focused on educators who participated in grassroots social justice groups to explore the role teacher activism can play in the struggle for educational justice. Findings show teacher activists made three overarching commitments: to reconcile their vision for justice with the realities of injustice around them; to work within…

  5. Robot Vision Library

    NASA Technical Reports Server (NTRS)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  6. Topography from shading and stereo

    NASA Technical Reports Server (NTRS)

    Horn, Berthold P.; Caplinger, Michael

    1993-01-01

    Methods exploiting photometric information in images that have been developed in machine vision can be applied to planetary imagery. Present techniques, however, focus on one visual cue, such as shading or binocular stereo, and produce results that are either not very accurate in an absolute sense or provide information only at few points on the surface. We plan to integrate shape from shading, binocular stereo and photometric stereo to yield a robust system for recovering detailed surface shape and surface reflectance information. Such a system will be useful in producing quantitative information from the vast volume of imagery being received, as well as in helping visualize the underlying surface. The work will be carried out on a popular computing platform so that it will be easily accessible to other workers.

  7. Topography from shading and stereo

    NASA Technical Reports Server (NTRS)

    Horn, Berthold P.; Caplinger, Michael

    1992-01-01

    Methods exploiting photometric information in images that have been developed in machine vision can be applied to planetary imagery. Present techniques, however, focus on one visual cue, such as shading or binocular stereo, and produce results that are either not very accurate in an absolute sense or provide information only at few points on the surface. We plan to integrate shape from shading, binocular stereo and photometric stereo to yield a robust system for recovering detailed surface shape and surface reflectance information. Such a system will be useful in producing quantitative information from the vast volume of imagery being received, as well as in helping visualize the underlying surface. The work will be carried out on a popular computing platform so that it will be easily accessible to other workers.

  8. Using perturbations to identify the brain circuits underlying active vision

    PubMed Central

    Wurtz, Robert H.

    2015-01-01

    The visual and oculomotor systems in the brain have been studied extensively in the primate. Together, they can be regarded as a single brain system that underlies active vision—the normal vision that begins with visual processing in the retina and extends through the brain to the generation of eye movement by the brainstem. The system is probably one of the most thoroughly studied brain systems in the primate, and it offers an ideal opportunity to evaluate the advantages and disadvantages of the series of perturbation techniques that have been used to study it. The perturbations have been critical in moving from correlations between neuronal activity and behaviour closer to a causal relation between neuronal activity and behaviour. The same perturbation techniques have also been used to tease out neuronal circuits that are related to active vision that in turn are driving behaviour. The evolution of perturbation techniques includes ablation of both cortical and subcortical targets, punctate chemical lesions, reversible inactivations, electrical stimulation, and finally the expanding optogenetic techniques. The evolution of perturbation techniques has supported progressively stronger conclusions about what neuronal circuits in the brain underlie active vision and how the circuits themselves might be organized. PMID:26240420

  9. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  10. Vector Disparity Sensor with Vergence Control for Active Vision Systems

    PubMed Central

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P.; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used on robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after the image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two on-chip different alternatives for the vector disparity engines are discussed based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity up to 32 fps on VGA resolution images with very good accuracy as shown using benchmark sequences with known ground-truth. The performances in terms of frame-rate, resource utilization, and accuracy of the presented approaches are discussed. On the basis of these results, our study indicates that the gradient-based approach leads to the best trade-off choice for the integration with the active vision system. PMID:22438737
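    The point that disparity becomes a 2-D vector under vergence, vanishing at the fixation point, can be sketched with a toy pinhole model (not the paper's FPGA implementation; the baseline, fixation distance and unit focal length below are all assumptions):

    ```python
    import math

    def project(point, cam_x, pan_rad):
        """Pinhole projection (focal length 1) after panning the camera about the vertical axis."""
        x, y, z = point[0] - cam_x, point[1], point[2]
        xc = math.cos(pan_rad) * x - math.sin(pan_rad) * z
        zc = math.sin(pan_rad) * x + math.cos(pan_rad) * z
        return (xc / zc, y / zc)

    def vector_disparity(point, baseline, fixation_z):
        """2-D (horizontal, vertical) disparity for cameras verged on (0, 0, fixation_z)."""
        half_b = baseline / 2.0
        pan = math.atan2(half_b, fixation_z)      # vergence half-angle
        u_l, v_l = project(point, -half_b, pan)   # left camera pans inward
        u_r, v_r = project(point, half_b, -pan)   # right camera pans inward
        return (u_l - u_r, v_l - v_r)

    dx, dy = vector_disparity((0.0, 0.0, 1.0), baseline=0.1, fixation_z=1.0)
    print(dx, dy)   # fixation point: disparity (approximately) zero
    ```

    Points beyond the fixation distance come out with negative (uncrossed) horizontal disparity, and points off the midline acquire a vertical component as well, which is why the verged setup needs a full vector disparity engine rather than a 1-D scanline search.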

  11. Sequential digital elevation models of active lava flows from ground-based stereo time-lapse imagery

    NASA Astrophysics Data System (ADS)

    James, M. R.; Robson, S.

    2014-11-01

    We describe a framework for deriving sequences of digital elevation models (DEMs) for the analysis of active lava flows using oblique stereo-pair time-lapse imagery. A photo-based technique was favoured over laser-based alternatives due to low equipment cost, high portability and capability for network expansion, with images of advancing flows captured by digital SLR cameras over durations of up to several hours. However, under typical field scale scenarios, relative camera orientations cannot be rigidly maintained (e.g. through the use of a stereo bar), preventing the use of standard stereo time-lapse processing software. Thus, we trial semi-automated DEM-sequence workflows capable of handling the small camera motions, variable image quality and restricted photogrammetric control that result from the practicalities of data collection at remote and hazardous sites. The image processing workflows implemented either link separate close-range photogrammetry and traditional stereo-matching software, or are integrated in a single software package based on structure-from-motion (SfM). We apply these techniques in contrasting case studies from Kilauea volcano, Hawaii, and Mount Etna, Sicily, which differ in scale, duration and image texture. On Kilauea, the advance direction of thin fluid lava lobes was difficult to forecast, preventing good distribution of control. Consequently, volume changes calculated through the different workflows differed by ∼10% for DEMs (over ∼30 m²) that were captured once a minute for 37 min. On Mt. Etna, more predictable advance (∼3 m h⁻¹ for ∼3 h) of a thicker, more viscous lava allowed robust control to be deployed and volumetric change results were generally within 5% (over ∼500 m²). Overall, the integrated SfM software was more straightforward to use and, under favourable conditions, produced results comparable to those from the close-range photogrammetry pipeline. However, under conditions with limited options for photogrammetric
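    The volume-change comparison in this abstract reduces to a simple computation once each stereo pair has been turned into a gridded DEM: sum the per-cell elevation differences and multiply by the cell area. A minimal sketch, with invented elevations and cell size (the paper's DEMs are of course far larger and derived photogrammetrically):

    ```python
    # Hypothetical illustration of DEM differencing for lava-flow volume change.
    # Each DEM is a grid of elevations (m); grids must cover the same area.

    def dem_volume_change(dem_before, dem_after, cell_size_m):
        """Net volume gained between two aligned DEM snapshots (m^3)."""
        area = cell_size_m ** 2
        return sum(
            (b - a) * area
            for row_a, row_b in zip(dem_before, dem_after)
            for a, b in zip(row_a, row_b)
        )

    dem_t0 = [[0.0, 0.0], [0.0, 0.0]]
    dem_t1 = [[0.5, 0.5], [1.0, 0.0]]   # lava lobe thickened by up to 1 m
    print(dem_volume_change(dem_t0, dem_t1, cell_size_m=0.5))  # 0.5 (m^3)
    ```

    The ∼10% vs ∼5% workflow discrepancies quoted above would appear here as differing `dem_after` grids produced from the same images by the different processing pipelines.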

  12. Evaluation of active vision by a car's antifog headlamps

    NASA Astrophysics Data System (ADS)

    Barun, Vladimir V.; Levitin, Konstantin M.

    1996-10-01

    A special case of civilian active vision is investigated here, namely a vision system based on a car's anti-fog headlamps. A method to estimate the light-engineering criteria for headlamp performance and to simulate the operation of the system through a turbid medium, such as fog, is developed on the basis of the analytical procedures of radiative transfer theory. Features of this method include the spaced light source and receiver of the driver's active vision system, the complicated azimuth-nonsymmetrical emissive pattern of the headlamps, and the fine angular dependence of the fog phase function near the backscattering direction. The final formulas are derived in analytical form, providing additional convenience and simplicity for the computations. The image contrast of a road object with arbitrary orientation, dimensions, and shape, and its limiting visibility range, are studied as functions of the meteorological visibility range in fog as well as of various emissive-pattern, mounting, and adjustment parameters of the headlamps. Optimization of both the light-engineering and geometrical characteristics of the headlamps is shown to be possible, offering the opportunity to enhance the visibility range and, hence, traffic safety.

  13. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between binocular viewing and the dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular viewing, and obtain the positional relation of prism, camera, and object that gives the best stereo display. Finally, using NVIDIA active-shutter stereo glasses, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. The stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.

  14. #7 Comparing STEREO, Simulated Helioseismic Images

    NASA Video Gallery

    Farside direct observations from STEREO (left) and simultaneous helioseismic reconstructions (right). Medium to large size active regions clearly appear on the helioseismic images, however the smal...

  15. Kiwi Forego Vision in the Guidance of Their Nocturnal Activities

    PubMed Central

    Martin, Graham R.; Wilson, Kerry-Jayne; Martin Wild, J.; Parsons, Stuart; Fabiana Kubke, M.; Corfield, Jeremy

    2007-01-01

    Background In vision, there is a trade-off between sensitivity and resolution, and any eye which maximises information gain at low light levels needs to be large. This imposes exacting constraints upon vision in nocturnal flying birds. Eyes are essentially heavy, fluid-filled chambers, and in flying birds their increased size is countered by selection for both reduced body mass and the distribution of mass towards the body core. Freed from these mass constraints, it would be predicted that in flightless birds nocturnality should favour the evolution of large eyes and reliance upon visual cues for the guidance of activity. Methodology/Principal Findings We show that in Kiwi (Apterygidae), flightlessness and nocturnality have, in fact, resulted in the opposite outcome. Kiwi show minimal reliance upon vision indicated by eye structure, visual field topography, and brain structures, and increased reliance upon tactile and olfactory information. Conclusions/Significance This lack of reliance upon vision and increased reliance upon tactile and olfactory information in Kiwi is markedly similar to the situation in nocturnal mammals that exploit the forest floor. That Kiwi and mammals evolved to exploit these habitats quite independently provides evidence for convergent evolution in their sensory capacities that are tuned to a common set of perceptual challenges found in forest floor habitats at night and which cannot be met by the vertebrate visual system. We propose that the Kiwi visual system has undergone adaptive regressive evolution driven by the trade-off between the relatively low rate of gain of visual information that is possible at low light levels, and the metabolic costs of extracting that information. PMID:17332846

  16. Passive stereo range imaging for semi-autonomous land navigation

    NASA Technical Reports Server (NTRS)

    Matthies, Larry

    1992-01-01

    The paper examines the use of stereo vision (SV) for obstacle detection in semiautonomous land navigation. Feature-based and field-based paradigms for SV are reviewed. The paper presents stochastic models and simple, efficient stereo matching algorithms for the field-based approach and describes a near-real-time vision system using these algorithms. Experimental results illustrate aspects of the stochastic models and lead to the first semiautonomous traversals of natural terrain to use SV for obstacle detection.

  17. Active Vision in Marmosets: A Model System for Visual Neuroscience

    PubMed Central

    Reynolds, John H.; Miller, Cory T.

    2014-01-01

    The common marmoset (Callithrix jacchus), a small-bodied New World primate, offers several advantages to complement vision research in larger primates. Studies in the anesthetized marmoset have detailed the anatomy and physiology of their visual system (Rosa et al., 2009) while studies of auditory and vocal processing have established their utility for awake and behaving neurophysiological investigations (Lu et al., 2001a,b; Eliades and Wang, 2008a,b; Osmanski and Wang, 2011; Remington et al., 2012). However, a critical unknown is whether marmosets can perform visual tasks under head restraint. This has been essential for studies in macaques, enabling both accurate eye tracking and head stabilization for neurophysiology. In one set of experiments we compared the free viewing behavior of head-fixed marmosets to that of macaques, and found that their saccadic behavior is comparable across a number of saccade metrics and that saccades target similar regions of interest including faces. In a second set of experiments we applied behavioral conditioning techniques to determine whether the marmoset could control fixation for liquid reward. Two marmosets could fixate a central point and ignore peripheral flashing stimuli, as needed for receptive field mapping. Both marmosets also performed an orientation discrimination task, exhibiting a saturating psychometric function with reliable performance and shorter reaction times for easier discriminations. These data suggest that the marmoset is a viable model for studies of active vision and its underlying neural mechanisms. PMID:24453311

  18. Vision Drives Correlated Activity without Patterned Spontaneous Activity in Developing Xenopus Retina

    PubMed Central

    Demas, James A.; Payne, Hannah; Cline, Hollis T.

    2011-01-01

Developing amphibians need vision to avoid predators and locate food before visual system circuits fully mature. Xenopus tadpoles can respond to visual stimuli as soon as retinal ganglion cells (RGCs) innervate the brain; however, in mammals, chicks and turtles, RGCs reach their central targets many days, or even weeks, before their retinas are capable of vision. In the absence of vision, activity-dependent refinement in these amniote species is mediated by waves of spontaneous activity that periodically spread across the retina, correlating the firing of action potentials in neighboring RGCs. Theory suggests that retinorecipient neurons in the brain use patterned RGC activity to sharpen the retinotopy first established by genetic cues. We find that in both wild type and albino Xenopus tadpoles, RGCs are spontaneously active at all stages of tadpole development studied, but their population activity never coalesces into waves. Even at the earliest stages recorded, visual stimulation dominates over spontaneous activity and can generate patterns of RGC activity similar to the locally correlated spontaneous activity observed in amniotes. In addition, we show that blocking AMPA and NMDA type glutamate receptors significantly decreases spontaneous activity in young Xenopus retina, but that blocking GABAA receptors does not. Our findings indicate that vision drives correlated activity required for topographic map formation. They further suggest that developing retinal circuits in the two major subdivisions of tetrapods, amphibians and amniotes, evolved different strategies to supply appropriately patterned RGC activity to drive visual circuit refinement. PMID:21312343

  19. Vision of the active limb impairs bimanual motor tracking in young and older adults

    PubMed Central

    Boisgontier, Matthieu P.; Van Halewyck, Florian; Corporaal, Sharissa H. A.; Willacker, Lina; Van Den Bergh, Veerle; Beets, Iseult A. M.; Levin, Oron; Swinnen, Stephan P.

    2014-01-01

    Despite the intensive investigation of bimanual coordination, it remains unclear how directing vision toward either limb influences performance, and whether this influence is affected by age. To examine these questions, we assessed the performance of young and older adults on a bimanual tracking task in which they matched motor-driven movements of their right hand (passive limb) with their left hand (active limb) according to in-phase and anti-phase patterns. Performance in six visual conditions involving central vision, and/or peripheral vision of the active and/or passive limb was compared to performance in a no vision condition. Results indicated that directing central vision to the active limb consistently impaired performance, with higher impairment in older than young adults. Conversely, directing central vision to the passive limb improved performance in young adults, but less consistently in older adults. In conditions involving central vision of one limb and peripheral vision of the other limb, similar effects were found to those for conditions involving central vision of one limb only. Peripheral vision alone resulted in similar or impaired performance compared to the no vision (NV) condition. These results indicate that the locus of visual attention is critical for bimanual motor control in young and older adults, with older adults being either more impaired or less able to benefit from a given visual condition. PMID:25452727

  20. A vision architecture for the extravehicular activity retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1992-01-01

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools, equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This report documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios will be discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the

  1. Viewing Stereo Drawings.

    ERIC Educational Resources Information Center

    Srinivasan, A. R.; Olson, Wilma K.

    1989-01-01

The illustration of molecular structures by means of stereo pairs (to project three-dimensional views of two-dimensional representations) has been common practice in periodicals, slide presentations, and books. Describes the stereo triptych and multiply rotated stereo views with diagrams and discusses their use. (YP)

  2. Adaptive machine vision. Annual report

    SciTech Connect

    Stoner, W.W.; Brill, M.H.; Bergeron, D.W.

    1988-03-08

The mission of the Strategic Defense Initiative is to develop defenses against threatening ballistic missiles. There are four distinct phases to the SDI defense: boost, post-boost, midcourse and terminal. In each of these phases, one or more machine-vision functions are required, such as pattern recognition, stereo image fusion, clutter rejection and discrimination. The SDI missions of coarse track, stereo track and discrimination are examined here from the point of view of a machine-vision system.

  3. Overview of NETL In-House Vision 21 Activities

    SciTech Connect

    Wildman, David J.

    2001-11-06

The Office of Science and Technology at the National Energy Technology Laboratory conducts research in support of the Department of Energy's Fossil Energy Program. The research is funded through a variety of programs, with each program focusing on a particular aspect of fossil energy. Since the Vision 21 Concept is based on the Advanced Power System Programs (Integrated Gasification Combined Cycle, Pressurized Fluid Bed, HIPPS, Advanced Turbine Systems, and Fuel Cells), it is not surprising that much of the research supports the Vision 21 Concept. The research is classified and presented according to ''enabling technologies'' and ''supporting technologies'' as defined by the Vision 21 Program. Enabling technologies include fuel flexible gasification, fuel flexible combustion, hydrogen separation from fuel gas, advanced combustion systems, circulating fluid bed technology, and fuel cells. Supporting technologies include development of advanced materials, computer simulations, computational fluid dynamics modeling, and advanced environmental control. An overview of Vision 21 related research is described, emphasizing recent accomplishments and capabilities.

  4. STEREO Mission Design

    NASA Technical Reports Server (NTRS)

    Dunham, David W.; Guzman, Jose J.; Sharer, Peter J.; Friessen, Henry D.

    2007-01-01

STEREO (Solar TErrestrial RElations Observatory) is the third mission in the Solar Terrestrial Probes program (STP) of the National Aeronautics and Space Administration (NASA). STEREO is the first mission to utilize phasing loops and multiple lunar flybys to alter the trajectories of more than one satellite. This paper describes the launch computation methodology, the launch constraints, and the resulting nine launch windows that were prepared for STEREO. More details are provided for the window in late October 2006 that was actually used.

  5. NASA's STEREO Mission

    NASA Technical Reports Server (NTRS)

    Kucera, T. A.

    2011-01-01

NASA's STEREO (Solar TErrestrial RElations Observatory) mission consists of two nearly identical spacecraft hosting an array of in situ and imaging instruments for studying the sun and heliosphere. Launched in 2006 and in orbit about the Sun near 1 AU, the spacecraft are now swinging towards the farside of the sun. I will provide the latest information regarding STEREO space weather data and also recent STEREO research.

  6. Attenuating Stereo Pixel-Locking via Affine Window Adaptation

    NASA Technical Reports Server (NTRS)

    Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.

    2006-01-01

For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking,' which produces artificially-peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it has the ability to correct much larger initial disparity errors than previous approaches and is more general as it applies not only to the ground plane.
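For context, the baseline parabola-fit estimator that the paper improves upon can be sketched in a few lines (a generic sketch in Python, not the authors' code; the function name is ours):

```python
def subpixel_parabola(c_prev, c_min, c_next):
    """Three-point parabola fit around the winning integer disparity d.

    c_min is the aggregated matching cost at d; c_prev and c_next are the
    costs at d - 1 and d + 1. Returns a sub-pixel offset in (-0.5, 0.5)
    to add to d. It is this estimator, applied to costs aggregated over
    rectangular windows, that produces the pixel-locking bias.
    """
    denom = c_prev - 2.0 * c_min + c_next
    if denom <= 0.0:  # degenerate: no local minimum at d
        return 0.0
    return 0.5 * (c_prev - c_next) / denom
```

For a cost curve sampled exactly from a parabola such as (d - 0.25)^2, the fit recovers the 0.25-pixel offset; on real window-aggregated costs the estimate is biased toward integer disparities, which is the artificial histogram peaking described above.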

  7. Active vision and sensor fusion for inspection of metallic surfaces

    NASA Astrophysics Data System (ADS)

    Puente Leon, Fernando; Beyerer, Juergen

    1997-09-01

This paper deals with strategies for reliably obtaining the edges and the surface texture of metallic objects. Since illumination is a critical aspect regarding robustness and image quality, it is considered here as an active component of the image acquisition system. The performance of the methods presented is demonstrated -- among other examples -- with images of needles for blood sugar tests. Such objects show an optimized form consisting of several planar ground surfaces delimited by sharp edges. To allow a reliable assessment of the quality of each surface, and a measurement of their edges, methods for fusing data obtained under different illumination constellations were developed. The fusion strategy is based on the minimization of suitable energy functions. First, an illumination-based segmentation of the object is performed. To obtain the boundaries of each surface, directional light-field illumination is used. By formulating suitable criteria, nearly binary images are selected by variation of the illumination direction. The surface edges are then obtained by fusing the contours of the areas obtained before. Next, an optimally illuminated image is acquired for each surface of the object by varying the illumination direction. For this purpose, a criterion describing the quality of the surface texture has to be maximized. Finally, the images of all textured surfaces of the object are fused into an improved result, in which the whole object is contained with high contrast. Although the methods presented were designed for the inspection of needles, they also perform robustly in other computer vision tasks where metallic objects have to be inspected.

  8. Simple method for calibrating omnidirectional stereo with multiple cameras

    NASA Astrophysics Data System (ADS)

    Ha, Jong-Eun; Choi, I.-Sak

    2011-04-01

Cameras can give useful information for the autonomous navigation of a mobile robot. Typically, one or two cameras are used for this task. Recently, omnidirectional stereo vision systems that can cover the whole surrounding environment of a mobile robot have been adopted. They usually rely on a mirror that cannot offer uniform spatial resolution. In this paper, we deal with an omnidirectional stereo system which consists of eight cameras, where each pair of vertically arranged cameras constitutes one stereo system. Camera calibration is the first necessary step to obtain 3D information. Calibration using a planar pattern requires many images acquired under different poses, so it is a tedious step to calibrate all eight cameras. In this paper, we present a simple calibration procedure using a cubic-type calibration structure that surrounds the omnidirectional stereo system. We can calibrate all the cameras on an omnidirectional stereo system in just one shot.

  9. Search for correlations between HiRes stereo events and active galactic nuclei

    NASA Astrophysics Data System (ADS)

    High Resolution Fly'S Eye Collaboration; Abbasi, R. U.; Abu-Zayyad, T.; Allen, M.; Amman, J. F.; Archbold, G.; Belov, K.; Belz, J. W.; Benzvi, S. Y.; Bergman, D. R.; Blake, S. A.; Boyer, J. H.; Brusova, O. A.; Burt, G. W.; Cannon, C.; Cao, Z.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Gray, R. C.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G.; Hüntemeyer, P.; Ivanov, D.; Jones, B. F.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. N.; Moore, S. A.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Rodriguez, D.; Sasaki, N.; Schnetzer, S. R.; Scott, L. M.; Seman, M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Stratton, S. R.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Wiencke, L. R.; Zech, A.; Zhang, X.; High Resolution Fly's Eye Collaboration

    2008-11-01

    We have searched for correlations between the pointing directions of ultrahigh energy cosmic rays observed by the High Resolution Fly's Eye experiment and active galactic nuclei (AGN) visible from its northern hemisphere location. No correlations, other than random correlations, have been found. We report our results using search parameters prescribed by the Pierre Auger collaboration. Using these parameters, the Auger collaboration concludes that a positive correlation exists for sources visible to their southern hemisphere location. We also describe results using two methods for determining the chance probability of correlations: one in which a hypothesis is formed from scanning one half of the data and tested on the second half, and another which involves a scan over the entire data set. The most significant correlation found occurred with a chance probability of 24%.

  10. Acceleration of Stereo Correlation in Verilog

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos

    2006-01-01

To speed up vision processing in low-speed, low-power devices, embedding FPGA hardware is becoming an effective way to add processing capability. FPGAs offer the ability to flexibly add parallel and/or deeply pipelined computation to embedded processors without adding significantly to the mass and power requirements of an embedded system. This paper will discuss the JPL stereo vision system, and describe how a portion of that system was accelerated by using custom FPGA hardware to process the computationally intensive portions of JPL stereo. The architecture described takes full advantage of the ability of an FPGA to use many small computation elements in parallel. This resulted in a 16-times speedup in real hardware over using a simple linear processor to compute image correlation and disparity.
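The correlation-and-disparity kernel that benefits from FPGA parallelism is, at its core, window-based sum-of-absolute-differences (SAD) block matching. A reference software version (our own NumPy sketch, not the JPL implementation; it assumes rectified floating-point images) makes clear why the per-pixel, per-disparity work parallelizes so well:

```python
import numpy as np

def box_sum(a, radius):
    """Sum of a over (2*radius+1)^2 windows, edge-padded, via integral images."""
    k = 2 * radius + 1
    p = np.pad(a, radius, mode="edge")
    s = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]

def sad_disparity(left, right, max_disp, win=3):
    """Brute-force SAD block matching on a rectified float image pair.

    Every (pixel, disparity) cost is independent of the others, which is
    exactly the structure an FPGA exploits with many small parallel
    computation elements.
    """
    h, w = left.shape
    best = np.zeros((h, w), dtype=np.int32)
    best_cost = np.full((h, w), np.inf)
    for d in range(max_disp + 1):
        diff = np.full((h, w), 1e6)            # columns with no match: huge cost
        diff[:, d:] = np.abs(left[:, d:] - right[:, : w - d])
        cost = box_sum(diff, win // 2)         # aggregate over the window
        better = cost < best_cost
        best[better] = d                       # winner-take-all update
        best_cost[better] = cost[better]
    return best
```

On a CPU this loop is serial; the FPGA version described in the entry evaluates many of these window sums concurrently.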

  11. Outflow activity near Hadriaca Patera, Mars: Fluid-tectonic interaction investigated with High Resolution Stereo Camera stereo data and finite element modeling

    NASA Astrophysics Data System (ADS)

    Musiol, S.; Cailleau, B.; Platz, T.; Kneissl, T.; Dumke, A.; Neukum, G.

    2011-08-01

    We investigate the formation of the outflow channels Dao and Niger Valles near the eastern rim of the Hellas impact basin, Mars. Methods used include image and topography analysis, surface age determination, and finite element modeling. Observations show that deep depressions, source regions for Dao and Niger Valles, are located in an area of shallow subsidence to the south and east of the volcano Hadriaca Patera. Cratering model ages allow for fluvial processes triggered by volcanic loading. Based on the observations, we develop a numerical model of volcanic loading on top of a poroelastic plate leading to flexure and fracturing of the lithosphere. Modeling results show that fracturing may occur up to a depth of about 6 km within an area of shallow subsidence, i.e., the moat surrounding the volcano. Depending on initial aquifer pressurization, groundwater could have reached the surface. Model discharges and channel morphometry suggest that the Dao Vallis channel never reached bankfull flow and that the wetted channel perimeter may have formed during multiple outflow events. The following scenario is proposed: (1) emplacement of a volcanic load on top of a confined, overpressurized aquifer in the early Hesperian, (2) fracturing around the load, possibly reactivated during various stages of volcanic activity, (3) channeling of groundwater to the surface along fractures and outflow channel formation during several events in the Hesperian, and (4) collapse, mass wasting and modification of depressions in the Amazonian.

  12. Joint motion model for local stereo video-matching method

    NASA Astrophysics Data System (ADS)

    Zhang, Jinglin; Bai, Cong; Nezan, Jean-Francois; Cousin, Jean-Gabriel

    2015-12-01

As one branch of stereo matching, video stereo matching is becoming more and more significant in computer vision applications. Conventional stereo matching methods for static images cause flickering frames and poor matching results. We propose a joint motion-based square step (JMSS) method for stereo video matching. The motion vector is introduced as one component in the support region building for the raw cost aggregation. Then we aggregate the raw cost along two directions in the support region. Finally, the winner-take-all strategy determines the best disparity under our hypothesis. Experimental results show that the JMSS method not only outperforms other state-of-the-art stereo matching methods on test sequences with abundant movement, but also performs well in real-world scenes with fixed and moving stereo cameras, respectively, in particular under some extreme conditions of real stereo vision. Additionally, the proposed JMSS method can be implemented in real time, which is superior to other state-of-the-art methods. Time efficiency was also a very important consideration in our algorithm design.

  13. Improved stereo matching applied to digitization of greenhouse plants

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Xu, Lihong; Li, Dawei; Gu, Xiaomeng

    2015-03-01

The digitization of greenhouse plants is an important aspect of digital agriculture. Its ultimate aim is to reconstruct a visible and interoperable virtual plant model on the computer by using state-of-the-art image processing and computer graphics technologies. The most prominent difficulties of the digitization of greenhouse plants include how to acquire the three-dimensional shape data of greenhouse plants and how to carry out its realistic stereo reconstruction. Concerning these issues, an effective method for the digitization of greenhouse plants is proposed by using a binocular stereo vision system in this paper. Stereo vision is a technique aiming at inferring depth information from two or more cameras; it consists of four parts: calibration of the cameras, stereo rectification, search of stereo correspondence and triangulation. Through the final triangulation procedure, the 3D point cloud of the plant can be achieved. The proposed stereo vision system can facilitate further segmentation of plant organs such as stems and leaves; moreover, it can provide reliable digital samples for the visualization of greenhouse tomato plants.
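For a calibrated, rectified binocular rig like the one described, the triangulation step reduces to the classic inverse relation between disparity and depth. A minimal sketch (our notation, not the paper's code):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a matched point in a rectified stereo pair: Z = f * B / d.

    focal_px is the focal length in pixels (from camera calibration),
    baseline_m the distance between the two camera centres, and
    disparity_px the horizontal offset found by the stereo
    correspondence search.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

With a 700 px focal length and a 10 cm baseline, a 35 px disparity places a leaf 2 m from the rig; because Z varies as 1/d, depth resolution degrades rapidly with distance, which matters when digitizing whole greenhouse plants.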

  14. Solar Eclipse, STEREO Style

    NASA Technical Reports Server (NTRS)

    2007-01-01

    There was a transit of the Moon across the face of the Sun - but it could not be seen from Earth. This sight was visible only from the STEREO-B spacecraft in its orbit about the sun, trailing behind the Earth. NASA's STEREO mission consists of two spacecraft launched in October, 2006 to study solar storms. The transit starts at 1:56 am EST and continued for 12 hours until 1:57 pm EST. STEREO-B is currently about 1 million miles from the Earth, 4.4 times farther away from the Moon than we are on Earth. As the result, the Moon will appear 4.4 times smaller than what we are used to. This is still, however, much larger than, say, the planet Venus appeared when it transited the Sun as seen from Earth in 2004. This alignment of STEREO-B and the Moon is not just due to luck. It was arranged with a small tweak to STEREO-B's orbit last December. The transit is quite useful to STEREO scientists for measuring the focus and the amount of scattered light in the STEREO imagers and for determining the pointing of the STEREO coronagraphs. The Sun as it appears in these the images and each frame of the movie is a composite of nearly simultaneous images in four different wavelengths of extreme ultraviolet light that were separated into color channels and then recombined with some level of transparency for each.

  15. A computational model for dynamic vision

    NASA Technical Reports Server (NTRS)

    Moezzi, Saied; Weymouth, Terry E.

    1990-01-01

This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore, the paradigm is ideal for an active vision system where camera vergence changes dynamically. Its basis is the retinotopically indexed object-centered encoding of the early visual information. Specifically, the relative distances of objects to a set of referents is encoded in image registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.

  16. Data Fusion of LIDAR Into a Region Growing Stereo Algorithm

    NASA Astrophysics Data System (ADS)

    Veitch-Michaelis, J.; Muller, J.-P.; Storey, J.; Walton, D.; Foster, M.

    2015-05-01

Stereo vision and LIDAR continue to dominate standoff 3D measurement techniques in photogrammetry, although the two techniques are normally used in competition. Stereo matching algorithms generate dense 3D data, but perform poorly on low-texture image features. LIDAR measurements are accurate, but imaging requires scanning and produces sparse point clouds. Clearly the two techniques are complementary, but recent attempts to improve stereo matching performance on low-texture surfaces using data fusion have focused on the use of time-of-flight cameras, with comparatively little work involving LIDAR. A low-level data fusion method is shown, involving a scanning LIDAR system and a stereo camera pair. By directly imaging the LIDAR laser spot during a scan, unique stereo correspondences are obtained. These correspondences are used to seed a region-growing stereo matcher until the whole image is matched. The iterative nature of the acquisition process minimises the number of LIDAR points needed. This method also enables simple calibration of stereo cameras without the need for targets, and trivial coregistration between the stereo and LIDAR point clouds. Examples of this data fusion technique are provided for a variety of scenes.
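The seeded growth step can be illustrated with a toy matcher (our deliberate simplification, not the authors' algorithm): trusted (x, y, d) correspondences, such as imaged LIDAR spots, go onto a queue, and each unmatched pixel inherits a disparity within ±1 of an already-matched neighbour when the intensity agreement is good enough:

```python
from collections import deque
import numpy as np

def grow_from_seeds(left, right, seeds, tol=0.5):
    """Toy region-growing stereo matcher seeded with known correspondences.

    seeds is a list of (x, y, d) triples assumed correct (e.g. obtained by
    imaging the LIDAR laser spot). Disparities propagate to 4-neighbours,
    restricted to d0 - 1 .. d0 + 1, keeping the candidate with the smallest
    absolute intensity difference below tol. Unmatched pixels stay at -1.
    """
    h, w = left.shape
    disp = np.full((h, w), -1, dtype=np.int32)
    queue = deque()
    for x, y, d in seeds:
        disp[y, x] = d
        queue.append((x, y))
    while queue:
        x, y = queue.popleft()
        d0 = disp[y, x]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and disp[ny, nx] < 0:
                best_d, best_err = -1, tol
                for d in (d0 - 1, d0, d0 + 1):
                    if 0 <= nx - d < w:
                        err = abs(float(left[ny, nx]) - float(right[ny, nx - d]))
                        if err < best_err:
                            best_d, best_err = d, err
                if best_d >= 0:
                    disp[ny, nx] = best_d
                    queue.append((nx, ny))
    return disp
```

Because growth only ever explores a narrow disparity band around each seed, a handful of LIDAR-derived seeds can be enough to propagate matches across the whole image, which is the property the entry exploits to minimise the number of LIDAR points.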

  17. JAVA Stereo Display Toolkit

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2008-01-01

This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware using the same API (application-program interface), and has the ability to simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that simply accomplishes the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, 3D cursor, or overlays, all of which can be built using this toolkit.
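The colour-anaglyph trick described, keeping the red band of the left image and the green/blue bands of the right, is a one-line channel swap over RGB arrays (a NumPy sketch of the idea, not the toolkit's Java code):

```python
import numpy as np

def color_anaglyph(left_rgb, right_rgb):
    """Simulated colour stereo for red/blue glasses.

    Takes two H x W x 3 RGB arrays and returns an anaglyph whose red
    channel comes from the left eye's image and whose green and blue
    channels come from the right eye's image. Inputs are not modified.
    """
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # red from left; green/blue kept from right
    return out
```

Viewed through red/blue glasses, each eye then sees (approximately) its own image, which is why the simulation preserves some colour information unlike a plain grayscale anaglyph.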

  18. Intelligent robots and computer vision XII: Active vision and 3D methods; Proceedings of the Meeting, Boston, MA, Sept. 8, 9, 1993

    SciTech Connect

    Casasent, D.P.

    1993-01-01

Topics addressed include active vision for intelligent robots, 3D vision methods, tracking in robotics and vision, visual servoing and egomotion in robotics, egomotion and time-sequential processing, and control and planning in robotics and vision. Particular attention is given to invariants in visual motion, generic target tracking using color, recognizing 3D articulated-line-drawing objects, range data acquisition from an encoded structured light pattern, and 3D edge orientation detection. Also discussed are acquisition of randomly moving objects by visual guidance, fundamental principles of robot vision, high-performance visual servoing for robot end-point control, a long-sequence analysis of human motion using eigenvector decomposition, and sequential computer algorithms for printed circuit board inspection.

  19. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    PubMed Central

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-01

    With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activities, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition. PMID:23353144

  20. STEREO Sun360 Teaser

    NASA Video Gallery

    For the past 4 years, the two STEREO spacecraft have been moving away from Earth and gaining a more complete picture of the sun. On Feb. 6, 2011, NASA will reveal the first ever images of the entir...

  1. Stereo Measurements from Satellites

    NASA Technical Reports Server (NTRS)

    Adler, R.

    1982-01-01

    The papers in this presentation include: 1) 'Stereographic Observations from Geosynchronous Satellites: An Important New Tool for the Atmospheric Sciences'; 2) 'Thunderstorm Cloud Top Ascent Rates Determined from Stereoscopic Satellite Observations'; 3) 'Artificial Stereo Presentation of Meteorological Data Fields'.

  2. Hybrid-Based Dense Stereo Matching

    NASA Astrophysics Data System (ADS)

    Chuang, T. Y.; Ting, H. W.; Jaw, J. J.

    2016-06-01

Stereo matching generating accurate and dense disparity maps is an indispensable technique for 3D exploitation of imagery in the fields of computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes still lead to problematic issues and await better treatment. In this paper, a hybrid-based method based on semi-global matching (SGM) is presented to tackle the challenges of dense stereo matching. To ease the sensitivity of SGM cost aggregation to penalty parameters, a formal way to provide proper penalty estimates is proposed. To this end, the study manipulates a shape-adaptive cross-based matching with an edge constraint to generate an initial disparity map for penalty estimation. Image edges, indicating the potential locations of occlusions as well as disparity discontinuities, are identified by the edge drawing algorithm to ensure that the local support regions do not cover significant disparity changes. Besides, an additional penalty parameter Pe is imposed on the energy function of SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting both values derived from the SGM cost aggregation and the U-SURF matching, providing more reliable estimates at disparity discontinuity areas. Evaluations on Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potency of the hybrid-based dense stereo matching method.
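To make the role of the penalties concrete, here is the standard SGM aggregation recurrence along a single left-to-right path, with the usual P1 (one-pixel disparity change) and P2 (larger jump) terms; an extra edge penalty such as the paper's Pe would enter the same energy in the same way at edge pixels (a generic sketch of textbook SGM, not the authors' implementation):

```python
import numpy as np

def sgm_aggregate_path(cost, P1, P2):
    """SGM cost aggregation along one scanline.

    cost has shape (width, ndisp). For each pixel x and disparity d:
      L[x, d] = C[x, d] + min(L[x-1, d],
                              L[x-1, d-1] + P1, L[x-1, d+1] + P1,
                              min_k L[x-1, k] + P2) - min_k L[x-1, k]
    P1 penalises one-pixel disparity changes, P2 larger discontinuities;
    subtracting min_k keeps the values bounded along the path.
    """
    w, nd = cost.shape
    L = np.empty((w, nd))
    L[0] = cost[0]
    for x in range(1, w):
        prev = L[x - 1]
        m = prev.min()
        up = np.full(nd, np.inf); up[1:] = prev[:-1]   # d-1 neighbour
        dn = np.full(nd, np.inf); dn[:-1] = prev[1:]   # d+1 neighbour
        cand = np.minimum.reduce([prev, up + P1, dn + P1,
                                  np.full(nd, m + P2)])
        L[x] = cost[x] + cand - m
    return L
```

The full SGM aggregates such L arrays over several path directions and takes the winner-take-all disparity of their sum; the quality of the result hinges on the penalty values, which is the sensitivity the entry's penalty-estimation scheme addresses.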

  3. Stereo: The challenges

    NASA Astrophysics Data System (ADS)

    Mueller, J. T.; Maldonado, H.; Driesman, A. S.

    2003-08-01

    The goal of the Solar-Terrestrial Relations Observatory (STEREO) mission is to advance understanding of the three-dimensional structure of the Sun's corona, and especially of the origin of coronal mass ejections (CMEs), their evolution in the interplanetary medium, and the dynamic coupling between CMEs and the Earth environment. CMEs, the most energetic eruptions on the Sun, are the primary cause of major geomagnetic storms and are believed to be responsible for the largest solar energetic particle events. They may also be a critical element in the operation of the solar dynamo, because they appear to remove dynamo-generated magnetic flux from the Sun. The STEREO mission will study CMEs and the Sun's coronal structure using two spacecraft orbiting the Sun, one drifting ahead of the Earth and one behind. STEREO will obtain simultaneous extreme ultraviolet and visible image pairs, along with simultaneous measurements of fields and particles, at gradually increasing angular separations over the course of the mission. The STEREO spacecraft will be outfitted with two instrument suites and two instruments: IMPACT (In situ Measurements of Particles and CME Transients); SECCHI (Sun-Earth Connection Coronal and Heliospheric Investigation); PLASTIC (Plasma and Suprathermal Ion Composition); and SWAVES (STEREO/WAVES). STEREO Phase C/D confirmation occurred in March 2002; the dual observatories will be launched on a single Delta II from Cape Canaveral in November 2005. This presentation focuses on the goals and approach of the mission.

  4. Computer vision: automating DEM generation of active lava flows and domes from photos

    NASA Astrophysics Data System (ADS)

    James, M. R.; Varley, N. R.; Tuffen, H.

    2012-12-01

    Accurate digital elevation models (DEMs) form fundamental data for assessing many volcanic processes. We present a photo-based approach developed within the computer vision community to produce DEMs from a consumer-grade digital camera and freely available software. Two case studies, based on the Volcán de Colima lava dome and the Puyehue Cordón-Caulle obsidian flow, highlight the advantages of the technique in terms of the minimal expertise required, the speed of data acquisition and the automated processing involved. The reconstruction procedure combines structure-from-motion and multi-view stereo algorithms (SfM-MVS) and can generate dense 3D point clouds (millions of points) from multiple photographs of a scene taken from different positions. Processing is carried out by automated software (e.g. http://blog.neonascent.net/archives/bundler-photogrammetry-package/). SfM-MVS reconstructions are initially un-scaled and un-oriented, so additional geo-referencing software has been developed. Although this step requires the presence of some control points, the SfM-MVS approach has significantly easier image acquisition and control requirements than traditional photogrammetry, facilitating its use in a broad range of difficult environments. At Colima, the lava dome surface was reconstructed from recent and archive images taken from light-aircraft overflights (2007-2011). Scaling and geo-referencing were carried out using features identified in web-sourced ortho-imagery obtained as a basemap layer in ArcMap - no ground-based measurements were required. Average surface measurement densities are typically 10-40 points per m². Over mean viewing distances of ~500-2500 m (for different surveys), RMS error on the control features is ~1.5 m. The derived DEMs (with 1-m grid resolution) are sufficient to quantify volumetric change, as well as to highlight the structural evolution of the upper surface of the dome following an explosion in June 2011. At Puyehue Cordón-Caulle…

  5. Evaluation of dynamic programming among the existing stereo matching algorithms

    NASA Astrophysics Data System (ADS)

    Huat, Teo Chee; Manap, Nurulfajar bin Abd

    2015-05-01

    Various stereo matching algorithms exist in image processing that are applied to stereo vision images to obtain better disparity depth maps. One of them is the dynamic programming method. This research evaluates the performance of dynamic programming against other existing methods. The dynamic programming algorithm used here performs global optimization, which provides better accuracy and computational efficiency on stereo images than other existing stereo matching algorithms. The dynamic programming algorithm used in this research is a current method, in which disparity is estimated at a particular pixel together with all the other pixels, unlike older, scanline-based dynamic programming methods. Details of every existing method are presented in this paper, together with a comparison between dynamic programming and the existing methods. On this basis, the dynamic programming method can be proposed for many applications in image processing.
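    The scanline-based dynamic programming that the abstract contrasts with can be sketched as follows. This is a generic textbook formulation, not the authors' algorithm: it aligns one pair of scanlines under an assumed occlusion cost and reads a per-pixel disparity off the backtracked path (-1 marks occluded pixels).

```python
import numpy as np

def dp_scanline_disparity(left, right, occlusion_cost=2.0):
    """Classic scanline dynamic-programming stereo: align one left
    scanline with one right scanline, allowing occlusions."""
    n, m = len(left), len(right)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * occlusion_cost   # all-left-occluded prefix
    D[0, :] = np.arange(m + 1) * occlusion_cost   # all-right-occluded prefix
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = D[i-1, j-1] + abs(float(left[i-1]) - float(right[j-1]))
            D[i, j] = min(match,
                          D[i-1, j] + occlusion_cost,   # left pixel occluded
                          D[i, j-1] + occlusion_cost)   # right pixel occluded
    # backtrack to recover per-pixel disparity (i - j for matched pixels)
    disp = np.full(n, -1)
    i, j = n, m
    while i > 0 and j > 0:
        match = D[i-1, j-1] + abs(float(left[i-1]) - float(right[j-1]))
        if D[i, j] == match:
            disp[i-1] = (i - 1) - (j - 1)
            i, j = i - 1, j - 1
        elif D[i, j] == D[i-1, j] + occlusion_cost:
            i -= 1
        else:
            j -= 1
    return disp
```

    On a left scanline that is a one-pixel shift of the right scanline, the recovered disparities are 1 for the matched pixels and -1 for the single occluded one.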

  6. Neural correlates of active vision: An fMRI comparison of natural reading and scene viewing.

    PubMed

    Choi, Wonil; Henderson, John M

    2015-08-01

    Theories of eye movement control during active vision tasks such as reading and scene viewing have primarily been developed and tested using data from eye tracking and computational modeling, and little is currently known about the neurocognition of active vision. The current fMRI study was conducted to examine the nature of the cortical networks that are associated with active vision. Subjects were asked to read passages for meaning and view photographs of scenes for a later memory test. The eye movement control network comprising the frontal eye field (FEF), supplementary eye fields (SEF), and intraparietal sulcus (IPS), commonly activated during single-saccade eye movement tasks, was also involved in reading and scene viewing, suggesting that a common control network is engaged when eye movements are executed. However, the activated locus of the FEF varied across the two tasks, with medial FEF more activated in scene viewing relative to passage reading and lateral FEF more activated in reading than scene viewing. The results suggest that eye movements during active vision are associated with both domain-general and domain-specific components of the eye movement control network. PMID:26026255

  7. Digital stereoscopic photography using StereoData Maker

    NASA Astrophysics Data System (ADS)

    Toeppen, John; Sykes, David

    2009-02-01

    Stereoscopic digital photography has become much more practical with the use of USB wired connections between a pair of Canon cameras using StereoData Maker software for precise synchronization. StereoPhoto Maker software is now used to automatically combine and align right and left image files to produce a stereo pair. Side by side images are saved as pairs and may be viewed using software that converts the images into the preferred viewing format at the time of display. Stereo images may be shared on the internet, displayed on computer monitors, autostereo displays, viewed on high definition 3D TVs, or projected for a group. Stereo photographers are now free to control composition using point and shoot settings, or are able to control shutter speed, aperture, focus, ISO, and zoom. The quality of the output depends on the developed skills of the photographer as well as their understanding of the software, human vision and the geometry they choose for their cameras and subjects. Observers of digital stereo images can zoom in for greater detail and scroll across large panoramic fields with a few keystrokes. The art, science, and methods of taking, creating and viewing digital stereo photos are presented in a historic and developmental context in this paper.

  8. Stereo and IMU-Assisted Visual Odometry for Small Robots

    NASA Technical Reports Server (NTRS)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e. 320 × 240), or 8 fps at VGA (Video Graphics Array, 640 × 480) resolutions, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
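    The disparity computation described in function (1) can be illustrated with a brute-force block-matching sketch. The window size and the use of a sum-of-absolute-differences (SAD) score are assumptions for illustration, not details of the flight software, which uses cross-correlation on a DSP.

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, win=2):
    """Brute-force block matching: for each left-image pixel, compare a
    (2*win+1)^2 window against candidate disparities in the right image
    and keep the disparity with the smallest SAD."""
    H, W = left.shape
    disp = np.zeros((H, W), dtype=np.int32)
    padl = np.pad(left.astype(np.float64), win, mode='edge')
    padr = np.pad(right.astype(np.float64), win, mode='edge')
    for y in range(H):
        for x in range(W):
            wl = padl[y:y + 2*win + 1, x:x + 2*win + 1]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):   # stay inside the image
                wr = padr[y:y + 2*win + 1, x - d:x - d + 2*win + 1]
                sad = np.abs(wl - wr).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

    On a synthetic pair where the left image is the right image shifted by two pixels, interior pixels recover a disparity of 2.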

  9. Multiview stereo and silhouette fusion via minimizing generalized reprojection error☆

    PubMed Central

    Li, Zhaoxin; Wang, Kuanquan; Jia, Wenyan; Chen, Hsin-Chen; Zuo, Wangmeng; Meng, Deyu; Sun, Mingui

    2014-01-01

    Accurate reconstruction of 3D geometrical shape from a set of calibrated 2D multiview images is an active yet challenging task in computer vision. The existing multiview stereo methods usually perform poorly in recovering deeply concave and thinly protruding structures, and suffer from several common problems like slow convergence, sensitivity to initial conditions, and high memory requirements. To address these issues, we propose a two-phase optimization method for generalized reprojection error minimization (TwGREM), where a generalized framework of reprojection error is proposed to integrate stereo and silhouette cues into a unified energy function. For the minimization of the function, we first introduce a convex relaxation on 3D volumetric grids which can be efficiently solved using variable splitting and Chambolle projection. Then, the resulting surface is parameterized as a triangle mesh and refined using surface evolution to obtain a high-quality 3D reconstruction. Our comparative experiments with several state-of-the-art methods show that the performance of TwGREM based 3D reconstruction is among the highest with respect to accuracy and efficiency, especially for data with smooth texture and sparsely sampled viewpoints. PMID:25558120

  10. Applications of artificial intelligence 1993: Machine vision and robotics; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    SciTech Connect

    Boyer, K.L.; Stark, L.

    1993-01-01

    Various levels of machine vision and robotics are addressed, including object recognition, image feature extraction, active vision, stereo and matching, range image acquisition and analysis, sensor models, motion and path planning, and software environments. Papers are presented on integration of geometric and nongeometric attributes for fast object recognition, a four-degree-of-freedom robot head for active computer vision, shape reconstruction from shading with perspective projection, fast extraction of planar surfaces from range images, and real-time reconstruction and rendering of three-dimensional occupancy maps.

  11. Stereo Orthogonal Axonometric Perspective for the Teaching of Descriptive Geometry

    ERIC Educational Resources Information Center

    Méxas, José Geraldo Franco; Guedes, Karla Bastos; Tavares, Ronaldo da Silva

    2015-01-01

    Purpose: The purpose of this paper is to present the development of a software for stereo visualization of geometric solids, applied to the teaching/learning of Descriptive Geometry. Design/methodology/approach: The paper presents the traditional method commonly used in computer graphic stereoscopic vision (implemented in C language) and the…

  12. A stereo-compound hybrid microscope for combined intracellular and optical recording of invertebrate neural network activity.

    PubMed

    Frost, William N; Wang, Jean; Brandon, Christopher J

    2007-05-15

    Optical recording studies of invertebrate neural networks with voltage-sensitive dyes seldom employ conventional intracellular electrodes. This may in part be due to the traditional reliance on compound microscopes for such work. While such microscopes have high light-gathering power, they do not provide depth of field, making working with sharp electrodes difficult. Here we describe a hybrid microscope design, with switchable compound and stereo objectives, that eases the use of conventional intracellular electrodes in optical recording experiments. We use it, in combination with a voltage-sensitive dye and photodiode array, to identify neurons participating in the swim motor program of the marine mollusk Tritonia. This microscope design should be applicable to optical recording studies in many preparations. PMID:17306887

  13. An Inquiry-Based Vision Science Activity for Graduate Students and Postdoctoral Research Scientists

    NASA Astrophysics Data System (ADS)

    Putnam, N. M.; Maness, H. L.; Rossi, E. A.; Hunter, J. J.

    2010-12-01

    The vision science activity was originally designed for the 2007 Center for Adaptive Optics (CfAO) Summer School. Participants were graduate students, postdoctoral researchers, and professionals studying the basics of adaptive optics. The majority were working in fields outside vision science, mainly astronomy and engineering. The primary goal of the activity was to give participants first-hand experience with the use of a wavefront sensor designed for clinical measurement of the aberrations of the human eye and to demonstrate how the resulting wavefront data generated from these measurements can be used to assess optical quality. A secondary goal was to examine the role wavefront measurements play in the investigation of vision-related scientific questions. In 2008, the activity was expanded to include a new section emphasizing defocus and astigmatism and vision testing/correction in a broad sense. As many of the participants were future post-secondary educators, a final goal of the activity was to highlight the inquiry-based approach as a distinct and effective alternative to traditional laboratory exercises. Participants worked in groups throughout the activity and formative assessment by a facilitator (instructor) was used to ensure that participants made progress toward the content goals. At the close of the activity, participants gave short presentations about their work to the whole group, the major points of which were referenced in a facilitator-led synthesis lecture. We discuss highlights and limitations of the vision science activity in its current format (2008 and 2009 summer schools) and make recommendations for its improvement and adaptation to different audiences.

  14. Stereo Imaging Tactical Helper

    NASA Technical Reports Server (NTRS)

    Toole, Nicholas T.

    2010-01-01

    The Stereo Imaging Tactical Helper (SITH) program displays left and right images in stereo using the display technology made available by the JADIS framework. An overlay of the surface described by the disparity map (generated from the left and right images) allows the map to be compared to the actual images. In addition, an interactive cursor, whose visual depth is controlled by the disparity map, is used to ensure the correlated surface matches the real surface. This enhances the ability of operations personnel to provide quality control for correlation results, as well as to greatly assist developers working on correlation improvements. While its primary purpose is as a quality control tool for inspecting correlation results, SITH is also straightforward to use as a basic stereo image viewer.

  15. A stereo-rangefinder experience

    NASA Astrophysics Data System (ADS)

    Harker, G. S.

    Past experiences in attempting to incorporate stereo range finders in Army tank operations are discussed. The problems making the range finder impractical are presented, and it is suggested there is a lack of basic understanding of the stereo processes.

  16. Poor Vision, Functioning, and Depressive Symptoms: A Test of the Activity Restriction Model

    ERIC Educational Resources Information Center

    Bookwala, Jamila; Lawson, Brendan

    2011-01-01

    Purpose: This study tested the applicability of the activity restriction model of depressed affect to the context of poor vision in late life. This model hypothesizes that late-life stressors contribute to poorer mental health not only directly but also indirectly by restricting routine everyday functioning. Method: We used data from a national…

  17. STEREO Mission Design Implementation

    NASA Technical Reports Server (NTRS)

    Guzman, Jose J.; Dunham, David W.; Sharer, Peter J.; Hunt, Jack W.; Ray, J. Courtney; Shapiro, Hongxing S.; Ossing, Daniel A.; Eichstedt, John E.

    2007-01-01

    STEREO (Solar-TErrestrial RElations Observatory) is the third mission in the Solar Terrestrial Probes program (STP) of the National Aeronautics and Space Administration (NASA) Science Mission Directorate Sun-Earth Connection theme. This paper describes the successful implementation (lunar swingby targeting) of the mission following the first phasing orbit to deployment into the heliocentric mission orbits following the two lunar swingbys. The STEREO Project had to make some interesting trajectory decisions in order to exploit opportunities to image a bright comet and an unusual lunar transit across the Sun.

  18. The Effect of Gender and Level of Vision on the Physical Activity Level of Children and Adolescents with Visual Impairment

    ERIC Educational Resources Information Center

    Aslan, Ummuhan Bas; Calik, Bilge Basakci; Kitis, Ali

    2012-01-01

    This study was planned in order to determine physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level in visually impaired children and adolescents. A total of 30 visually impaired children and adolescents (16 low vision and 14 blind) aged between…

  19. Morphing in stereo animation

    NASA Astrophysics Data System (ADS)

    Davis, James A.; McAllister, David F.

    1999-05-01

    There are several techniques that can be used to produce morphs of 3D objects. The traditional solution is to apply 3D algorithms that transform the shape and attributes of one object into those of another. The problems in 3D morphing include avoiding self-intersections during the morph, specification of corresponding regions in the source and target objects, and the imposition of geometric constraints on the objects. At first glance, the application of well-understood 2D morphing techniques to stereo images would seem to be a reasonable and much simpler alternative to the production of 3D models and the application of 3D morphing to those models. While it is true that in certain cases the application of 2D linear morphing techniques to stereo images produces effective morphs, the use of this technique places very strict geometric constraints on the objects being morphed. When linear 2D morphing techniques are applied to stereo images, where the parallax encoded in the images is of utmost importance, they linearly interpolate points between the source and target images, which interpolates the parallax as well. We examine the ramifications of this limitation and discuss the geometric constraints under which stereo morphing is useful.
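    The linear interpolation of corresponding points, and hence of parallax, that the abstract describes can be shown in a few lines; the point lists here are hypothetical.

```python
def lerp_points(src, dst, t):
    """Linear 2D morph of corresponding points: each point moves on a
    straight line from its source to its target position. Applied
    independently to the left and right images of a stereo pair, this
    also linearly interpolates the horizontal parallax, which is the
    limitation discussed above."""
    return [((1 - t) * xs + t * xd, (1 - t) * ys + t * yd)
            for (xs, ys), (xd, yd) in zip(src, dst)]
```

    For example, if a feature's parallax is -2 pixels in the source pair and 6 pixels in the target pair, the morph at t = 0.5 exhibits the linear blend, a parallax of 2 pixels, whether or not that depth is geometrically plausible for the object.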

  20. Reduction of computational complexity in the image/video understanding systems with active vision

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-10-01

    The vision system evolved not only as a recognition system, but also as a sensory system for reaching, grasping and other motion activities. In advanced creatures, it became a component of the prediction function, allowing creation of environmental models and activity planning. Fast information processing and decision making is vital for any living creature, and requires reduction of informational and computational complexity. The brain achieves this goal using symbolic coding, hierarchical compression, and selective processing of visual information. Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for such models. It converts visual information into relational Network-Symbolic structures, instead of precise computation of 3-dimensional models. Narrow foveal vision provides separation of figure from ground, object identification, semantic analysis, and precise control of actions. Rough wide peripheral vision identifies and tracks salient motion, guiding the foveal system to salient objects. It also provides scene context. Objects with rigid bodies and other stable systems have coherent relational structures. Hierarchical compression and Network-Symbolic transformations derive more abstract structures that allow a particular structure to be invariably recognized as an exemplar of a class. Robotic systems equipped with such smart vision will be able to navigate effectively in any environment, understand the situation, and act accordingly.

  1. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)
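    A red/green anaglyph such as the one described can be composed by routing each eye's view to a different color channel; this sketch assumes single-channel (grayscale) renderings of the scattergram as input.

```python
import numpy as np

def make_anaglyph(left_gray, right_gray):
    """Red/green anaglyph: the left view drives the red channel and the
    right view the green channel, for red(left)/green(right) glasses.
    The blue channel is left empty."""
    rgb = np.zeros(left_gray.shape + (3,), dtype=left_gray.dtype)
    rgb[..., 0] = left_gray    # red   <- left-eye image
    rgb[..., 1] = right_gray   # green <- right-eye image
    return rgb
```

    The left and right inputs would be the same point cloud projected from two horizontally offset viewpoints; the filter glasses then deliver one projection to each eye.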

  2. 3D panorama stereo visual perception centering on the observers

    NASA Astrophysics Data System (ADS)

    Tang, YiPing; Zhou, Jingkai; Xu, Haitao; Xiang, Yun

    2015-09-01

    For existing three-dimensional (3D) laser scanners, acquiring geometry and color information of the objects simultaneously is difficult. Moreover, the current techniques cannot store, modify, and model the point clouds efficiently. In this work, we have developed a novel sensor system, which is called active stereo omni-directional vision sensor (ASODVS), to address those problems. ASODVS is an integrated system composed of a single-view omni-directional vision sensor and a mobile planar green laser generator platform. Driven by a stepper motor, the laser platform can move vertically along the axis of the ASODVS. During the scanning of the laser generators, the panoramic images of the environment are captured and the characteristics and space location information of the laser points are calculated accordingly. Based on the image information of the laser points, the 3D space can be reconstructed. Experimental results demonstrate that the proposed ASODVS system can measure and reconstruct the 3D space in real-time and with high quality.

  3. High Resolution Stereo Camera (HRSC) on Mars Express - a decade of PR/EO activities at Freie Universität Berlin

    NASA Astrophysics Data System (ADS)

    Balthasar, Heike; Dumke, Alexander; van Gasselt, Stephan; Gross, Christoph; Michael, Gregory; Musiol, Stefanie; Neu, Dominik; Platz, Thomas; Rosenberg, Heike; Schreiner, Björn; Walter, Sebastian

    2014-05-01

    Since 2003 the High Resolution Stereo Camera (HRSC) experiment on the Mars Express mission has been in orbit around Mars. First images were sent to Earth on January 14th, 2004. The goal-oriented HRSC data dissemination and the transparent representation of the associated work and results are the main aspects that contributed to the success in the public perception of the experiment. The Planetary Sciences and Remote Sensing Group at Freie Universität Berlin (FUB) offers both an interactive web based data access, and browse/download options for HRSC press products [www.fu-berlin.de/planets]. Close collaboration with exhibitors as well as print and digital media representatives allows for regular and directed dissemination of, e.g., conventional imagery, orbital/synthetic surface epipolar images, video footage, and high-resolution displays. On a monthly basis we prepare press releases in close collaboration with the European Space Agency (ESA) and the German Aerospace Center (DLR) [http://www.geo.fu-berlin.de/en/geol/fachrichtungen/planet/press/index.html]. A release comprises panchromatic, colour, anaglyph, and perspective views of a scene taken from an HRSC image of the Martian surface. In addition, a context map and descriptive texts in English and German are provided. More sophisticated press releases include elaborate animations and simulated flights over the Martian surface, perspective views of stereo data combined with colour and high resolution, mosaics, and perspective views of data mosaics. Altogether 970 high-quality PR products and 15 movies were created at FUB during the last decade and published via FUB/DLR/ESA platforms. We support educational outreach events, as well as permanent and special exhibitions. Examples for that are the yearly "Science Fair", where special programs for kids are offered, and the exhibition "Mars Mission and Vision" which is on tour until 2015 through 20 German towns, showing 3-D movies, surface models, and images of the HRSC

  4. Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline

    NASA Technical Reports Server (NTRS)

    Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor

    2010-01-01

    A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit to improve both the accuracy and robustness of the estimate. The stepwise regression method is applied to estimate the relaxed weight of each observation.

  5. Asynchronous event-based binocular stereo matching.

    PubMed

    Rogister, Paul; Benosman, Ryad; Ieng, Sio-Hoi; Lichtsteiner, Patrick; Delbruck, Tobi

    2012-02-01

    We present a novel event-based stereo matching algorithm that exploits the asynchronous visual events from a pair of silicon retinas. Unlike conventional frame-based cameras, recent artificial retinas transmit their outputs as a continuous stream of asynchronous temporal events, in a manner similar to the output cells of the biological retina. Our algorithm uses the timing information carried by this representation in addressing the stereo-matching problem on moving objects. Using the high temporal resolution of the acquired data stream for the dynamic vision sensor, we show that matching on the timing of the visual events provides a new solution to the real-time computation of 3-D objects when combined with geometric constraints using the distance to the epipolar lines. The proposed algorithm is able to filter out incorrect matches and to accurately reconstruct the depth of moving objects despite the low spatial resolution of the sensor. This brief sets up the principles for further event-based vision processing and demonstrates the importance of dynamic information and spike timing in processing asynchronous streams of visual events. PMID:24808513
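    A toy version of the timing-plus-epipolar matching idea on asynchronous events might look like the following. The coincidence window, row tolerance, and brute-force search are illustrative assumptions, not the authors' algorithm; events are hypothetical (t, x, y, polarity) tuples from two rectified sensors.

```python
def match_events(left_events, right_events, dt_max=1e-3, row_tol=1):
    """Match each left event to the right event that is closest in time,
    restricted to events with the same polarity, an epipolar-consistent
    row (|y_l - y_r| <= row_tol for rectified cameras), non-negative
    disparity (x_l >= x_r), and a temporal coincidence within dt_max."""
    matches = []
    for tl, xl, yl, pl in left_events:
        best, best_ev = dt_max, None
        for tr, xr, yr, pr in right_events:
            if pr == pl and abs(yr - yl) <= row_tol and xl >= xr:
                if abs(tr - tl) < best:
                    best, best_ev = abs(tr - tl), (tr, xr, yr, pr)
        if best_ev is not None:
            matches.append(((tl, xl, yl, pl), best_ev))
    return matches
```

    The temporal window plays the role that intensity similarity plays in frame-based matching: only near-simultaneous events on corresponding epipolar lines are considered candidates, which filters out most incorrect matches.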

  6. Vision based assistive technology for people with dementia performing activities of daily living (ADLs): an overview

    NASA Astrophysics Data System (ADS)

    As'ari, M. A.; Sheikh, U. U.

    2012-04-01

    The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia performing activities of daily living (ADLs) promises a reduction in the cost of care, especially in training and hiring human caregivers. The main problem, however, is the various kinds of sensing agents used in such systems, which depend on the intent (type of ADL) and the environment where the activity is performed. In this paper we present an overview of the potential of computer-vision-based sensing agents in assistive systems, and of how they can be generalized to be invariant to various kinds of ADLs and environments. We find that there exists a gap in designing such systems with the existing vision-based human action recognition methods, due to the cognitive and physical impairments of people with dementia.

  7. The STEREO Mission

    NASA Technical Reports Server (NTRS)

    Kucera, Therese

    2005-01-01

    STEREO (Solar TErrestrial RElations Observatory) will launch in 2006 on a two-year mission to study Coronal Mass Ejections (CMEs) and the solar wind. The mission consists of two space-based observatories - one moving ahead of Earth in its orbit, the other trailing behind - to provide the first-ever stereoscopic measurements to study the Sun and the nature of CMEs. STEREO's scientific objectives are to: 1) Understand the causes and mechanisms of coronal mass ejection (CME) initiation; 2) Characterize the propagation of CMEs through the heliosphere; 3) Discover the mechanisms and sites of energetic particle acceleration in the low corona and the interplanetary medium; 4) Improve the determination of the structure of the ambient solar wind. Additional information is included in the original extended abstract.

  8. Northern Sinus Meridiani Stereo

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-341, 25 April 2003

    This is a stereo (3-d anaglyph) composite of Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle images of northern Sinus Meridiani near 2°N, 0°W. The light-toned materials at the south (bottom) end of the picture are considered to be thick (100-200 meters; 300-600 ft) exposures of sedimentary rock. Several ancient meteor impact craters are being exhumed from within these layered materials. To view in stereo, use '3-d' glasses with red over the left eye, and blue over the right. The picture covers an area approximately 113 km (70 mi) wide; north is up.

  9. Quad stereo-microscopy

    NASA Astrophysics Data System (ADS)

    Hay, Rebecca F.; Gibson, Graham M.; Lee, Michael P.; Padgett, Miles J.; Phillips, David B.

    2014-09-01

    Stereo-microscopy is a technique that enables a sample to be imaged from two directions simultaneously, allowing the tracking of microscopic objects in three dimensions. This is achieved by illuminating the sample from different directions, each illumination direction producing an individual image. These images are superimposed in the image plane but can be easily separated using a diffractive optical element in the Fourier plane of the imaging arm. Therefore this enables 3-dimensional coordinates to be reconstructed using simple 2-dimensional image tracking and parallax. This is a powerful technique when combined with holographic optical tweezers (HOT), where multiple objects can be trapped and tracked simultaneously in three dimensions. In this work, we extend this concept to four different illumination directions: quad stereo-microscopy. This allows us to measure the accuracy of tracking in three dimensions, and to optimise the system.
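    The parallax-to-depth step mentioned above can be sketched under a simplified geometry in which the two illumination directions are tilted symmetrically about the optical axis by a known angle; the tilt parameter and the small formula below are illustrative assumptions, not the instrument's calibration.

```python
import math

def depth_from_parallax(x_view1, x_view2, tilt_deg):
    """Toy parallax-to-depth conversion: if the two illumination
    directions are tilted by +/- tilt_deg about the optical axis, an
    axial displacement z shifts the two images laterally in opposite
    directions, so z = (x1 - x2) / (2 * tan(tilt))."""
    return (x_view1 - x_view2) / (2.0 * math.tan(math.radians(tilt_deg)))
```

    With four views, as in the quad configuration, two independent depth estimates of this kind can be compared to quantify the tracking accuracy.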

  10. Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)

    1999-01-01

    A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid, employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in a world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo matched to obtain three-dimensional locations of the particles as a function of time, so that velocities can be measured therefrom. The stereo imaging velocimetry technique of the present invention provides a full-field, quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.
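    A minimal sketch of the perpendicular-camera fusion step (illustrative only; matching on the shared vertical coordinate is a simplification of the centroid-tracking and stereo-matching pipeline the patent describes):

```python
def stereo_match(track_a, track_b):
    """Fuse 2D tracks from two perpendicular cameras into a 3D track.

    Camera A (looking along z) reports (x, y); camera B (looking along x)
    reports (z, y). The shared y coordinate is the matching cue."""
    fused = []
    for (xa, ya), (zb, yb) in zip(track_a, track_b):
        assert abs(ya - yb) < 1e-6  # same particle seen by both cameras
        fused.append((xa, 0.5 * (ya + yb), zb))
    return fused

def velocity(track_3d, dt):
    """Finite-difference 3D velocities between consecutive frames."""
    return [tuple((b - a) / dt for a, b in zip(p0, p1))
            for p0, p1 in zip(track_3d, track_3d[1:])]

# Hypothetical particle moving at (1, 2, 3) units/s, sampled at dt = 0.1 s:
cam_a = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.4)]   # (x, y)
cam_b = [(0.0, 0.0), (0.3, 0.2), (0.6, 0.4)]   # (z, y)
v = velocity(stereo_match(cam_a, cam_b), 0.1)
```

In the real system, matching must also resolve ambiguities when several particles share similar y coordinates, which is why centroid decomposition precedes the stereo match.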

  11. Usability of car stereo.

    PubMed

    Razza, Bruno Montanari; Paschoarelli, Luis Carlos

    2012-01-01

    Automotive sound systems vary widely in functions and modes of use across brands and models, which can create difficulties and a lack of consistency for the user. This study analyzed the usability of car stereos commonly found on the market. Four products were evaluated through task analysis and after-use reports; the results indicate serious usability issues with respect to mode of operation, organization, clarity and quality of information, and visibility and readability, among others. PMID:22317617

  12. STEREO Model Rheometry

    NASA Astrophysics Data System (ADS)

    Pulupa, M.; Bale, S. D.

    2006-12-01

    The SWAVES instrument on STEREO consists of three orthogonal six-meter monopole antennae mounted on each spacecraft. The environment of a spacecraft-borne antenna contains many parasitic conductors, such as the spacecraft chassis, solar panels, and other instruments. The presence of these conductors results in an effective electrical antenna length that differs in magnitude and direction from that of the physical antenna. In order to determine the true effective length of the SWAVES antennae, it is necessary to make independent measurements of the voltages induced by known electric fields. The scale model rheometry method, as used by Rucker et al. for the Cassini RPWS instrument, offers a means of measuring effective antenna vectors. The antenna measurements are made on a model spacecraft suspended in an electrolytic tank, and then scaled up to the full-size spacecraft. We have constructed both a rheometry apparatus and a 1:20 scale model of the STEREO spacecraft, which was gold plated to ensure good surface conductivity. We will demonstrate the validity of the procedure and present the results of the STEREO model measurements.

  13. Stereo Matching by Filtering-Based Disparity Propagation.

    PubMed

    Wang, Xingzheng; Tian, Yushi; Wang, Haoqian; Zhang, Yongbing

    2016-01-01

    Stereo matching is essential and fundamental in computer vision tasks. In this paper, a novel stereo matching algorithm based on disparity propagation using edge-aware filtering is proposed. By extracting disparity subsets for reliable points and customizing the cost volume, the initial disparity map is refined through filtering-based disparity propagation. Then, an edge-aware filter with low computational complexity is adopted to formulate the cost volume, which makes the proposed method independent of the local window size. Experimental results demonstrate the effectiveness of the proposed scheme. Bad pixels in our output disparity map are considerably decreased. The proposed method greatly outperforms the adaptive support-weight approach and other conventional window-based local stereo matching algorithms. PMID:27626800
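    For context, the baseline that such filtering-based methods improve on is a simple winner-take-all search over a matching-cost volume. A toy single-scanline sketch (not the paper's algorithm; absolute intensity difference stands in for the cost function):

```python
def wta_disparity(left, right, max_disp):
    """Winner-take-all disparity on one scanline: for each left pixel,
    pick the disparity d minimizing |left[x] - right[x - d]|."""
    disp = []
    for x, lv in enumerate(left):
        costs = [(abs(lv - right[x - d]), d)
                 for d in range(min(x, max_disp) + 1)]
        disp.append(min(costs)[1])  # lowest cost wins
    return disp

left  = [10, 20, 30, 40, 50, 60]
right = [20, 30, 40, 50, 60, 70]   # scene content shifted by one pixel
d = wta_disparity(left, right, max_disp=2)
```

Per-pixel winner-take-all is noisy in textureless regions, which is exactly what disparity propagation with edge-aware filtering is designed to correct.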

  14. Stereo Imaging Miniature Endoscope

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; Manohara, Harish; White, Victor; Shcheglov, Kirill V.; Shahinian, Hrayr

    2011-01-01

    Stereo imaging requires two different perspectives of the same object. Traditionally, a pair of side-by-side cameras would be used, but that is not feasible for something as tiny as an endoscope less than 4 mm in diameter that could be used for minimally invasive surgeries or geoexploration through tiny fissures or bores. The solution proposed here is to employ a single lens and a pair of conjugated multiple-bandpass filters (CMBFs) to separate stereo images. When a CMBF is placed in front of each of the stereo channels, only one wavelength of the visible spectrum that falls within the passbands of the CMBF is transmitted at a time when illuminated. Because the passbands are conjugated, only one of the two channels will see a particular wavelength. These time-multiplexed images are then mixed and reconstructed for display as stereo images. The basic principle of stereo imaging involves an object that is illuminated at specific wavelengths, and a range of illumination wavelengths is time multiplexed. The light reflected from the object selectively passes through one of the two CMBFs integrated with two pupils separated by a baseline distance, and is focused onto the imaging plane through an objective lens. The passband range of the CMBFs and the illumination wavelengths are synchronized such that each of the CMBFs allows transmission of only the alternate illumination wavelength bands, and the transmission bandwidths of the CMBFs are complementary to each other, so that when one transmits, the other blocks. This can be clearly understood if the wavelength bands are divided broadly into red, green, and blue: the illumination wavelengths then contain two bands in red (R1, R2), two bands in green (G1, G2), and two bands in blue (B1, B2). Therefore, when the object is illuminated by R1, the reflected light enters through only the left CMBF, as the R1 band corresponds to the transmission window of the left CMBF at the left pupil; it is blocked by the right CMBF.

  15. STEREO - The Sun from Two Points of View

    NASA Technical Reports Server (NTRS)

    Kucera, Therese A.

    2010-01-01

    NASA's STEREO (Solar TErrestrial RElations Observatory) mission continues its investigations into the three dimensional structure of the sun and heliosphere. With the recent increases in solar activity STEREO is yielding new results obtained using the mission's full array of imaging and in-situ instrumentation, and in February 2011 the two spacecraft will be 180 degrees apart allowing us to directly image the entire solar disk for the first time. We will discuss the latest results from STEREO and how they change our view of solar activity and its effects on our solar system.

  16. Design and control of active vision based mechanisms for intelligent robots

    NASA Technical Reports Server (NTRS)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

    In this paper, we propose a design of an active vision system for intelligent robot applications. The system has degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach is suggested for determining the point of fixation from potential gaze targets, using an evaluation function that represents human visual behavior in response to outside stimuli. We also characterize different visual tasks in the two cameras for vergence control purposes, and a phase-based method based on binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence control is discussed.
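    As a toy illustration of disparity extraction from binarized images (a simple correlation-based stand-in, not the paper's phase-based method; the rows and threshold below are hypothetical):

```python
def binarize(row, threshold):
    """Threshold an intensity row into a 0/1 pattern."""
    return [1 if v > threshold else 0 for v in row]

def vergence_disparity(left, right, max_shift):
    """Estimate horizontal disparity between binarized rows by picking
    the shift that maximizes their overlap (binary correlation)."""
    def score(shift):
        return sum(l & r for l, r in zip(left[shift:], right))
    return max(range(max_shift + 1), key=score)

left  = binarize([0, 9, 9, 0, 9, 0, 0, 9], threshold=5)
right = binarize([9, 9, 0, 9, 0, 0, 9, 0], threshold=5)  # shifted by 1
shift = vergence_disparity(left, right, max_shift=3)
```

The recovered shift would drive the vergence controller toward zero disparity at the fixation point.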

  17. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access and global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer megapixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies, both because of their size as well as their higher speed.
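    The constant-contrast-sensitivity property of a logarithmic pixel can be verified with a one-line model (idealized; real log pixels add offsets, gain variation, and noise):

```python
import math

def log_pixel(irradiance, gain=1.0):
    """Idealized logarithmic pixel: output grows with log(I), so a fixed
    contrast *ratio* maps to a fixed output step at any absolute level."""
    return gain * math.log(irradiance)

# A 2:1 contrast step gives the same output difference whether the scene
# is dim or a million times brighter:
step_dim    = log_pixel(2.0) - log_pixel(1.0)
step_bright = log_pixel(2e6) - log_pixel(1e6)
```

This is why a log response covers a range near a million to one while keeping contrasts and colors stable across illumination levels.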

  18. Opportunity's View, Sol 958 (Stereo)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA01897

    [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA01897

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo view of the rover's surroundings on the 958th sol, or Martian day, of its surface mission (Oct. 4, 2006).

    This view is presented as a cylindrical-perspective projection with geometric seam correction. The image appears three-dimensional when viewed through red-green stereo glasses.

  19. Intelligent robots and computer vision IX: Algorithms and techniques; Proceedings of the Meeting, Boston, MA, Nov. 5-7, 1990

    SciTech Connect

    Casasent, D.P. )

    1991-01-01

    This volume presents the newest research results, trends, and developments in intelligent robots and computer vision, considering topics in pattern recognition for computer vision, image processing, intelligent material handling and vision, novel preprocessing algorithms and hardware, technology for support of intelligent robots and automated systems, fuzzy logic in intelligent systems and computer vision, and segmentation techniques. Attention is given to production quality control problems, recognition in face space, automatic vehicle model identification, active stereo inspection using computer solids models, use of coordinate mapping as a method for image data reduction, integration of a computer vision system with an IBM 7535 robot, fuzzy logic controller structures, supervised pixel classification using a feature space derived from an artificial visual system, and multiresolution segmentation of forward-looking IR and SAR imagery using neural networks.

  20. Stereo imaging in astronomy with ultralong baseline interferometry

    NASA Astrophysics Data System (ADS)

    Ray, Alak

    2015-08-01

    Astronomical images recorded on two-dimensional detectors do not give depth information, even for extended objects. Three-dimensional (3D) reconstruction of such objects, e.g. supernova remnants (SNRs), is based on Doppler velocity measurements across the image, assuming a position-velocity correspondence about the explosion center. Stereo imaging of astronomical objects, when possible, directly yields 3D structures independently of this assumption; such structures will advance our understanding of their evolution and origins, and allow comparison with model simulations. The large distance to astronomical objects and the relatively small attainable stereo baselines mean that the two views of the scene (the stereo image pair) differ by only a very small angle, and hence require very high-resolution imaging. Interferometry in the radio, mm, and shorter wavelengths with interplanetary baselines will be required to match these requirements. Using the Earth's orbital diameter as the stereo base for images constructed six months apart, as in parallax measurements, through very high resolution telescope arrays may achieve these goals. Apart from the challenges of space-based interferometry and refractive variations of the intervening medium, issues of camera calibration, triangulation in the presence of realistic noise, and image texture recognition and enhancement that are commonly faced in the field of Computer Vision have to be successfully addressed for stereo imaging in astronomy.
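    The scale of the problem follows from standard parallax arithmetic (by definition, a 1 AU baseline at 1 pc subtends 1 arcsecond); the distances below are hypothetical examples:

```python
def parallax_arcsec(baseline_au, distance_pc):
    """Angular parallax in arcseconds: a 1 AU baseline at 1 pc subtends
    1 arcsec, so the angle scales as baseline / distance."""
    return baseline_au / distance_pc

# Earth's orbital diameter (2 AU) viewing an object 100 pc away:
angle = parallax_arcsec(2.0, 100.0)
# At 10 kpc the same baseline yields only 0.2 milliarcsec, motivating
# interplanetary interferometric baselines.
```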

  1. Hearing symptoms personal stereos

    PubMed Central

    da Luz, Tiara Santos; Borja, Ana Lúcia Vieira de Freitas

    2012-01-01

    Summary Introduction: Practical and portable, personal stereos have become almost indispensable everyday accessories. Studies indicate that portable music players can cause long-term hearing damage in people who listen to music at high volume for extended periods. Objective: To determine the prevalence of auditory symptoms in users of personal stereos and to characterize their listening habits. Method: A prospective, observational, cross-sectional study carried out in three educational institutions in the city of Salvador, BA (two public and one private). A total of 400 students of both sexes, aged between 14 and 30 years, who reported habitually using personal stereos answered the questionnaire. Results: The most prevalent symptoms were hyperacusis (43.5%), aural fullness (30.5%), and tinnitus (27.5%), with tinnitus most common among the youngest participants. Regarding daily habits: 62.3% reported frequent use, 57% listened at high intensities, and 34% listened for prolonged periods. An inverse relationship was found between exposure time and age (p = 0.000), and a direct relationship with the prevalence of tinnitus. Conclusion: Although the young people admitted knowing the damage that exposure to high-intensity sound can cause to hearing, their daily habits showed inappropriate use of portable stereos, characterized by long periods of exposure, high intensities, frequent use, and a preference for insert earphones. The high prevalence of symptoms after use suggests an increased risk to the hearing of these young people. PMID:25991931

  2. Retinal Structures and Visual Cortex Activity are Impaired Prior to Clinical Vision Loss in Glaucoma.

    PubMed

    Murphy, Matthew C; Conner, Ian P; Teng, Cindy Y; Lawrence, Jesse D; Safiullah, Zaid; Wang, Bo; Bilonick, Richard A; Kim, Seong-Gi; Wollstein, Gadi; Schuman, Joel S; Chan, Kevin C

    2016-01-01

    Glaucoma is the second leading cause of blindness worldwide and its pathogenesis remains unclear. In this study, we measured the structure, metabolism and function of the visual system by optical coherence tomography and multi-modal magnetic resonance imaging in healthy subjects and glaucoma patients with different degrees of vision loss. We found that inner retinal layer thinning, optic nerve cupping and reduced visual cortex activity occurred before patients showed visual field impairment. The primary visual cortex also exhibited more severe functional deficits than higher-order visual brain areas in glaucoma. Within the visual cortex, choline metabolism was perturbed along with increasing disease severity in the eye, optic radiation and visual field. In summary, this study showed evidence that glaucoma deterioration is already present in the eye and the brain before substantial vision loss can be detected clinically using current testing methods. In addition, cortical cholinergic abnormalities are involved during trans-neuronal degeneration and can be detected non-invasively in glaucoma. The current results can be of impact for identifying early glaucoma mechanisms, detecting and monitoring pathophysiological events and eye-brain-behavior relationships, and guiding vision preservation strategies in the visual system, which may help reduce the burden of this irreversible but preventable neurodegenerative disease. PMID:27510406

  3. Retinal Structures and Visual Cortex Activity are Impaired Prior to Clinical Vision Loss in Glaucoma

    PubMed Central

    Murphy, Matthew C.; Conner, Ian P.; Teng, Cindy Y.; Lawrence, Jesse D.; Safiullah, Zaid; Wang, Bo; Bilonick, Richard A.; Kim, Seong-Gi; Wollstein, Gadi; Schuman, Joel S.; Chan, Kevin C.

    2016-01-01

    Glaucoma is the second leading cause of blindness worldwide and its pathogenesis remains unclear. In this study, we measured the structure, metabolism and function of the visual system by optical coherence tomography and multi-modal magnetic resonance imaging in healthy subjects and glaucoma patients with different degrees of vision loss. We found that inner retinal layer thinning, optic nerve cupping and reduced visual cortex activity occurred before patients showed visual field impairment. The primary visual cortex also exhibited more severe functional deficits than higher-order visual brain areas in glaucoma. Within the visual cortex, choline metabolism was perturbed along with increasing disease severity in the eye, optic radiation and visual field. In summary, this study showed evidence that glaucoma deterioration is already present in the eye and the brain before substantial vision loss can be detected clinically using current testing methods. In addition, cortical cholinergic abnormalities are involved during trans-neuronal degeneration and can be detected non-invasively in glaucoma. The current results can be of impact for identifying early glaucoma mechanisms, detecting and monitoring pathophysiological events and eye-brain-behavior relationships, and guiding vision preservation strategies in the visual system, which may help reduce the burden of this irreversible but preventable neurodegenerative disease. PMID:27510406

  4. Person and gesture tracking with smart stereo cameras

    NASA Astrophysics Data System (ADS)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to provide the low power, small footprint, and low cost required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications.
In this paper, we provide some background on the TYZX smart stereo cameras platform, describe the person tracking and gesture tracking systems

  5. Forward-looking activities: incorporating citizens' visions: A critical analysis of the CIVISTI method.

    PubMed

    Gudowsky, Niklas; Peissl, Walter; Sotoudeh, Mahshid; Bechtold, Ulrike

    2012-11-01

    Looking back on the many prophets who tried to predict the future as if it were predetermined, at first sight any forward-looking activity is reminiscent of making predictions with a crystal ball. In contrast to fortune tellers, today's exercises do not predict, but try to show different paths that an open future could take. A key motivation to undertake forward-looking activities is broadening the information basis for decision-makers to help them actively shape the future in a desired way. Experts, laypeople, or stakeholders may have different sets of values and priorities with regard to pending decisions on any issue related to the future. Therefore, considering and incorporating their views can, in the best case scenario, lead to more robust decisions and strategies. However, transferring this plurality into a form that decision-makers can consider is a challenge in terms of both design and facilitation of participatory processes. In this paper, we will introduce and critically assess a new qualitative method for forward-looking activities, namely CIVISTI (Citizen Visions on Science, Technology and Innovation; www.civisti.org), which was developed during an EU project of the same name. Focussing strongly on participation, with clear roles for citizens and experts, the method combines expert, stakeholder and lay knowledge to elaborate recommendations for decision-making in issues related to today's and tomorrow's science, technology and innovation. Consisting of three steps, the process starts with citizens' visions of a future 30-40 years from now. Experts then translate these visions into practical recommendations which the same citizens then validate and prioritise to produce a final product. The following paper will highlight the added value as well as limits of the CIVISTI method and will illustrate potential for the improvement of future processes. PMID:23204998

  6. Viking orbiter stereo imaging catalog

    NASA Technical Reports Server (NTRS)

    Blasius, K. R.; Vertrone, A. V.; Lewis, B. H.; Martin, M. D.

    1982-01-01

    The extremely long mission of the two Viking Orbiter spacecraft produced a wealth of photos of surface features. Many of these photos can be used to form stereo images, allowing the student of Mars to examine a subject in three dimensions. This catalog is a technical guide to the use of stereo coverage within the complex Viking imaging data set.

  7. First Three-Dimensional Reconstructions of Coronal Loops with the STEREO A+B Spacecraft. III. Instant Stereoscopic Tomography of Active Regions

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.; Wuelser, Jean-Pierre; Nitta, Nariaki V.; Lemen, James R.; Sandman, Anne

    2009-04-01

    Here we develop a novel three-dimensional (3D) reconstruction method of the coronal plasma of an active region by combining stereoscopic triangulation of loops with density and temperature modeling of coronal loops with a filling factor equivalent to tomographic volume rendering. Because this method requires only a stereoscopic image pair in multiple temperature filters, which are sampled within ≈1 minute with the recent STEREO/EUVI instrument, this method is about four orders of magnitude faster than conventional solar rotation-based tomography. We reconstruct the 3D density and temperature distribution of active region NOAA 10955 by stereoscopic triangulation of 70 loops, which are used as a skeleton for a 3D field interpolation of some 7000 loop components, leading to a 3D model that reproduces the observed fluxes in each stereoscopic image pair with an accuracy of a few percent (of the average flux) in each pixel. With the stereoscopic tomography we also infer a differential emission measure distribution over the entire temperature range of T ≈ 10^4-10^7, with predictions for the transition region and hotter corona in soft X-rays. The tomographic 3D model also provides large statistics of physical parameters. We find that the extreme-ultraviolet loops with apex temperatures of T_m ≲ 3.0 MK tend to be super-hydrostatic, while hotter loops with T_m ≈ 4-7 MK are near-hydrostatic. The new 3D reconstruction model is fully independent of any magnetic field data and is promising for future tests of theoretical magnetic field models and coronal heating models.

  8. FIRST THREE-DIMENSIONAL RECONSTRUCTIONS OF CORONAL LOOPS WITH THE STEREO A+B SPACECRAFT. III. INSTANT STEREOSCOPIC TOMOGRAPHY OF ACTIVE REGIONS

    SciTech Connect

    Aschwanden, Markus J.; Wuelser, Jean-Pierre; Nitta, Nariaki V.; Lemen, James R.; Sandman, Anne

    2009-04-10

    Here we develop a novel three-dimensional (3D) reconstruction method of the coronal plasma of an active region by combining stereoscopic triangulation of loops with density and temperature modeling of coronal loops with a filling factor equivalent to tomographic volume rendering. Because this method requires only a stereoscopic image pair in multiple temperature filters, which are sampled within ≈1 minute with the recent STEREO/EUVI instrument, this method is about four orders of magnitude faster than conventional solar rotation-based tomography. We reconstruct the 3D density and temperature distribution of active region NOAA 10955 by stereoscopic triangulation of 70 loops, which are used as a skeleton for a 3D field interpolation of some 7000 loop components, leading to a 3D model that reproduces the observed fluxes in each stereoscopic image pair with an accuracy of a few percent (of the average flux) in each pixel. With the stereoscopic tomography we also infer a differential emission measure distribution over the entire temperature range of T ≈ 10^4-10^7, with predictions for the transition region and hotter corona in soft X-rays. The tomographic 3D model also provides large statistics of physical parameters. We find that the extreme-ultraviolet loops with apex temperatures of T_m ≲ 3.0 MK tend to be super-hydrostatic, while hotter loops with T_m ≈ 4-7 MK are near-hydrostatic. The new 3D reconstruction model is fully independent of any magnetic field data and is promising for future tests of theoretical magnetic field models and coronal heating models.

  9. What is stereoscopic vision good for?

    NASA Astrophysics Data System (ADS)

    Read, Jenny C. A.

    2015-03-01

    Stereo vision is a resource-intensive process. Nevertheless, it has evolved in many animals including mammals, birds, amphibians and insects. It must therefore convey significant fitness benefits. It is often assumed that the main benefit is improved accuracy of depth judgments, but camouflage breaking may be as important, particularly in predatory animals. In humans, for the last 150 years, stereo vision has been turned to a new use: helping us reproduce visual reality for artistic purposes. By recreating the different views of a scene seen by the two eyes, stereo achieves unprecedented levels of realism. However, it also has some unexpected effects on viewer experience. The disruption of established mechanisms for interpreting pictures may be one reason why some viewers find stereoscopic content disturbing. Stereo vision also has uses in ophthalmology. Clinical stereoacuity tests are used in the management of conditions such as strabismus and amblyopia as well as vision screening. Stereoacuity can reveal the effectiveness of therapy and even predict long-term outcomes post surgery. Yet current clinical stereo tests fall far short of the accuracy and precision achievable in the lab. At Newcastle University, we are exploiting the recent availability of autostereo 3D tablet computers to design a clinical stereotest app in the form of a game suitable for young children. Our goal is to enable quick, accurate and precise stereoacuity measures which will enable clinicians to obtain better outcomes for children with visual disorders.
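    The disparities such stereoacuity tests measure follow from the standard small-angle relation between a depth difference and binocular disparity, δ ≈ a·ΔD/D² (the viewing geometry below uses illustrative values):

```python
import math

def disparity_arcsec(interocular_m, distance_m, depth_diff_m):
    """Binocular disparity (small-angle approximation) produced by a
    depth difference dD at viewing distance D: delta ~ a * dD / D^2."""
    delta_rad = interocular_m * depth_diff_m / distance_m ** 2
    return math.degrees(delta_rad) * 3600.0

# A 1 cm depth step at 0.5 m, with a 6.5 cm interocular distance:
delta = disparity_arcsec(0.065, 0.5, 0.01)
```

The quadratic dependence on distance is why clinical tests at fixed near distances can probe disparities of only tens of arcseconds.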

  10. Subpixel photometric stereo.

    PubMed

    Tan, Ping; Lin, Stephen; Quan, Long

    2008-08-01

    Conventional photometric stereo recovers one normal direction per pixel of the input image. This fundamentally limits the scale of recovered geometry to the resolution of the input image, and cannot model surfaces with subpixel geometric structures. In this paper, we propose a method to recover subpixel surface geometry by studying the relationship between the subpixel geometry and the reflectance properties of a surface. We first describe a generalized physically-based reflectance model that relates the distribution of surface normals inside each pixel area to its reflectance function. The distribution of surface normals can be computed from the reflectance functions recorded in photometric stereo images. A convexity measure of subpixel geometry structure is also recovered at each pixel, through an analysis of the shadowing attenuation. Then, we use the recovered distribution of surface normals and the surface convexity to infer subpixel geometric structures on a surface of homogeneous material by spatially arranging the normals among pixels at a higher resolution than that of the input image. Finally, we optimize the arrangement of normals using a combination of belief propagation and MCMC based on a minimum description length criterion on 3D textons over the surface. The experiments demonstrate the validity of our approach and show superior geometric resolution for the recovered surfaces. PMID:18566498
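    Conventional per-pixel photometric stereo, which the paper extends below the pixel scale, solves a small linear system per pixel: for a Lambertian surface, I_i = ρ(l_i · n), so with three known lights the vector g = ρn is linear in the measurements. A minimal sketch with hypothetical light directions (not the authors' subpixel method):

```python
def solve3(L, I):
    """Solve the 3x3 linear system L g = I by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(L)
    g = []
    for k in range(3):
        Lk = [row[:] for row in L]
        for i in range(3):
            Lk[i][k] = I[i]
        g.append(det(Lk) / d)
    return g

def photometric_stereo(lights, intensities):
    """Lambertian photometric stereo with three lights: recover g = rho*n,
    then |g| is the albedo and g/|g| the surface normal."""
    g = solve3(lights, intensities)
    rho = sum(c * c for c in g) ** 0.5
    return rho, [c / rho for c in g]

# Surface with albedo 0.5 and normal (0, 0, 1), lit from three unit directions:
lights = [[0.0, 0.0, 1.0],
          [0.8, 0.0, 0.6],
          [0.0, 0.8, 0.6]]
meas = [0.5 * sum(l * c for l, c in zip(row, [0.0, 0.0, 1.0])) for row in lights]
rho, n = photometric_stereo(lights, meas)
```

The subpixel method replaces the single normal per pixel with a distribution of normals inferred from the full reflectance function.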

  11. Advances On Integration Between Stereo Sparse Data And Orientation Map

    NASA Astrophysics Data System (ADS)

    Caponetti, Laura; Chiaradia, Maria T.; Distante, Arcangelo; Mugnuolo, Raffaele; Stella, Ettore

    1990-03-01

    During recent years, Computer Vision has developed algorithms for most early vision processes. It is a common idea that no single vision process can, by itself, supply a reliable description of the scene. In fact, one of the keys to the reliability and robustness of biological systems is their ability to integrate information from different early processes. The basic concept of our vision system is to integrate information from stereo and shading (Fig. 1). The results obtained from this scheme in previous works are very interesting and encouraged us to continue with this methodology. The basic approach to the integration scheme was presented in earlier work [1, 2]. The present work deals with general concepts and the main developments in shading analysis, in terms of simplifications of the analysis and improved accuracy. The scheme was tested on both synthetic and real scenes.

  12. Stereoscopic depth perception for robot vision: algorithms and architectures

    SciTech Connect

    Safranek, R.J.; Kak, A.C.

    1983-01-01

    The implementation of depth perception algorithms for computer vision is considered. In automated manufacturing, depth information is vital for tasks such as path planning and 3-d scene analysis. The presentation begins with a survey of computer algorithms for stereoscopic depth perception. The emphasis is on the Marr-Poggio paradigm of human stereo vision and its computer implementation. In addition, a stereo matching algorithm based on the relaxation labelling technique is examined. A computer architecture designed to efficiently implement stereo matching algorithms, an MIMD array interfaced to a global memory, is presented. 9 references.
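    Once correspondences are found by a stereo matcher of this kind, depth follows from triangulation; a one-line sketch of the standard rectified-stereo relation Z = fB/d (camera parameters below are hypothetical):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth for a rectified stereo pair: Z = f * B / d,
    with f in pixels, baseline B in metres, disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

# 700-pixel focal length, 10 cm baseline, 10-pixel disparity:
z = depth_from_disparity(700.0, 0.10, 10.0)   # metres
```

The inverse relation between depth and disparity is why matching accuracy matters most for distant (small-disparity) points.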

  13. Defining filled and empty space: reassessing the filled space illusion for active touch and vision.

    PubMed

    Collier, Elizabeth S; Lawson, Rebecca

    2016-09-01

    In the filled space illusion, an extent filled with gratings is estimated as longer than an equivalent extent that is apparently empty. However, researchers do not seem to have carefully considered the terms filled and empty when describing this illusion. Specifically, for active touch, smooth, solid surfaces have typically been used to represent empty space. Thus, it is not known whether comparing gratings to truly empty space (air) during active exploration by touch elicits the same illusionary effect. In Experiments 1 and 2, gratings were estimated as longer if they were compared to smooth, solid surfaces rather than being compared to truly empty space. Consistent with this, Experiment 3 showed that empty space was perceived as longer than solid surfaces when the two were compared directly. Together these results are consistent with the hypothesis that, for touch, the standard filled space illusion only occurs if gratings are compared to smooth, solid surfaces and that it may reverse if gratings are compared to empty space. Finally, Experiment 4 showed that gratings were estimated as longer than both solid and empty extents in vision, so the direction of the filled space illusion in vision was not affected by the nature of the comparator. These results are discussed in relation to the dual nature of active touch. PMID:27233286

  14. An Active Vision Approach to Understanding and Improving Visual Training in the Geosciences

    NASA Astrophysics Data System (ADS)

    Voronov, J.; Tarduno, J. A.; Jacobs, R. A.; Pelz, J. B.; Rosen, M. R.

    2009-12-01

    Experience in the field is a fundamental aspect of geologic training, and its effectiveness is largely unchallenged because of anecdotal evidence of its success among expert geologists. However, only a few quantitative studies based on large data-collection efforts have investigated how Earth scientists learn in the field. In a recent collaboration between Earth scientists, cognitive scientists and imaging-science experts at the University of Rochester and the Rochester Institute of Technology, we are conducting such a study. Within cognitive science, one school of thought, referred to as the Active Vision approach, emphasizes that visual perception is an active process requiring us to move our eyes to acquire new information about our environment. The Active Vision approach identifies the perceptual skills which experts possess and which novices will need to acquire to achieve expert performance. We describe data-collection efforts using portable eye-trackers to assess how novice and expert geologists acquire visual knowledge in the field. We also discuss our efforts to collect images for use in a semi-immersive classroom environment, useful for further testing of novices and experts using eye-tracking technologies.

  15. Stereo transparency and the disparity gradient limit

    NASA Technical Reports Server (NTRS)

    McKee, Suzanne P.; Verghese, Preeti

    2002-01-01

    Several studies (Vision Research 15 (1975) 583; Perception 9 (1980) 671) have shown that binocular fusion is limited by the disparity gradient (disparity/distance) separating image points, rather than by their absolute disparity values. Points separated by a gradient >1 appear diplopic. These results are sometimes interpreted as a constraint on human stereo matching, rather than a constraint on fusion. Here we have used psychophysical measurements on stereo transparency to show that human stereo matching is not constrained by a gradient of 1. We created transparent surfaces composed of many pairs of dots, in which each member of a pair was assigned a disparity equal and opposite to the disparity of the other member. For example, each pair could be composed of one dot with a crossed disparity of 6' and the other with uncrossed disparity of 6', vertically separated by a parametrically varied distance. When the vertical separation between the paired dots was small, the disparity gradient for each pair was very steep. Nevertheless, these opponent-disparity dot pairs produced a striking appearance of two transparent surfaces for disparity gradients ranging between 0.5 and 3. The apparent depth separating the two transparent planes was correctly matched to an equivalent disparity defined by two opaque surfaces. A test target presented between the two transparent planes was easily detected, indicating robust segregation of the disparities associated with the paired dots into two transparent surfaces with few mismatches in the target plane. Our simulations using the Tsai-Victor model show that the response profiles produced by scaled disparity-energy mechanisms can account for many of our results on the transparency generated by steep gradients.
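
    The disparity gradient for a dot pair, as used above, is simply the disparity difference divided by the separation of the pair, so the paper's opponent-disparity pairs (+6' and -6') exceed a gradient of 1 whenever their vertical separation is under 12'. A small illustrative helper (our naming, not the authors' code):

```python
def disparity_gradient(disp_a_arcmin, disp_b_arcmin, separation_arcmin):
    """Disparity gradient of two features: the magnitude of their
    disparity difference divided by their separation (same angular units)."""
    return abs(disp_a_arcmin - disp_b_arcmin) / separation_arcmin
```

With +6'/-6' pairs, a 6' separation gives a gradient of 2 (well beyond the fusion limit of 1), while a 24' separation gives 0.5, matching the 0.5-3 range over which the transparent-surface percept was reported.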

  16. Versatile transformations of hydrocarbons in anaerobic bacteria: substrate ranges and regio- and stereo-chemistry of activation reactions†

    PubMed Central

    Jarling, René; Kühner, Simon; Basílio Janke, Eline; Gruner, Andrea; Drozdowska, Marta; Golding, Bernard T.; Rabus, Ralf; Wilkes, Heinz

    2015-01-01

    Anaerobic metabolism of hydrocarbons proceeds either via addition to fumarate or by hydroxylation in various microorganisms, e.g., sulfate-reducing or denitrifying bacteria, which are specialized in utilizing n-alkanes or alkylbenzenes as growth substrates. General pathways for carbon assimilation and energy gain have been elucidated for a limited number of possible substrates. In this work the metabolic activity of 11 bacterial strains during anaerobic growth with crude oil was investigated and compared with the metabolite patterns appearing during anaerobic growth with more than 40 different hydrocarbons supplied as binary mixtures. We show that the range of co-metabolically formed alkyl- and arylalkyl-succinates is much broader in n-alkane than in alkylbenzene utilizers. The structures and stereochemistry of these products are resolved. Furthermore, we demonstrate that anaerobic hydroxylation of alkylbenzenes does not only occur in denitrifiers but also in sulfate reducers. We propose that these processes play a role in detoxification under conditions of solvent stress. The thermophilic sulfate-reducing strain TD3 is shown to produce n-alkylsuccinates, which are suggested not to derive from terminal activation of n-alkanes, but rather to represent intermediates of a metabolic pathway short-cutting fumarate regeneration by reverse action of succinate synthase. The outcomes of this study provide a basis for geochemically tracing such processes in natural habitats and contribute to an improved understanding of microbial activity in hydrocarbon-rich anoxic environments. PMID:26441848

  17. A mixed reality approach for stereo-tomographic quantification of lung nodules.

    PubMed

    Chen, Mianyi; Kalra, Mannudeep K; Yun, Wenbing; Cong, Wenxiang; Yang, Qingsong; Nguyen, Terry; Wei, Biao; Wang, Ge

    2016-05-25

    To reduce the radiation dose and the equipment cost associated with lung CT screening, in this paper we propose a mixed-reality-based nodule measurement method with an active-shutter stereo imaging system. Without involving hundreds of projection views and subsequent image reconstruction, we generated two projections of an iteratively placed ellipsoidal volume in the field of view and merged these synthetic projections with two original CT projections. We then demonstrated the feasibility of measuring the position and size of a nodule by observing, through active-shutter 3D vision glasses, whether the projections of the ellipsoidal volume and the nodule overlap. The average errors of the measured nodule parameters were less than 1 mm in the simulated experiment with 8 viewers. The method could therefore measure real nodules accurately in experiments with physically measured projections. PMID:27232199

  18. Altered Vision-Related Resting-State Activity in Pituitary Adenoma Patients with Visual Damage

    PubMed Central

    Qian, Haiyan; Wang, Xingchao; Wang, Zhongyan; Wang, Zhenmin; Liu, Pinan

    2016-01-01

    Objective To investigate changes of vision-related resting-state activity in pituitary adenoma (PA) patients with visual damage through comparison to healthy controls (HCs). Methods 25 PA patients with visual damage and 25 age- and sex-matched corrected-to-normal-vision HCs underwent a complete neuro-ophthalmologic evaluation, including automated perimetry, fundus examinations, and a magnetic resonance imaging (MRI) protocol, including structural and resting-state fMRI (RS-fMRI) sequences. The regional homogeneity (ReHo) of the vision-related cortex and the functional connectivity (FC) of 6 seeds within the visual cortex (the primary visual cortex (V1), the secondary visual cortex (V2), and the middle temporal visual cortex (MT+)) were evaluated. Two-sample t-tests were conducted to identify the differences between the two groups. Results Compared with the HCs, the PA group exhibited reduced ReHo in the bilateral V1, V2, V3, fusiform, MT+, BA37, thalamus, postcentral gyrus and left precentral gyrus and increased ReHo in the precuneus, prefrontal cortex, posterior cingulate cortex (PCC), anterior cingulate cortex (ACC), insula, supramarginal gyrus (SMG), and putamen. Compared with the HCs, V1, V2, and MT+ in the PAs exhibited decreased FC with the V1, V2, MT+, fusiform, BA37, and increased FC primarily in the bilateral temporal lobe (especially BA20,21,22), prefrontal cortex, PCC, insular, angular gyrus, ACC, pre-SMA, SMG, hippocampal formation, caudate and putamen. It is worth mentioning that compared with HCs, V1 in PAs exhibited decreased or similar FC with the thalamus, whereas V2 and MT+ exhibited increased FCs with the thalamus, especially pulvinar. Conclusions In our study, we identified significant neural reorganization in the vision-related cortex of PA patients with visual damage compared with HCs. Most subareas within the visual cortex exhibited remarkable neural dysfunction. 
Some subareas, including the MT+ and V2, exhibited enhanced FC with the thalamic

  19. Recognition of Activities of Daily Living with Egocentric Vision: A Review.

    PubMed

    Nguyen, Thi-Hoa-Cuc; Nebel, Jean-Christophe; Florez-Revuelta, Francisco

    2016-01-01

    Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people. However, current systems based on cameras located in the environment present a number of problems, such as occlusions and a limited field of view. Recently, wearable cameras have begun to be exploited. This paper presents a review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame. The current egocentric vision literature suggests that ADLs recognition is mainly driven by the objects present in the scene, especially those associated with specific tasks. However, although object-based approaches have proven popular, object recognition remains a challenge due to the intra-class variations found in unconstrained scenarios. As a consequence, the performance of current systems is far from satisfactory. PMID:26751452

  1. Opportunity's View, Sol 959, (Stereo)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA01893

    [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA01893

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo view of the rover's surroundings on sol (or Martian day) 959 of its surface mission.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  2. Recent STEREO Observations of Coronal Mass Ejections

    NASA Technical Reports Server (NTRS)

    SaintCyr, Chris Orville; Xie, Hong; Mays, Mona Leila; Davila, Joseph M.; Gilbert, Holly R.; Jones, Shaela I.; Pesnell, William Dean; Gopalswamy, Nat; Gurman, Joseph B.; Yashiro, Seiji; Wuelser, Jean-Pierre; Howard, Russell A.; Thompson, Barbara J.; Thompson, William T.

    2008-01-01

    Over 400 CMEs have been observed by STEREO SECCHI COR1 during the mission's three year duration (2006-2009). Many of the solar activity indicators have been at minimal values over this period, and the Carrington rotation-averaged CME rate has been comparable to that measured during the minima between Cycle 21-22 (SMM C/P) and Cycle 22-23 (SOHO LASCO). That rate is about 0.5 CMEs/day. During the current solar minimum (leading to Cycle 24), there have been entire Carrington rotations where no sunspots were detected and the daily values of the 2800 MHz solar flux remained below 70 sfu. CMEs continued to be detected during these exceptionally quiet periods, indicating that active regions are not necessary to the generation of at least a portion of the CME population. In the past, researchers were limited to a single view of the Sun and could conclude that activity on the unseen portion of the disk might be associated with CMEs. But as the STEREO mission has progressed we have been able to observe an increasing fraction of the Sun's corona with STEREO SECCHI EUVI and were able to eliminate this possibility. Here we report on the nature of CMEs detected during these exceptionally quiet periods, and we speculate on how the corona remains dynamic during such conditions.

  3. The research on binocular stereo video imaging and display system based on low-light CMOS

    NASA Astrophysics Data System (ADS)

    Xie, Ruobing; Li, Li; Jin, Weiqi; Guo, Hong

    2015-10-01

    Low-light night-vision helmets are commonly equipped with binocular viewers based on image intensifiers. Such equipment provides not only night-vision capability but also a sense of stereo vision, enabling better perception and understanding of the visual field. However, since the image intensifier is a direct-view device, it is difficult to apply modern image-processing technology to it. As a result, developing digital video technology for night vision is of great significance. In this paper, we design a low-light night-vision helmet with a digital imaging device. It consists of three parts: a set of two low-illumination CMOS cameras, a binocular OLED microdisplay and an image-processing PCB. Stereopsis is achieved through the binocular OLED microdisplay. We choose the Speeded-Up Robust Features (SURF) algorithm for image registration. Based on the image-matching information and the cameras' calibration parameters, disparity can be calculated in real time. We then derive the constraints of binocular stereo display in detail. The sense of stereo vision is obtained by dynamically adjusting the content of the binocular OLED microdisplay. There is sufficient room for functional extensions in our system: the performance of this low-light night-vision helmet can be further enhanced in combination with HDR technology and image fusion technology, etc.

  4. #3 STEREO - Approaching 360 Degrees

    NASA Video Gallery

    As the STEREO spacecraft have moved out on either side of Earth they have imaged more and more of the Sun's surface. This video shows how our coverage of the Sun has increased. The Sun is shown as ...

  5. Human machine interface by using stereo-based depth extraction

    NASA Astrophysics Data System (ADS)

    Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan

    2014-03-01

    The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibility of a convincing 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight camera (ToF), can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.

  6. Solar Terrestrial Relations Observatory (STEREO)

    NASA Astrophysics Data System (ADS)

    Davila, Joseph M.; Rust, David M.; Pizzo, Victor J.; Liewer, Paulett C.

    1996-11-01

    The solar output changes on a variety of timescales, from minutes, to years, to tens of years and even to hundreds of years. The dominant timescale of variation is, of course, the 11-year solar cycle. Observational evidence shows that the physics of solar output variation is strongly tied to changes in the magnetic field, and perhaps the most dramatic manifestation of a constantly changing magnetic field is the Coronal Mass Ejection (CME). On August 5 - 6, 1996 the Second Workshop to discuss missions to observe these phenomena from new vantage points, organized by the authors, was held in Boulder, Colorado at the NOAA Space Environmental Center. The workshop was attended by approximately 20 scientists representing 13 institutions from the United States and Europe. The purpose of the Workshop was to discuss the different concepts for multi- spacecraft observation of the Sun which have been proposed, to develop a list of scientific objectives, and to arrive at a consensus description of a mission to observe the Sun from new vantage points. The fundamental goal of STEREO is to discover how coronal mass ejections start at the Sun and propagate in interplanetary space. The workshop started with the propositions that coronal mass ejections are fundamental manifestations of rapid large-scale change in the global magnetic structure of the Sun, that CME's are a major driver of coronal evolution, and that they may play a major role in the solar dynamo. Workshop participants developed a mission concept that will lead to a comprehensive characterization of CME disturbances through build-up, initiation, launch, and propagation to Earth. It will also build a clear picture of long-term evolution of the corona. 
Participants in the workshop recommended that STEREO be a joint mission with the European scientific community and that it consist of four spacecraft: `East' at 1 AU near L4, 60 deg from Earth, to detect active regions 5 days before they can be seen by terrestrial telescopes

  7. The zone of comfort: Predicting visual discomfort with stereo displays

    PubMed Central

    Shibata, Takashi; Kim, Joohwan; Hoffman, David M.; Banks, Martin S.

    2012-01-01

    Recent increased usage of stereo displays has been accompanied by public concern about potential adverse effects associated with prolonged viewing of stereo imagery. There are numerous potential sources of adverse effects, but we focused on how vergence–accommodation conflicts in stereo displays affect visual discomfort and fatigue. In one experiment, we examined the effect of viewing distance on discomfort and fatigue. We found that conflicts of a given dioptric value were slightly less comfortable at far than at near distance. In a second experiment, we examined the effect of the sign of the vergence–accommodation conflict on discomfort and fatigue. We found that negative conflicts (stereo content behind the screen) are less comfortable at far distances and that positive conflicts (content in front of screen) are less comfortable at near distances. In a third experiment, we measured phoria and the zone of clear single binocular vision, which are clinical measurements commonly associated with correcting refractive error. Those measurements predicted susceptibility to discomfort in the first two experiments. We discuss the relevance of these findings for a wide variety of situations including the viewing of mobile devices, desktop displays, television, and cinema. PMID:21778252
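
    The vergence-accommodation conflict studied above is naturally expressed in diopters (inverse meters): the dioptric distance of the stereo content minus the dioptric distance of the screen. A tiny illustrative helper (the sign convention is our assumption: positive for content in front of the screen, matching the paper's "positive conflict"):

```python
def va_conflict_diopters(vergence_dist_m, focal_dist_m):
    """Vergence-accommodation conflict in diopters.

    vergence_dist_m: distance the eyes converge to (the stereo content)
    focal_dist_m:    distance the eyes focus to (the physical screen)
    Positive result: content in front of the screen (crossed disparity).
    """
    return 1.0 / vergence_dist_m - 1.0 / focal_dist_m
```

For example, content simulated at 0.5 m on a screen 1 m away produces a +1 D conflict, while content simulated at 2 m on the same screen produces a -0.5 D conflict; the experiments above found the discomfort of a given conflict magnitude depends on both its sign and the viewing distance.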

  8. Stereo matching based on census transformation of image gradients

    NASA Astrophysics Data System (ADS)

    Stentoumis, C.; Grammatikopoulos, L.; Kalisperakis, I.; Karras, G.; Petsa, E.

    2015-05-01

    Although multiple-view matching provides certain significant advantages regarding accuracy, occlusion handling and radiometric fidelity, stereo matching remains indispensable for a variety of applications; these involve cases where image acquisition requires fixed geometry, a limited number of images, or speed. Such instances include robotics, autonomous navigation, reconstruction from a limited number of aerial/satellite images, industrial inspection and augmented reality through smart-phones. As a consequence, stereo matching is a continuously evolving research field with a growing variety of applicable scenarios. In this work a novel multi-purpose cost for stereo matching is proposed, based on the census transformation of image gradients and evaluated within a local matching scheme. It is demonstrated that applying the census transformation to gradients significantly strengthens the invariance of the cost function to (non-linear) changes in illumination. The calculated cost values are aggregated over adaptive support regions, based both on cross-skeletons and on basic rectangular windows. The parameters of the matching algorithm are tuned for each case. The described matching cost has been evaluated on the Middlebury 2006 stereo-vision datasets, which include changes in illumination and exposure. The tests verify that the census transformation on image gradients indeed results in a more robust cost function, regardless of the aggregation strategy.
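
    The core idea, census-transforming a gradient image and scoring candidate matches by Hamming distance, can be sketched as follows (a simplified single-disparity cost with wrap-around borders; illustrative only, not the authors' implementation):

```python
import numpy as np

def census(img, win=3):
    """Census transform: per pixel, a bit string encoding whether each
    neighbor in a win x win window is darker than the center.
    Borders wrap around via np.roll (acceptable for a sketch)."""
    r = win // 2
    code = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            code = (code << np.uint64(1)) | (neighbor < img).astype(np.uint64)
    return code

def census_gradient_cost(left, right, disparity):
    """Matching cost for one disparity hypothesis: Hamming distance
    between census codes computed on horizontal image gradients."""
    gl = np.gradient(left.astype(float), axis=1)
    gr = np.gradient(right.astype(float), axis=1)
    xor = census(gl) ^ census(np.roll(gr, disparity, axis=1))
    # Popcount per pixel: view each uint64 as 8 bytes and unpack the bits
    bits = xor.view(np.uint8).reshape(*xor.shape, 8)
    return np.unpackbits(bits, axis=-1).sum(axis=-1)
```

Because the census code depends only on intensity orderings within the window, and those orderings are taken on gradients rather than raw intensities, the cost is unaffected by monotonic per-image illumination changes, which is the robustness property the evaluation above measures.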

  9. Using fuzzy logic to enhance stereo matching in multiresolution images.

    PubMed

    Medeiros, Marcos D; Gonçalves, Luiz Marcos G; Frery, Alejandro C

    2010-01-01

    Stereo matching is an open problem in computer vision, in which local features are extracted to identify corresponding points in pairs of images. The results depend heavily on the initial steps. We apply image decomposition at multiple resolution levels to reduce the search space, computational time, and errors. We address the problem of how deep (coarse) the stereo measures should start, trading off error minimization against time consumption, by starting the stereo calculation at a varying resolution level for each pixel, according to fuzzy decisions. Our heuristic improves the overall execution time since it employs deeper resolution levels only when strictly necessary. It also reduces errors because it measures similarity between windows with enough detail. We also compare our algorithm with a very fast multiresolution approach and with one based on fuzzy logic. Our algorithm performs faster and/or better than all of those approaches, thus becoming a good candidate for robotic vision applications. We also discuss the system architecture that efficiently implements our solution. PMID:22205859
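
    Two ingredients of such a multiresolution scheme, pyramid construction and per-level search-range reduction, can be sketched as follows (illustrative only; the paper's fuzzy decision logic is not reproduced here):

```python
import numpy as np

def downsample(img):
    """One pyramid level: halve resolution by 2x2 block averaging."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    a = img[:h, :w].astype(float)
    return (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0

def refine_range(coarse_disp, slack=1):
    """Coarse-to-fine rule: a disparity d found at level k maps to 2*d at
    level k-1, where only 2*slack + 1 candidates need to be searched,
    instead of the full disparity range."""
    d = 2 * coarse_disp
    return range(d - slack, d + slack + 1)
```

This is why the multiresolution decomposition cuts the search space: a full-range search happens only at the coarsest level, and every finer level searches a constant-size window around the upscaled coarse estimate.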

  11. Vision-based localization in urban environments

    NASA Astrophysics Data System (ADS)

    McHenry, Michael; Cheng, Yang; Matthies, Larry

    2005-05-01

    As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory has developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system consist of: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location and 3) the video streams produced by the stereo pair. At each step during the traverse the system: captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we have derived the a priori map manually using non-orthorectified overhead imagery, although this process could be automated. The software system consists of three primary components. The first is a stereo-based visual odometry system that calculates the 6-degree of freedom camera motion between sequential frames. The second component uses a set of heuristics to identify straight-line segments that are likely to be part of a building exterior. Ranging to these straight-line features is computed using binocular or wide-baseline stereo. The resulting features and the associated range measurements are fed to the third software component, a particle-filter based localization system. This system uses the map and the most recent results from the first two to update the estimate of the robot's location. 
This report summarizes the design of both the hardware and software and describes the results of applying the system to the global localization of a camera system over an approximately half-kilometer traverse across JPL.
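
    The third component's predict/update/resample cycle can be sketched, under simplifying assumptions of ours (planar position-only state, a single range measurement to one mapped building feature, multinomial resampling), as:

```python
import numpy as np

def particle_filter_step(particles, weights, motion, range_meas, feature_pos,
                         motion_noise=0.1, meas_sigma=0.5, rng=None):
    """One predict/update/resample cycle of a planar particle filter.

    particles:  (N, 2) hypothesized robot positions
    weights:    (N,) current particle weights (sum to 1)
    motion:     (2,) odometry translation since the last step
    range_meas: measured distance to a mapped feature at feature_pos
    """
    if rng is None:
        rng = np.random.default_rng()
    # Predict: apply odometry with additive Gaussian noise
    particles = particles + motion + rng.normal(0.0, motion_noise, particles.shape)
    # Update: reweight by the Gaussian likelihood of the range measurement
    predicted = np.linalg.norm(particles - feature_pos, axis=1)
    weights = weights * np.exp(-0.5 * ((predicted - range_meas) / meas_sigma) ** 2)
    weights = weights / weights.sum()
    # Resample proportionally to weight; surviving spread keeps
    # multiple location hypotheses alive, as described above
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

A real implementation would carry heading as well, fuse many building-edge features per step, and use systematic resampling to reduce variance; the sketch only shows how the probabilistic representation admits several simultaneous location hypotheses.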

  12. Design of secondary optics for IRED in active night vision systems.

    PubMed

    Xin, Di; Liu, Hua; Jing, Lei; Wang, Yao; Xu, Wenbin; Lu, Zhenwu

    2013-01-14

    An effective optical design method is proposed to solve the problem of adjustable view angle for infrared illuminators in active night vision systems. A novel total internal reflection (TIR) lens with three segments of the side surface is designed as the secondary optics of the infrared emitting diode (IRED). It can provide three modes with different view angles to achieve complete coverage of the monitored area. As an example, a novel TIR lens is designed for the SONY FCB-EX 480CP camera. Optical performance of the novel TIR lens is investigated by both numerical simulation and experiments. The results demonstrate that it can meet the requirements of different irradiation distances quite well, with view angles of 7.5°, 22° and 50°. The mean optical efficiency is improved from 62% to 75% and the mean irradiance uniformity is improved from 65% to 85% compared with the traditional structure. PMID:23389004

  13. Analysis and design of stereoscopic display in stereo television endoscope system

    NASA Astrophysics Data System (ADS)

    Feng, Dawei

    2008-12-01

    Many 3D displays have been proposed for medical use. When designing and evaluating a new system, surgeons make three demands: the first priority is precision; second, the displayed images should be easy to understand; and third, since surgery lasts for hours, the display must not be fatiguing. The stereo television endoscope studied in this paper images the celiac viscera on the photosurfaces of left and right CCDs, imitating the human binocular stereo vision effect by means of a dual optical path. The left and right video signals are processed by frequency multiplication and displayed on a monitor; the viewer observes a stereo image with a depth impression through a polarized LCD screen and a pair of polarized glasses. Clinical experiments show that the stereo TV endoscope makes minimally invasive surgery safer and more reliable, shortens the operation time, and improves the operation accuracy.

  14. Photometric stereo endoscopy

    PubMed Central

    Parot, Vicente; Lim, Daryl; González, Germán; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.

    2013-01-01

    Abstract. While color video endoscopy has enabled wide-field examination of the gastrointestinal tract, it often misses or incorrectly classifies lesions. Many of these missed lesions exhibit characteristic three-dimensional surface topographies. An endoscopic system that adds topographical measurements to conventional color imagery could therefore increase lesion detection and improve classification accuracy. We introduce photometric stereo endoscopy (PSE), a technique which allows high spatial frequency components of surface topography to be acquired simultaneously with conventional two-dimensional color imagery. We implement this technique in an endoscopic form factor and demonstrate that it can acquire the topography of small features with complex geometries and heterogeneous optical properties. PSE imaging of ex vivo human gastrointestinal tissue shows that surface topography measurements enable differentiation of abnormal shapes from surrounding normal tissue. Together, these results confirm that the topographical measurements can be obtained with relatively simple hardware in an endoscopic form factor, and suggest the potential of PSE to improve lesion detection and classification in gastrointestinal imaging. PMID:23864015

  15. Detail-Preserving and Content-Aware Variational Multi-View Stereo Reconstruction

    NASA Astrophysics Data System (ADS)

    Li, Zhaoxin; Wang, Kuanquan; Zuo, Wangmeng; Meng, Deyu; Zhang, Lei

    2016-02-01

    Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view images is a fundamental yet active research area in computer vision. Despite the steady progress in multi-view stereo reconstruction, most existing methods are still limited in recovering fine-scale details and sharp features while suppressing noises, and may fail in reconstructing regions with few textures. To address these limitations, this paper presents a Detail-preserving and Content-aware Variational (DCV) multi-view stereo method, which reconstructs the 3D surface by alternating between reprojection error minimization and mesh denoising. In reprojection error minimization, we propose a novel inter-image similarity measure, which is effective to preserve fine-scale details of the reconstructed surface and builds a connection between guided image filtering and image registration. In mesh denoising, we propose a content-aware $\\ell_{p}$-minimization algorithm by adaptively estimating the $p$ value and regularization parameters based on the current input. It is much more promising in suppressing noise while preserving sharp features than conventional isotropic mesh smoothing. Experimental results on benchmark datasets demonstrate that our DCV method is capable of recovering more surface details, and obtains cleaner and more accurate reconstructions than state-of-the-art methods. In particular, our method achieves the best results among all published methods on the Middlebury dino ring and dino sparse ring datasets in terms of both completeness and accuracy.

  16. A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1993-01-01

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the

  17. A binocular stereo approach to AR/C at the Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Smith, Alan T.

    1991-01-01

    Automated Rendezvous and Capture requires the determination of the 6 DOF relating two free bodies. Sensor systems that can provide such information have varying sizes, weights, power requirements, complexities, and accuracies. One type of sensor system that can provide several key advantages is a binocular stereo vision system.
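
    The core range computation such a binocular sensor performs can be sketched for a calibrated, rectified pair. The function name and rig numbers below are hypothetical, not parameters of the JSC system:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Range to a matched point in a rectified stereo pair.

    Z = f * B / d, where f is the focal length in pixels, B the
    baseline between camera centers, and d the horizontal disparity.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point in front of rig)")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 800 px focal length, 12 cm baseline.
# A 16-pixel disparity then corresponds to a 6 m range.
z = depth_from_disparity(16.0, focal_px=800.0, baseline_m=0.12)
```

    The inverse relationship between disparity and range also explains why binocular accuracy degrades with distance: a fixed pixel-level matching error maps to an ever larger range error as disparity shrinks.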

  18. Phobos in Stereo

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter took two images of the larger of Mars' two moons, Phobos, within 10 minutes of each other on March 23, 2008. This view combines the two images. Because the two were taken at slightly different viewing angles, this provides a three-dimensional effect when seen through red-blue glasses (red on left eye).

    The illuminated part of Phobos seen here is about 21 kilometers (13 miles) across. The most prominent feature is the large crater Stickney at the bottom of the image. With a diameter of 9 kilometers (5.6 miles), it is the largest feature on Phobos. A series of troughs and crater chains is obvious on other parts of the moon. Although many appear radial to Stickney in this image, recent studies from the European Space Agency's Mars Express orbiter indicate that they are not related to Stickney. Instead, they may have formed when material ejected from impacts on Mars later collided with Phobos. The lineated textures on the walls of Stickney and other large craters are landslides formed from materials falling into the crater interiors in the weak Phobos gravity (less than one one-thousandth of the gravity on Earth).

    This stereo view combines images in the HiRISE catalog as PSP_007769_9010 (in red here) and PSP_007769_9015 (in blue here).

    NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The High Resolution Imaging Science Experiment is operated by the University of Arizona, Tucson, and the instrument was built by Ball Aerospace & Technologies Corp., Boulder, Colo.

  19. Vision problems

    MedlinePlus

    ... which nothing can be seen) Vision loss and blindness are the most severe vision problems. Causes Vision ... that look faded. The most common cause of blindness in people over age 60. Eye infection, inflammation, ...

  20. Evaluating the performance of close-range 3D active vision systems for industrial design applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Gaiani, Marco

    2004-12-01

    In recent years, active three-dimensional (3D) vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration, and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but these technologies still pose a number of questions to new users: what are they, how good are they, and how do they compare? The need to understand, test, and integrate range cameras with other technologies, e.g., photogrammetry and CAD, is driven by the quest for optimal resolution, accuracy, speed, and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. Understanding the basic theory and best practices associated with these cameras is in fact fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or more of them to fulfill the needs of industrial design applications. In particular, the effects of object material and surface features, calibration, and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.

  1. Evaluating the performance of close-range 3D active vision systems for industrial design applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Gaiani, Marco

    2005-01-01

    In recent years, active three-dimensional (3D) vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration, and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but these technologies still pose a number of questions to new users: what are they, how good are they, and how do they compare? The need to understand, test, and integrate range cameras with other technologies, e.g., photogrammetry and CAD, is driven by the quest for optimal resolution, accuracy, speed, and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. Understanding the basic theory and best practices associated with these cameras is in fact fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or more of them to fulfill the needs of industrial design applications. In particular, the effects of object material and surface features, calibration, and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.

  2. Stereo matching: performance study of two global algorithms

    NASA Astrophysics Data System (ADS)

    Arunagiri, Sarala; Jordan, Victor J.; Teller, Patricia J.; Deroba, Joseph C.; Shires, Dale R.; Park, Song J.; Nguyen, Lam H.

    2011-06-01

    Techniques such as clinometry, stereoscopy, interferometry, and polarimetry are used for Digital Elevation Model (DEM) generation from Synthetic Aperture Radar (SAR) images. The choice of technique depends on the SAR configuration, the means used for image acquisition, and the relief type. The most popular techniques are interferometry for regions of high coherence and stereoscopy for regions such as steep forested mountain slopes. Stereo matching, which finds the disparity map or correspondence points between two images acquired from different sensor positions, is a core process in stereoscopy. Additionally, automatic stereo processing, which involves stereo matching, is an important process in other applications, including vision-based obstacle avoidance for unmanned air vehicles (UAVs), extraction of weak targets in clutter, and automatic target detection. Due to its high computational complexity, stereo matching has traditionally been, and continues to be, one of the most heavily investigated topics in computer vision. A stereo matching algorithm performs a subset of the following four steps: cost computation, cost (support) aggregation, disparity computation/optimization, and disparity refinement. Based on the method used for cost computation, algorithms are classified into feature-, phase-, and area-based algorithms; they are classified as local or global based on how they perform disparity computation/optimization. We present a comparative performance study of two pairs, i.e., four versions, of global stereo matching codes. Each pair uses a different minimization technique: a simulated annealing or graph cut algorithm. The codes of a pair differ in terms of the employed global cost function: absolute difference (AD) or a variation of normalized cross correlation (NCC). The performance comparison is in terms of execution time, the global minimum cost achieved, power and energy consumption, and the quality of generated output. The results of
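
    The four-step pipeline described above can be illustrated with a minimal local (not global) matcher: absolute-difference cost, box-filter aggregation, and winner-take-all disparity selection. `block_match` and the 3x3 aggregation window are illustrative choices, not the codes studied in the paper:

```python
import numpy as np

def block_match(left, right, max_disp):
    """Local stereo matching on rectified grayscale images:
    absolute-difference (AD) cost computation, 3x3 box aggregation,
    and winner-take-all (WTA) disparity selection."""
    left = left.astype(float)
    right = right.astype(float)
    h, w = left.shape

    def box3(a):  # 3x3 mean filter used as cost aggregation
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    cost = np.empty((max_disp + 1, h, w))
    for d in range(max_disp + 1):
        ad = np.full((h, w), 255.0)  # sentinel cost where the shift runs off-image
        ad[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
        cost[d] = box3(ad)
    return cost.argmin(axis=0)       # WTA disparity per pixel
```

    Global methods such as those compared in the paper replace the WTA step with an energy minimization (e.g., simulated annealing or graph cuts) over the whole disparity field, which is where most of the computational cost lies.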

  3. Photometric invariant stereo matching method.

    PubMed

    Gu, Feifei; Zhao, Hong; Zhou, Xiang; Li, Jinjun; Bu, Penghui; Zhao, Zixin

    2015-12-14

    A robust stereo matching method based on a comprehensive mathematical model of the color formation process is proposed to estimate the disparity map of stereo images with noise and photometric variations. A band-pass filter with a DoP kernel is first used to filter out the noise component of the stereo images. Then a log-chromaticity normalization process is applied to eliminate the influence of lighting geometry. All other factors that may influence the color formation process are removed through the disparity estimation process with a specific matching cost. Performance of the developed method is evaluated by comparison with several up-to-date algorithms. Experimental results are presented to demonstrate the robustness and accuracy of the method. PMID:26698970
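
    The log-chromaticity normalization step can be sketched as follows: dividing each channel by the per-pixel geometric mean cancels any scalar shading factor, which is what makes the representation insensitive to lighting geometry. The function below is an illustrative reconstruction, not the authors' exact formulation:

```python
import numpy as np

def log_chromaticity(rgb, eps=1e-6):
    """Log-chromaticity normalization of an (h, w, 3) image.

    A per-pixel intensity/shading factor multiplies all three channels
    equally, so it cancels in the ratio to the geometric mean; eps
    guards against log(0) on dark pixels."""
    rgb = rgb.astype(float) + eps
    gm = rgb.prod(axis=-1, keepdims=True) ** (1.0 / 3.0)
    return np.log(rgb / gm)
```

    Matching costs computed on this representation are therefore stable under the photometric variations between the left and right views that the paper targets.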

  4. Stereo pairs from linear morphing

    NASA Astrophysics Data System (ADS)

    McAllister, David F.

    1998-04-01

    Several authors have recently investigated the ability to compute intermediate views of a scene using given 2D images from arbitrary camera positions. The methods fall under the topic of image based rendering. In the case we give here, linear morphing between two parallel views of a scene produces intermediate views that would have been produced by parallel movement of a camera. Hence, the technique produces images computed in a way that is consistent with the standard off-axis perspective projection method for computing stereo pairs. Using available commercial 2D morphing software, linear morphing can be used to produce stereo pairs from a single image with bilateral symmetry such as a human face. In our case, the second image is produced by horizontal reflection. We describe morphing and show how it can be used to provide stereo pairs from single images.
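
    The geometric fact exploited here, that under parallel camera motion an image point moves linearly between the two views, can be sketched at the level of matched feature points. `intermediate_positions` is a hypothetical helper, not the commercial morphing software used in the paper:

```python
import numpy as np

def intermediate_positions(pts_left, pts_right, t):
    """Linear morph of corresponding feature points.

    For parallel (rectified) camera views, a scene point's image
    position moves linearly with camera translation, so interpolating
    matched coordinates reproduces the view a camera placed at
    fraction t of the baseline would have captured."""
    return (1.0 - t) * np.asarray(pts_left, float) + t * np.asarray(pts_right, float)
```

    A full morph additionally cross-dissolves pixel intensities along the same interpolated correspondences; the point-position interpolation above is the part that makes the result geometrically consistent with off-axis stereo pair generation.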

  5. #1 Stereo Orbit - Launch to Feb 2011

    NASA Video Gallery

    The STEREO mission consists of two spacecraft orbiting the Sun, one moving a bit faster than Earth and the other a bit slower. In the time since the STEREO spacecraft entered these orbits near the ...

  6. Viking orbiter stereo imaging catalog

    NASA Technical Reports Server (NTRS)

    Blasius, K. R.; Vetrone, A. V.; Martin, M. D.

    1980-01-01

    The extremely long missions of the two Viking Orbiter spacecraft produced a wealth of photos of surface features, many of which can be used to form stereo images allowing the earth-bound student of Mars to examine the subject in 3-D. This catalog is a technical guide to the use of stereo coverage within the complex Viking imaging data set. Since that data set is still growing (January 1980, about 3 1/2 years after the mission began), a second edition of this catalog is planned, with completion expected about November 1980.

  7. Optical stereo video signal processor

    NASA Astrophysics Data System (ADS)

    Craig, G. D.

    1985-12-01

    An optical video signal processor is described which produces, in real time, a two-dimensional cross-correlation of images received by a stereo camera system. The optical image of each camera is projected onto a respective liquid crystal light valve. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.

  8. Real time swallowing measurement system by using photometric stereo

    NASA Astrophysics Data System (ADS)

    Fujino, Masahiro; Kato, Kunihito; Mura, Emi; Nagai, Hajime

    2015-04-01

    In this paper, we propose a measurement system that evaluates swallowing by estimating the movement of the thyroid cartilage. We developed a measurement system based on a vision sensor in order to achieve noncontact, non-invasive sensing. The movement of the subject's thyroid cartilage is tracked using three-dimensional information about the skin surface measured by photometric stereo. We constructed a camera system that uses near-IR light sources and three camera sensors. We confirmed the effectiveness of the proposed system by experiments.

  9. Three-dimensional digitizing of objects using stereo videography

    NASA Astrophysics Data System (ADS)

    Haggren, Henrik G. A.; Jokinen, Olli T.; Niini, Ilkka; Pontinen, Petteri

    1994-03-01

    We discuss an off-line 3D measuring procedure under development in a project for the national program on machine vision in Finland 1992-1996. The procedure consists of four concurrent phases: (1) recording, (2) rectification, (3) digitizing, and (4) modeling. The recordings are based on sequential stereo videography, and the continuous object digitizing is based on the rigid normal case of stereography. The system modules are as follows: (1) two video cameras, (2) controllable feature projector, and (3) photogrammetric station. The procedure is interactively linked to a CAD/CAM-environment both for reverse engineering and quality control use.

  10. 3D vision upgrade kit for the TALON robot system

    NASA Astrophysics Data System (ADS)

    Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott

    2010-02-01

    In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo-cameras and a flat panel display, using only standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks quicker but, for six of seven scenarios, made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.

  11. A fuzzy structural matching scheme for space robotics vision

    NASA Technical Reports Server (NTRS)

    Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka

    1994-01-01

    In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of regions of images and effectively reduces the computational burden in the following low level matching process. Three dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.

  12. Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision

    PubMed Central

    Van Dromme, Ilse C.; Premereur, Elsie; Verhoef, Bram-Ernst; Vanduffel, Wim; Janssen, Peter

    2016-01-01

    The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams. PMID:27082854

  13. Overview of passive and active vision techniques for hand-held 3D data acquistion

    NASA Astrophysics Data System (ADS)

    Mada, Sreenivasa K.; Smith, Melvyn L.; Smith, Lyndon N.; Midha, Prema S.

    2003-03-01

    The digitization of the 3D shape of real objects is a rapidly expanding discipline, with a wide variety of applications, including shape acquisition, inspection, reverse engineering, gauging, and robot navigation. Developments in computer product design techniques, automated production, and the need for close manufacturing tolerances will be facts of life for the foreseeable future. A growing need exists for fast, accurate, portable, non-contact 3D sensors. However, in order for 3D scanning to become more commonplace, new methods are needed for easily, quickly, and robustly acquiring accurate full geometric models of complex objects using low-cost technology. In this paper, a brief survey is presented of current scanning technologies available for acquiring range data. An overview is provided of current 3D-shape acquisition using both active and passive vision techniques. Each technique is explained in terms of its configuration, principle of operation, and inherent advantages and limitations. A separate section then focuses on the implications of scannerless scanning for hand-held technology, after which the current status of 3D acquisition using hand-held technology, together with related implementation issues, is considered more fully. Finally, conclusions for further developments in hand-held devices are discussed. This paper may be of particular benefit to newcomers in this field.

  14. Solving the interface problem for Windows stereo applications

    NASA Astrophysics Data System (ADS)

    Halnon, Jeff; Milici, Dave

    1998-04-01

    The most common type of electronic stereoscopic viewing device available is LC (liquid crystal) shutter glasses, such as CrystalEyes made by StereoGraphics Corp. This type of stereo glasses works by alternating each eye's shutter in sync with a left or right display field. In order to support this technology on PCs, StereoGraphics has been actively working with hardware display vendors, software developers, and VESA (Video Electronics Standards Association) to establish standard stereoscopic display interfaces. With Microsoft licensing OpenGL for Windows NT systems and developing its own DirectX software architecture for Windows 9x, a variety of 3D accelerator boards are now available with 3D rendering capabilities that were previously available only on proprietary graphics workstations. Some of these graphics controllers contain stereoscopic display support for automatic page-flipping of left/right images. The paper describes low-level stereoscopic display support included in VESA BIOS Extension Version 3 (VBE 3.0), the VESA standard stereoscopic interface connector, the GL_STEREO quad-buffer model specified in OpenGL v1.1, and a proposal for a FlipStereo() API extension to the Microsoft DirectX specification.

  15. STEREO-IMPACT E/PO: Getting Ready for Launch!

    NASA Astrophysics Data System (ADS)

    Mendez, B. J.; Peticolas, L. M.; Craig, N.

    2005-12-01

    The Solar Terrestrial Relations Observatory (STEREO) is scheduled for launch in April/May 2006. STEREO will study the Sun with two spacecraft on either side of Earth in orbit around the Sun. The primary science goal is to understand the nature of Coronal Mass Ejections (CMEs). The E/PO program for the IMPACT suite of instruments aboard the two crafts is planning several activities leading up to launch to raise awareness and interest in the mission and its scientific discoveries. We will be participating in NASA's Sun-Earth day events which are scheduled to coincide with a total solar eclipse in March. This event offers a good opportunity to engage the public in STEREO science, because an eclipse allows one to see the solar corona from where CMEs erupt. We will be conducting teacher workshops locally in California and also at the annual conference of the National Science Teachers Association. At these workshops, we will focus on the basics of magnetism and then its connection to CMEs and Earth's magnetic field, leading to the questions STEREO scientists hope to answer. In addition, we will be working with NASA's Public Relations office to ensure that STEREO E/PO programs are highlighted in press releases about the mission.

  16. Stereo Pair, Honolulu, Oahu

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Honolulu, on the island of Oahu, is a large and growing urban area. This stereoscopic image pair, combining a Landsat image with topography measured by the Shuttle Radar Topography Mission (SRTM), shows how topography controls the urban pattern. This color image can be viewed in 3-D by viewing the left image with the right eye and the right image with the left eye (cross-eyed viewing), or by downloading and printing the image pair, and viewing them with a stereoscope.

    Features of interest in this scene include Diamond Head (an extinct volcano near the bottom of the image), Waikiki Beach (just above Diamond Head), the Punchbowl National Cemetery (another extinct volcano, near the image center), downtown Honolulu and Honolulu harbor (image left-center), and offshore reef patterns. The slopes of the Koolau mountain range are seen in the right half of the image. Clouds commonly hang above ridges and peaks of the Hawaiian Islands, but in this synthesized stereo rendition they appear draped directly on the mountains. The clouds are actually about 1000 meters (3300 feet) above sea level.

    This stereoscopic image pair was generated using topographic data from the Shuttle Radar Topography Mission, combined with a Landsat 7 Thematic Mapper image collected at the same time as the SRTM flight. The topography data were used to create two differing perspectives, one for each eye. When stereoscopically merged, the result is a vertically exaggerated view of the Earth's surface in its full three dimensions. The United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota, provided the Landsat data.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three-dimensional measurements of the
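
    The perspective-synthesis idea described above, using elevation data to create two differing viewpoints from one image, can be sketched minimally by shifting pixels horizontally in proportion to their elevation. The forward-mapping scheme and the `max_shift_px` parameter are simplifying assumptions (this naive version leaves small holes that real DEM-based rendering avoids):

```python
import numpy as np

def synthesize_stereo_pair(image, dem, max_shift_px=8):
    """Create left/right views from a single grayscale image plus a DEM:
    each pixel is shifted horizontally in proportion to its elevation,
    approximating the parallax two offset viewpoints would observe."""
    h, w = image.shape
    norm = (dem - dem.min()) / max(np.ptp(dem), 1e-9)   # elevation in [0, 1]
    shift = np.rint(norm * max_shift_px).astype(int)
    cols = np.arange(w)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for r in range(h):
        # Forward-map each source pixel to its shifted column (holes possible)
        left[r, np.clip(cols + shift[r], 0, w - 1)] = image[r]
        right[r, np.clip(cols - shift[r], 0, w - 1)] = image[r]
    return left, right
```

    When the two views are fused stereoscopically, higher terrain carries larger left/right offsets and is perceived as closer, producing the vertically exaggerated 3D effect described in the caption.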

  17. Monoaural-Stereo Recording Comparison.

    ERIC Educational Resources Information Center

    Legum, Stanley E.

    Six groups of third-grade boys--three predominantly black, three white--were tested to explore three questions: whether visibility or proximity of microphones affects speech production; whether stereo recordings made from desk or wall-mounted microphones are as usable for linguistic analysis as monoaural recordings made from lavaliere microphones;…

  18. Stereo side-looking radar experiments

    NASA Technical Reports Server (NTRS)

    Leberl, F.; Raggam, J.; Kobrick, M.

    1980-01-01

    The application of side-looking radar images in geoscience fields can be enhanced by using overlapping image strips that are viewed in stereo. A question concerns the quality of stereo radar. This quality is described by evaluating stereo viewability and by using the concept of vertical exaggeration with sets of actual radar images. A conclusion is that currently available stereo radar data are not optimized, that a better quality can therefore be achieved if data acquisition is appropriately arranged, and that the actual limitations of stereo radar are still unexplored.

  19. Lambda Vision

    NASA Astrophysics Data System (ADS)

    Czajkowski, Michael

    2014-06-01

    There is an explosion in the quantity and quality of IMINT data being captured in Intelligence Surveillance and Reconnaissance (ISR) today. While automated exploitation techniques involving computer vision are arriving, only a few architectures can manage both the storage and bandwidth of large volumes of IMINT data and also present results to analysts quickly. Lockheed Martin Advanced Technology Laboratories (ATL) has been actively researching in the area of applying Big Data cloud computing techniques to computer vision applications. This paper presents the results of this work in adopting a Lambda Architecture to process and disseminate IMINT data using computer vision algorithms. The approach embodies an end-to-end solution by processing IMINT data from sensors to serving information products quickly to analysts, independent of the size of the data. The solution lies in dividing up the architecture into a speed layer for low-latent processing and a batch layer for higher quality answers at the expense of time, but in a robust and fault-tolerant way. This approach was evaluated using a large corpus of IMINT data collected by a C-130 Shadow Harvest sensor over Afghanistan from 2010 through 2012. The evaluation data corpus included full motion video from both narrow and wide area field-of-views. The evaluation was done on a scaled-out cloud infrastructure that is similar in composition to those found in the Intelligence Community. The paper shows experimental results to prove the scalability of the architecture and precision of its results using a computer vision algorithm designed to identify man-made objects in sparse data terrain.

  20. Motion vision for mobile robots

    NASA Astrophysics Data System (ADS)

    Herrb, Matthieu

    This work addresses the problem of using computer vision on mobile robots. Specialized Datacube cards and a parallel machine using a transputer network are studied. The tracking and localization of a three-dimensional object in a sequence of images is examined, using first-order prediction of the motion in the image plane and verification by a maximal-clique search in the graph of mutually compatible matchings. A dynamic environment-modeling module, using numerical fusion between trinocular stereovision and tracking of stereo-matched primitives, is presented. The integration of this perception system into the control architecture of a mobile robot is examined to achieve various functions, such as vision-servoed motion and environment modeling. The functional units implementing vision tasks and the data exchanged with other units are outlined. Experiments with the mobile robot Hilare 1.5 allowed the proposed algorithms and concepts to be validated.

  1. Computing 3-D structure of rigid objects using stereo and motion

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1987-01-01

    Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two clues considered in this system. The system basically consists of three processes: the low level process to extract image features, the middle level process to establish the correspondence in the stereo (spatial) and motion (temporal) modalities, and the high level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.

  2. Epipolar geometry of opti-acoustic stereo imaging.

    PubMed

    Negahdaripour, Shahriar

    2007-10-01

    Optical and acoustic cameras are suitable imaging systems for inspecting underwater structures, in both regular maintenance and security operations. Despite high resolution, optical systems have a limited visibility range when deployed in turbid waters. In contrast, the new generation of high-frequency (MHz) acoustic cameras can provide images with enhanced target details in highly turbid waters, though their range is reduced by one to two orders of magnitude compared to traditional low-/mid-frequency (tens to hundreds of kHz) sonar systems. It is conceivable that an effective inspection strategy is the deployment of both optical and acoustic cameras on a submersible platform, to enable target imaging over a range of turbidity conditions. Under this scenario, and where visibility allows, registration of the images from both cameras arranged in a binocular stereo configuration provides valuable scene information that cannot be readily recovered from either sensor alone. We explore and derive the constraint equations for the epipolar geometry and stereo triangulation when utilizing these two sensing modalities with different projection models. Theoretical results supported by computer simulations show that an opti-acoustic stereo imaging system outperforms traditional binocular vision with optical cameras, particularly for increasing target distance and (or) turbidity. PMID:17699922
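    For the conventional optical baseline the opti-acoustic system is compared against, triangulation in a rectified binocular rig reduces to the familiar disparity relation. A minimal sketch, with illustrative focal-length and baseline values that are not from the paper:

```python
# Hedged sketch: depth from disparity for a rectified optical stereo pair,
# the classical case the opti-acoustic constraints generalize.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Range from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for points in front of the rig")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 640 px focal length, 0.25 m baseline, 16 px disparity.
print(depth_from_disparity(16.0, 640.0, 0.25))  # 10.0 (meters)
```

    The error of this estimate grows quadratically with range, which is one reason the acoustic modality helps at larger target distances.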

  3. Calibration of stereo rigs based on the backward projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin

    2016-08-01

    High-accuracy 3D measurement based on a binocular vision system is heavily dependent on the accurate calibration of the two rigidly fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee minimal 2D pixel errors, not minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera by planar constraints provided by the planar pattern target. Then, combined with pre-defined spatial points, the intrinsic and extrinsic parameters of the stereo rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study of the method in the presence of image noise and lens distortions is carried out. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.
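    The core of the BPP idea is to score a candidate calibration by 3D rather than 2D error. A toy version of that objective can be sketched as below; the point sets and names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: the BPP-style objective scores a candidate calibration by the
# total 3D distance between reconstructed points and the known target points,
# instead of by 2D reprojection error. Points here are toy values.
import math

def total_3d_error(reconstructed, reference):
    """Sum of Euclidean distances between corresponding 3-D points."""
    return sum(math.dist(p, q) for p, q in zip(reconstructed, reference))

target = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0)]       # known points on the planar target
estimate = [(0.0, 0.0, 1.01), (0.1, 0.01, 1.0)]   # candidate reconstruction
print(total_3d_error(estimate, target))  # ~0.02
```

    An optimizer would adjust the intrinsic and extrinsic parameters to drive this sum down for both cameras simultaneously.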

  4. Exercise for People with Low Vision

    MedlinePlus

    ... Be a Partner Exercise for People with Low Vision People with low vision can be active in many ways! Before you ... your orientation. Learn more about living with low vision from the National Eye Institute on NIH . Find ...

  5. Parallel vision algorithms. Annual technical report No. 2, 1 October 1987-28 December 1988

    SciTech Connect

    Ibrahim, H.A.; Kender, J.R.; Brown, L.G.

    1989-01-01

    This Second Annual Technical Report covers the project activities during the period from October 1, 1987 through December 31, 1988. The objective of this project is to develop and implement, on highly parallel computers, vision algorithms that combine stereo, texture, and multi-resolution techniques for determining local surface orientation and depth. Such algorithms can serve as front-end components of autonomous land-vehicle vision systems. During the second year of the project, efforts concentrated on the following: first, implementing and testing on the Connection Machine the parallel programming environment that will be used to develop, implement and test our parallel vision algorithms; second, implementing and testing primitives for the multi-resolution stereo and texture algorithms, in this environment. Also, efforts were continued to refine techniques used in the texture algorithms, and to develop a system that integrates information from several shape-from-texture methods. This report describes the status and progress of these efforts. The authors describe first the programming environment implementation, and how to use it. They summarize the results for multi-resolution based depth-interpolation algorithms on parallel architectures. Then, they present algorithms and test results for the texture algorithms. Finally, the results of the efforts of integrating information from various shape-from-texture algorithms are presented.

  6. Stereo 3D vision adapter using commercial DIY goods

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Ohara, Takashi

    2009-10-01

    A conventional display can show only one screen and cannot enlarge its area, for example to double it. Meanwhile, a mirror supplies the same image, but the mirror image is usually upside down. Assume that the images on the original screen and on the virtual screen in the mirror are completely different and that both can be displayed independently: the screen area could then be doubled. This extension method lets observers view the virtual image plane and doubles the screen area. Although the display region is doubled, this virtual display cannot produce 3D images. In this paper, we present an extension method using a unidirectional diffusing image screen and an improvement for displaying a 3D image using orthogonal polarized image projection.

  7. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
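    The described processing can be sketched end to end: subtract the pre-illumination frame to isolate the laser spot in each view, take the spot centroid, and convert the horizontal disparity to range. This is a toy reconstruction of the idea, not the patented implementation; image sizes, thresholds, and camera parameters are illustrative assumptions.

```python
# Hedged sketch of the laser-spot ranging pipeline. Grayscale frames are
# modeled as nested lists; all parameter values are illustrative.

def spot_centroid_x(before, after, thresh=50):
    """Mean x of pixels that brightened by more than `thresh` (the laser spot)."""
    xs = [x for row_b, row_a in zip(before, after)
          for x, (pb, pa) in enumerate(zip(row_b, row_a)) if pa - pb > thresh]
    return sum(xs) / len(xs)

def stereo_range(disparity_px, focal_px, baseline_m):
    """Z = f * B / d from the horizontal disparity of the isolated spot."""
    return focal_px * baseline_m / disparity_px

# Toy single-row "images": the spot lands at x=4 in the left view, x=2 in the right.
dark = [[0] * 8]
left_img = [[0] * 8]; left_img[0][4] = 255
right_img = [[0] * 8]; right_img[0][2] = 255
disparity = spot_centroid_x(dark, left_img) - spot_centroid_x(dark, right_img)
print(stereo_range(disparity, 500.0, 0.2))  # 50.0 (meters)
```

    Subtracting the pre-laser frame removes all static background, so the correspondence problem collapses to matching a single bright blob.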

  8. FPGA implementation of glass-free stereo vision

    NASA Astrophysics Data System (ADS)

    Tang, Weidong; Yan, Xiaolin

    2016-04-01

    This paper presents a real-time, efficient, FPGA-based glass-free 3D system. The system converts a two-view 1080p input stream at 60 frames per second (fps) into a multi-view video at 30 fps and 4K resolution. To provide a smooth and comfortable viewing experience, glass-free 3D systems must display multi-view video. Generating a multi-view video from a two-view input involves three steps: first, compute disparity maps from the two input views; second, synthesize a set of new views from the computed disparity maps and input views; third, produce the output video from the new views according to the specifications of the lens installed on the TV set.
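    The second step, new-view synthesis, can be illustrated with a one-scanline forward warp: each pixel is shifted horizontally by a fraction of its disparity to render an intermediate viewpoint. This is a toy sketch of the general technique, not the paper's FPGA pipeline; a real system also handles occlusions and fills holes.

```python
# Hedged sketch of disparity-based view synthesis on a single scanline.
# alpha = 0 reproduces the left view; alpha = 1 approximates the right view.

def synthesize_scanline(left_row, disparity_row, alpha):
    """Forward-warp one row of pixels toward a virtual viewpoint."""
    out = [None] * len(left_row)            # None marks holes (disocclusions)
    for x, (v, d) in enumerate(zip(left_row, disparity_row)):
        xs = x - round(alpha * d)           # shift by a fraction of the disparity
        if 0 <= xs < len(out):
            out[xs] = v
    return out

row = [10, 20, 30, 40]
disp = [0, 0, 2, 2]
print(synthesize_scanline(row, disp, 0.5))  # [10, 30, 40, None]
```

    Repeating this for several values of alpha yields the set of views a lenticular panel interleaves.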

  9. DETERMINATION OF EARLY STAGE CORN PLANT HEIGHT USING STEREO VISION

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The ability to map crop height and changes in crop height over time in agricultural fields would be a useful diagnostic tool to identify where and when crop stress is occurring. Additionally, plant height or rate of plant height change could be used to evaluate spatial crop response to inputs of fe...

  10. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. The human brain has been found to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our understanding of the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures that represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures; higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on these principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.

  11. Statistical Building Roof Reconstruction from WORLDVIEW-2 Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Partovi, T.; Huang, H.; Krauß, T.; Mayer, H.; Reinartz, P.

    2015-03-01

    3D building reconstruction from point clouds is an active research topic in remote sensing, photogrammetry and computer vision. Most prior research has addressed 3D building reconstruction from LiDAR data, which is high-resolution and dense. The interest of this work is 3D building reconstruction from Digital Surface Models (DSMs) obtained by stereo image matching of spaceborne satellite data, which cover larger areas than LiDAR datasets in one acquisition step and can also be used for remote regions. The challenge is the noise of this data, due to low resolution and matching errors. In this paper, a combined top-down and bottom-up method is developed to find building roof models which exhibit the optimum fit to the point clouds of the DSM. In the bottom-up step of this hybrid method, the building mask and roof components such as ridge lines are extracted. In addition, to reduce the computational complexity and the search space, roofs are classified into pitched and flat roofs. Ridge lines are used to estimate the parameters of roof primitives from a building library, such as width, length, position and orientation. Thereafter, a top-down approach based on Markov Chain Monte Carlo and simulated annealing is applied to optimize the roof parameters iteratively, by stochastic sampling and by minimizing the average Euclidean distance between the point cloud and the model surface as the fitness function. Experiments are performed on two areas of the city of Munich which include three roof types (hipped, gable and flat). The results show the efficiency of this method even for this type of noisy dataset.
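    The top-down refinement loop, perturb the roof parameters, score by average point-to-model distance, and accept worse moves with a temperature-dependent probability, can be sketched in one dimension. The single "ridge height" parameter and all numbers below are stand-ins for the paper's full roof parameterization, not its actual code.

```python
# Hedged sketch of simulated annealing with a Metropolis acceptance rule,
# minimizing the average distance between DSM samples and a 1-D roof model.
import math, random

def fitness(height, points):
    """Average distance between DSM heights and the candidate model height."""
    return sum(abs(p - height) for p in points) / len(points)

def anneal(points, h=8.0, temp=1.0, cooling=0.95, steps=200, seed=0):
    rng = random.Random(seed)
    best = h
    for _ in range(steps):
        cand = h + rng.gauss(0, 0.5)                  # stochastic perturbation
        delta = fitness(cand, points) - fitness(h, points)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            h = cand                                  # Metropolis acceptance
            if fitness(h, points) < fitness(best, points):
                best = h
        temp *= cooling                               # cool the schedule
    return best

pts = [9.8, 10.1, 10.0, 9.9, 10.2]   # noisy DSM samples near a 10 m ridge
print(anneal(pts))                    # converges near 10.0
```

    Early on, the high temperature lets the sampler escape local minima caused by DSM noise; as it cools, the search becomes greedy around the best-fitting roof.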

  12. Viewing The Entire Sun With STEREO And SDO

    NASA Astrophysics Data System (ADS)

    Thompson, William T.; Gurman, J. B.; Kucera, T. A.; Howard, R. A.; Vourlidas, A.; Wuelser, J.; Pesnell, D.

    2011-05-01

    On 6 February 2011, the two Solar Terrestrial Relations Observatory (STEREO) spacecraft were at 180 degrees separation. This allowed the first-ever simultaneous view of the entire Sun. Combining the STEREO data with corresponding images from the Solar Dynamics Observatory (SDO) allows this full-Sun view to continue for the next eight years. We show how the data from the three viewpoints are combined into a single heliographic map. Processing of the STEREO beacon telemetry allows these full-Sun views to be created in near-real-time, allowing tracking of solar activity even on the far side of the Sun. This is a valuable space-weather tool, not only for anticipating activity before it rotates onto the Earth-view, but also for deep space missions in other parts of the solar system. Scientific use of the data includes the ability to continuously track the entire lifecycle of active regions, filaments, coronal holes, and other solar features. There is also a significant public outreach component to this activity. The STEREO Science Center produces products from the three viewpoints used in iPhone/iPad and Android applications, as well as time sequences for spherical projection systems used in museums, such as Science-on-a-Sphere and Magic Planet.

  13. Optics, illumination, and image sensing for machine vision III; Proceedings of the Meeting, Cambridge, MA, Nov. 8, 9, 1988

    SciTech Connect

    Svetkoff, D.J.

    1989-01-01

    Various papers on optics, illumination, and image sensing for machine vision are presented. Some of the optics discussed include: illumination and imaging of moving objects, strobe illumination systems for machine vision, optical collision timer, new electrooptical coordinate measurement system, flexible and piezoresistive touch sensing array, selection of cameras for machine vision, custom fixed-focal length versus zoom lenses, performance of optimal phase-only filters, minimum variance SDF design using adaptive algorithms, Ho-Kashyap associative processors, component spaces for invariant pattern recognition, grid labeling using a marked grid, illumination-based model of stochastic textures, color-encoded moire contouring, noise measurement and suppression in active 3-D laser-based imaging systems, structural stereo matching of Laplacian-of-Gaussian contour segments for 3D perception, earth surface recovery from remotely sensed images, and shape from Lambertian photometric flow fields.

  14. Orbit Determination and Navigation of the Solar Terrestrial Relations Observatory (STEREO)

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael A.; Robertson, Mika; Ottenstein, Neil; Nicholson, Ann; Nicholson, Mark; Ward, Douglas T.; Cosgrove, Jennifer; German, Darla; Hendry, Stephen; Shaw, James

    2007-01-01

    This paper provides an overview of the required upgrades necessary for navigation of NASA's twin heliocentric science missions, Solar TErestrial RElations Observatory (STEREO) Ahead and Behind. The orbit determination of the STEREO spacecraft was provided by the NASA Goddard Space Flight Center's (GSFC) Flight Dynamics Facility (FDF) in support of the mission operations activities performed by the Johns Hopkins University Applied Physics Laboratory (APL). The changes to FDF's orbit determination software included modeling upgrades as well as modifications required to process the Deep Space Network X-band tracking data used for STEREO. Orbit results as well as comparisons to independently computed solutions are also included. The successful orbit determination support aided in maneuvering the STEREO spacecraft, launched on October 26, 2006 (00:52 Z), to target the lunar gravity assists required to place the spacecraft into their final heliocentric drift-away orbits where they are providing stereo imaging of the Sun.

  16. Wide baseline stereo matching based on double topological relationship consistency

    NASA Astrophysics Data System (ADS)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

    Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo vision matching. A novel scheme is presented, called double topological relationship consistency (DCTR). The combined double topological configuration includes the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only establishes a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, and it overcomes many problems of traditional methods that depend on invariance to changes in scale, rotation or illumination across large view changes and even occlusions. Experimental examples are shown where the two cameras are located in very different orientations. Epipolar geometry can also be recovered using RANSAC, possibly the most widely adopted method. With this method, we can obtain correspondences with high precision on wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on the image pairs.
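    The RANSAC step mentioned above follows a generic sample-score-refine loop. As a stand-in for fundamental-matrix estimation (which needs a 7- or 8-point minimal sample), the sketch below runs the same loop on 2-D line fitting with a 2-point sample; thresholds and counts are illustrative assumptions.

```python
# Hedged sketch of RANSAC: repeatedly fit a model to a random minimal sample,
# count inliers within a threshold, and keep the model with the most support.
import random

def ransac_line(points, iters=100, thresh=0.1, seed=1):
    rng = random.Random(seed)
    model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)    # minimal sample
        if x1 == x2:
            continue                                   # degenerate pair
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < thresh]
        if len(inliers) > len(best_inliers):
            model, best_inliers = (m, b), inliers
    return model, best_inliers

pts = [(x, 2 * x + 1) for x in range(10)] + [(3.0, 40.0), (7.0, -5.0)]  # y=2x+1 plus 2 outliers
(m, b), inliers = ransac_line(pts)
print(m, b, len(inliers))  # 2.0 1.0 10
```

    The mismatch-discarding role is the same in wide-baseline matching: correspondences inconsistent with the dominant epipolar geometry are rejected as outliers.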

  17. Key characteristics of specular stereo

    PubMed Central

    Muryy, Alexander A.; Fleming, Roland W.; Welchman, Andrew E.

    2014-01-01

    Because specular reflection is view-dependent, shiny surfaces behave radically differently from matte, textured surfaces when viewed with two eyes. As a result, specular reflections pose substantial problems for binocular stereopsis. Here we use a combination of computer graphics and geometrical analysis to characterize the key respects in which specular stereo differs from standard stereo, to identify how and why the human visual system fails to reconstruct depths correctly from specular reflections. We describe rendering of stereoscopic images of specular surfaces in which the disparity information can be varied parametrically and independently of monocular appearance. Using the generated surfaces and images, we explain how stereo correspondence can be established with known and unknown surface geometry. We show that even with known geometry, stereo matching for specular surfaces is nontrivial because points in one eye may have zero, one, or multiple matches in the other eye. Matching features typically yield skew (nonintersecting) rays, leading to substantial ortho-epipolar components to the disparities, which makes deriving depth values from matches nontrivial. We suggest that the human visual system may base its depth estimates solely on the epipolar components of disparities while treating the ortho-epipolar components as a measure of the underlying reliability of the disparity signals. Reconstructing virtual surfaces according to these principles reveals that they are piece-wise smooth with very large discontinuities close to inflection points on the physical surface. Together, these distinctive characteristics lead to cues that the visual system could use to diagnose specular reflections from binocular information. PMID:25540263

  18. Active vision system for planning and programming of industrial robots in one-of-a-kind manufacturing

    NASA Astrophysics Data System (ADS)

    Berger, Ulrich; Schmidt, Achim

    1995-10-01

    The aspects of automation technology in industrial one-of-a-kind manufacturing are discussed. An approach to improve the quality-to-cost relation is developed, and an overview of a 3D-vision-supported automation system is given. This system is based on an active vision sensor for 3D-geometry feedback; its measurement principle, the coded-light approach, is explained. The experimental environment for the technical validation of the automation approach is demonstrated, in which robot-based processes (assembly, arc welding and flame cutting) are graphically simulated and off-line programmed. A typical process sequence for automated one-of-a-kind manufacturing is described. The results of this research and development are applied to a project on the automated disassembly of car parts for recycling using industrial robots.

  19. Low Vision

    MedlinePlus

    ... Cases of Low Vision (in thousands) by Age, Gender, and Race/Ethnicity Table for 2010 U.S. Prevalent ... Cases of Low Vision (in thousands) by Age, Gender, and Race/Ethnicity Table for 2000 U.S. Prevalent ...

  20. Stereo reconstruction from multiperspective panoramas.

    PubMed

    Li, Yin; Shum, Heung-Yeung; Tang, Chi-Keung; Szeliski, Richard

    2004-01-01

    A new approach to computing a panoramic (360 degree) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images and thus avoids the problems of conventional multibaseline stereo. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to a first-order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximately horizontal epipolar geometry inherent in multiperspective panoramas. It comprises a novel and efficient 1D multibaseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that our algorithms are capable of producing comparably high-quality depth maps which can be used for applications such as view interpolation. PMID:15382685

  1. GARGOYLE: An environment for real-time, context-sensitive active vision

    SciTech Connect

    Prokopowicz, P.N.; Swain, M.J.; Firby, R.J.; Kahn, R.E.

    1996-12-31

    Researchers in robot vision have access to several excellent image-processing packages (e.g., Khoros, Vista, Susan, MIL, and X Vision, to name only a few) as a base for any new vision software needed in most navigation and recognition tasks. Our work in autonomous robot control and human-robot interaction, however, has demanded a new level of run-time flexibility and performance: on-the-fly configuration of visual routines that exploit up-to-the-second context from the task, image, and environment. The result is Gargoyle: an extendible, on-board, real-time vision software package that allows a robot to configure, parameterize, and execute image-processing pipelines at run-time. Each operator in a pipeline works at a level of resolution and over regions of interest that are computed by upstream operators or set by the robot according to task constraints. Pipeline configurations and operator parameters can be stored as a library of visual methods appropriate for different sensing tasks and environmental conditions. Beyond this, a robot may reason about the current task and environmental constraints to construct novel visual routines that are too specialized to work under general conditions, but that are well-suited to the immediate environment and task. We use the RAP reactive plan-execution system to select and configure pre-compiled processing pipelines, and to modify them for specific constraints determined at run-time.

  2. Lidar multi-range integrated Dewar assembly (IDA) for active-optical vision navigation sensor

    NASA Astrophysics Data System (ADS)

    Mayner, Philip; Clemet, Ed; Asbrock, Jim; Chen, Isabel; Getty, Jonathan; Malone, Neil; De Loo, John; Giroux, Mark

    2013-09-01

    A multi-range focal plane was developed and delivered by Raytheon Vision Systems for a docking system that was demonstrated on STS-134. This required state-of-the-art focal-plane and electronics synchronization to capture nanosecond-length laser pulses and determine ranges with an accuracy of less than 1 inch.

  3. Orthographic Stereo Correlator on the Terrain Model for Apollo Metric Images

    NASA Technical Reports Server (NTRS)

    Kim, Taemin; Husmann, Kyle; Moratto, Zachary; Nefian, Ara V.

    2011-01-01

    A stereo correlation method in the object domain is proposed to generate accurate and dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce high-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. Given the camera parameters of an image pair from bundle adjustment in ASP, a correlation window is defined on the terrain, with a predefined surface normal at each post, rather than in the image domain. The squared error of the back-projected images on the local terrain is minimized with respect to the post elevation. This single-dimensional optimization is solved efficiently and improves the accuracy of the elevation estimate.
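    A single-dimensional minimization like the one described can be solved with a standard bracketing method such as golden-section search. The sketch below uses a quadratic toy cost as a stand-in for the squared back-projection error; the "true" elevation of 1520 m and the search bracket are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: golden-section search over the post elevation, minimizing a
# stand-in for the squared error between the two back-projected image patches.

PHI = (5 ** 0.5 - 1) / 2   # golden-ratio factor, ~0.618

def golden_section_min(cost, a, b, tol=1e-4):
    """Shrink [a, b] around the minimizer of a unimodal cost function."""
    while b - a > tol:
        c = b - PHI * (b - a)    # left probe
        d = a + PHI * (b - a)    # right probe
        if cost(c) < cost(d):
            b = d                # minimum lies in [a, d]
        else:
            a = c                # minimum lies in [c, b]
    return (a + b) / 2

ssd = lambda z: (z - 1520.0) ** 2   # toy squared back-projection error
print(golden_section_min(ssd, 1000.0, 2000.0))  # ~1520.0
```

    Each iteration shrinks the bracket by a constant factor, so the per-post cost is logarithmic in the required elevation precision.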

  4. Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.

    2015-12-01

    Semi-global matching is a well-known stereo matching algorithm in the photogrammetric and computer vision communities. Epipolar images are assumed as input to this algorithm. The epipolar geometry of linear-array scanners is not a straight line, as it is for a frame camera. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model, or ground control points. In this paper we propose a new epipolar resampling method which works without this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering stereo pairs. The original images are also divided into small tiles. By omitting the need for extra information, the speed of the matching algorithm is increased and the memory requirement is decreased. Our experiments on a GeoEye-1 stereo pair captured over the city of Qom, Iran, demonstrate that the epipolar images are generated with sub-pixel accuracy.

  5. Compact stereo endoscopic camera using microprism arrays.

    PubMed

    Yang, Sung-Pyo; Kim, Jae-Jun; Jang, Kyung-Won; Song, Weon-Kook; Jeong, Ki-Hun

    2016-03-15

    This work reports a microprism array (MPA) based compact stereo endoscopic camera with a single image sensor. The MPAs were monolithically fabricated by using two-step photolithography and geometry-guided resist reflow to form an appropriate prism angle for stereo image pair formation. The fabricated MPAs were transferred onto a glass substrate with a UV curable resin replica by using polydimethylsiloxane (PDMS) replica molding and then successfully integrated in front of a single camera module. The stereo endoscopic camera with MPA splits an image into two stereo images and successfully demonstrates the binocular disparities between the stereo image pairs for objects with different distances. This stereo endoscopic camera can serve as a compact and 3D imaging platform for medical, industrial, or military uses. PMID:26977690

  6. Estimation of inferior-superior vocal fold kinematics from high-speed stereo endoscopic data in vivo.

    PubMed

    Sommer, David E; Tokuda, Isao T; Peterson, Sean D; Sakakibara, Ken-Ichi; Imagawa, Hiroshi; Yamauchi, Akihito; Nito, Takaharu; Yamasoba, Tatsuya; Tayama, Niro

    2014-12-01

    Despite being an indispensable tool for both researchers and clinicians, traditional endoscopic imaging of the human vocal folds is limited in that it cannot capture their inferior-superior motion. A three-dimensional reconstruction technique using high-speed video imaging of the vocal folds in stereo is explored in an effort to estimate the inferior-superior motion of the medial-most edge of the vocal folds under normal muscle activation in vivo. Traditional stereo-matching algorithms from the field of computer vision are considered and modified to suit the specific challenges of the in vivo application. Inferior-superior motion of the medial vocal fold surface of three healthy speakers is reconstructed over one glottal cycle. The inferior-superior amplitude of the mucosal wave is found to be approximately 13 mm for normal modal voice, reducing to approximately 3 mm for strained falsetto voice, with uncertainty estimated at σ ≈ 2 mm and σ ≈ 1 mm, respectively. Sources of error, and their relative effects on the estimation of the inferior-superior motion, are considered and recommendations are made to improve the technique. PMID:25480074

  7. Vision Based Localization in Urban Environments

    NASA Technical Reports Server (NTRS)

    McHenry, Michael; Cheng, Yang; Matthies, Larry

    2005-01-01

    As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system are: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location, and 3) the video streams produced by each camera. At each step during the traverse the system captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step, and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we derived the a priori map manually from non-orthorectified overhead imagery, although this process could be automated. The software system consists of two primary components. The first is the vision system, which uses binocular stereo ranging together with a set of heuristics to identify features likely to be part of building exteriors and to compute an estimate of the robot's motion since the previous step. The resulting visual features and the associated range measurements are then fed to the second primary software component, a particle-filter-based localization system, which uses the map and the most recent results from the vision system to update the estimate of the robot's location. This report summarizes the design of both the hardware and software and includes the results of applying the system to the global localization of a robot over an approximately half-kilometer traverse across JPL's Pasadena campus.
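    The particle-filter update described above follows a standard predict-weight-resample cycle. The sketch below runs that cycle in a one-dimensional world with a single known building wall; the world model, noise levels, and all numbers are illustrative assumptions, not JPL's implementation.

```python
# Hedged sketch of a particle filter: predict each particle with the motion
# estimate, weight it by the likelihood of a (toy) range measurement to a
# mapped wall, and resample in proportion to the weights.
import math, random

def pf_step(particles, motion, measured_range, wall_pos, rng, noise=0.5):
    # Predict: apply the ego-motion estimate plus process noise.
    moved = [p + motion + rng.gauss(0, 0.1) for p in particles]
    # Weight: Gaussian likelihood of the measured range to the mapped wall.
    weights = [math.exp(-((wall_pos - p) - measured_range) ** 2 / (2 * noise ** 2))
               for p in moved]
    # Resample proportionally to weight.
    return rng.choices(moved, weights=weights, k=len(moved))

rng = random.Random(42)
particles = [rng.uniform(0, 10) for _ in range(500)]  # unknown start position
for _ in range(20):                                   # robot sits at 2.0; wall at 10.0
    particles = pf_step(particles, 0.0, 8.0, 10.0, rng)
print(sum(particles) / len(particles))  # mean settles near the true position, 2.0
```

    Because the belief is a particle set rather than a single estimate, the filter can hold several plausible locations at once until the measurements disambiguate them, which is exactly the multi-hypothesis behavior the report describes.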

  8. Subjective evaluations of multiple three-dimensional displays by a stereo-deficient viewer: an interesting case study

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Ellis, Sharon A.; Harrington, Lawrence K.; Havig, Paul R.

    2014-06-01

    A study was conducted with sixteen observers evaluating four different three-dimensional (3D) displays for usability, quality, and physical comfort. One volumetric display and three different stereoscopic displays were tested. The observers completed several different types of questionnaires before, during and after each test session. All observers were tested for distance acuity, color vision, and stereoscopic acuity. One observer in particular appeared to have either degraded or absent binocular vision on the stereo acuity test. During the subjective portions of the data collection, this observer showed no obvious signs of depth perception problems and finished the study with no issues reported. Upon further post-hoc stereovision testing of this observer, we discovered that he essentially failed all tests requiring depth judgments of fine disparity and had at best only gross levels of stereoscopic vision (failed all administered stereoacuity threshold tests, testing up to about 800 arc sec of disparity). When questioned about this, the stereo-deficiency was unknown to the observer, who reported having seen several stereoscopic 3D movies (and enjoyed the 3D experiences). Interestingly, we had collected subjective reports about the quality of three-dimensional imagery across multiple stereoscopic displays from a person with deficient stereo-vision. We discuss the participant's unique pattern of results and compare and contrast these results with the other stereo-normal participants. The implications for subjective measurements on stereoscopic three-dimensional displays and for subjective display measurement in general are considered.

  9. Comparative randomised active drug controlled clinical trial of a herbal eye drop in computer vision syndrome.

    PubMed

    Chatterjee, Pranab Kr; Bairagi, Debasis; Roy, Sudipta; Majumder, Nilay Kr; Paul, Ratish Ch; Bagchi, Sunil Ch

    2005-07-01

    A comparative double-blind placebo-controlled clinical trial of a herbal eye drop (itone) was conducted to find out its efficacy and safety in 120 patients with computer vision syndrome. Patients using computers for more than 3 hours continuously per day having symptoms of watering, redness, asthenia, irritation, foreign body sensation and signs of conjunctival hyperaemia, corneal filaments and mucus were studied. One hundred and twenty patients were randomly given either placebo, tears substitute (tears plus) or itone in identical vials with specific code number and were instructed to put one drop four times daily for 6 weeks. Subjective and objective assessments were done at bi-weekly intervals. In computer vision syndrome both subjective and objective improvements were noticed with itone drops. Itone drop was found significantly better than placebo (p<0.01) and almost identical results were observed with tears plus (difference was not statistically significant). Itone is considered to be a useful drug in computer vision syndrome. PMID:16366195

  10. VPython: Python plus Animations in Stereo 3D

    NASA Astrophysics Data System (ADS)

    Sherwood, Bruce

    2004-03-01

    Python is a modern object-oriented programming language. VPython (http://vpython.org) is a combination of Python (http://python.org), the Numeric module from LLNL (http://www.pfdubois.com/numpy), and the Visual module created by David Scherer, all of which have been under continuous development as open source projects. VPython makes it easy to write programs that generate real-time, navigable 3D animations. The Visual module includes a set of 3D objects (sphere, cylinder, arrow, etc.), tools for creating other shapes, and support for vector algebra. The 3D renderer runs in a parallel thread, and animations are produced as a side effect of computations, freeing the programmer to concentrate on the physics. Applications include educational and research visualization. In the Fall of 2003 Hugh Fisher at the Australian National University, John Zelle at Wartburg College, and I contributed to a new stereo capability of VPython. By adding a single statement to an existing VPython program, animations can be viewed in true stereo 3D. One can choose several modes: active shutter glasses, passive polarized glasses, or colored glasses (e.g. red-cyan). The talk will demonstrate the new stereo capability and discuss the pros and cons of various schemes for display of stereo 3D for a large audience. Supported in part by NSF grant DUE-0237132.

  11. Vision and Motion Pictures.

    ERIC Educational Resources Information Center

    Grambo, Gregory

    1998-01-01

    Presents activities on persistence of vision that involve students in a hands-on approach to the study of early methods of creating motion pictures. Students construct flip books, a Zoetrope, and an early movie machine. (DDR)

  12. Stereo imaging based particle velocimeter

    NASA Technical Reports Server (NTRS)

    Batur, Celal

    1994-01-01

    Three dimensional coordinates of an object are determined from its two dimensional images for a class of points on the object. Two dimensional images are first filtered by a Laplacian of Gaussian (LOG) filter in order to detect a set of feature points on the object. The feature points on the left and the right images are then matched using a Hopfield type optimization network. The performance index of the Hopfield network contains both local and global properties of the images. Parallel computing in stereo matching can be achieved by the proposed methodology.
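
The first stage of that pipeline, LoG feature filtering, is straightforward to sketch. Below is a minimal, dependency-free version (kernel size and sigma are illustrative choices, not values from the paper); the Hopfield matching stage is omitted:

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel; the mean is subtracted so
    the response on perfectly flat image regions is exactly zero."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x ** 2 + y ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def convolve2d(img, k):
    """Naive 'valid'-mode 2-D correlation (no SciPy dependency)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for yy in range(out.shape[0]):
        for xx in range(out.shape[1]):
            out[yy, xx] = (img[yy:yy + kh, xx:xx + kw] * k).sum()
    return out

# A vertical step edge produces a strong signed response; flat areas give 0.
img = np.zeros((20, 20))
img[:, 10:] = 1.0
resp = convolve2d(img, log_kernel())
```

Feature points would then be taken at zero-crossings or response extrema before being handed to the matching network.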

  13. Stereo matching using Hebbian learning.

    PubMed

    Pajares, G; Cruz, J M; Lopez-Orozco, J A

    1999-01-01

    This paper presents an approach to the local stereo matching problem using edge segments as features with several attributes. We have verified that the differences in attributes for the true matches cluster in a cloud around a center. The correspondence is established on the basis of the minimum distance criterion, computing the Mahalanobis distance between the difference of the attributes for a current pair of features and the cluster center (similarity constraint). We introduce a learning strategy based on the Hebbian Learning to get the best cluster center. A comparative analysis among methods without learning and with other learning strategies is illustrated. PMID:18252332
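
The similarity constraint above reduces to a nearest-cluster test under the Mahalanobis distance. A hedged sketch follows; the attribute vectors, cluster center, and covariance are invented for illustration, not taken from the paper:

```python
import numpy as np

# Differences of edge-segment attributes (e.g. length, orientation,
# contrast) for candidate left/right pairings. True matches cluster
# around `center`; `cov` models the spread of that cloud.
center = np.array([0.1, 0.0, 0.05])      # hypothetical learned cluster center
cov = np.diag([0.04, 0.01, 0.02])        # hypothetical attribute covariance
cov_inv = np.linalg.inv(cov)

def mahalanobis(diff):
    """Mahalanobis distance of an attribute-difference vector to the center."""
    d = diff - center
    return float(np.sqrt(d @ cov_inv @ d))

candidates = {
    "pair_a": np.array([0.12, 0.01, 0.04]),   # near the true-match cluster
    "pair_b": np.array([0.90, 0.30, 0.50]),   # far from it
}
# Minimum-distance criterion: accept the candidate closest to the cluster.
best = min(candidates, key=lambda k: mahalanobis(candidates[k]))
```

In the paper the center itself is refined by Hebbian learning; here it is simply held fixed.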

  14. The STEREO Mission: An Overview

    NASA Astrophysics Data System (ADS)

    Kaiser, M. L.

    2004-12-01

    In February 2006, NASA will launch the twin STEREO spacecraft from Kennedy Space Center aboard a Delta 7925 launch vehicle. After a series of highly eccentric Earth orbits with apogees beyond the moon, each spacecraft will use close flybys of the moon to escape into heliocentric orbits at 1 AU, with one spacecraft trailing Earth and the other leading Earth. As viewed from the sun, the two spacecraft will separate at approximately 45 degrees per year. The purposes of the STEREO Mission are to understand the causes and mechanisms of CME initiation and to follow the propagation of CMEs through the heliosphere. Additionally, STEREO will study the mechanisms and sites of energetic particle acceleration and determine 3-D time-dependent traces of the magnetic topology, temperature, density and velocity of the solar wind between the sun and Earth. To accomplish these goals, each STEREO spacecraft will be equipped with a set of optical, radio and in situ particles and fields instruments. The SECCHI suite of instruments includes two white light coronagraphs covering the range from 1.4 to 15 solar radii, an extreme ultraviolet imager covering the chromosphere and inner corona, and two heliospheric white light imagers covering the outer corona from 12 solar radii to 1 AU. The IMPACT suite of instruments will measure in situ solar wind electrons in the energy range from essentially 0 to 100 keV, energetic electrons to 6 MeV, and protons and heavier ions to 100 MeV/nucleon. IMPACT also contains a magnetometer to measure the in situ magnetic field strength and direction. The PLASTIC instrument will measure the composition of heavy ions as well as protons and alpha particles. The SWAVES instrument will use radio waves to track the location of CME-driven shocks and the 3-D topology of open field lines along which energetic particles flow. Additionally, SWAVES will measure in situ plasma waves to provide an independent estimate of the local plasma density and temperature.

  15. Automatic harvesting of asparagus: an application of robot vision to agriculture

    NASA Astrophysics Data System (ADS)

    Grattoni, Paolo; Cumani, Aldo; Guiducci, Antonio; Pettiti, Giuseppe

    1994-02-01

    This work presents a system for the automatic selective harvesting of asparagus in open field being developed in the framework of the Italian National Project on Robotics. It is composed of a mobile robot, equipped with a suitable manipulator, and driven by a stereo-vision module. In this paper we discuss in detail the problems related to the vision module.

  16. Stereo Pair, Patagonia, Argentina

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This view of northern Patagonia, at Los Menucos, Argentina shows remnants of relatively young volcanoes built upon an eroded plain of much older and contorted volcanic, granitic, and sedimentary rocks. The large purple, brown, and green 'butterfly' pattern is a single volcano that has been deeply eroded. Large holes on the volcano's flanks indicate that they may have collapsed soon after eruption, as fluid molten rock drained out from under its cooled and solidified outer shell. At the upper left, a more recent eruption occurred and produced a small volcanic cone and a long stream of lava, which flowed down a gully. At the top of the image, volcanic intrusions permeated the older rocks resulting in a chain of small dark volcanic peaks. At the top center of the image, two halves of a tan ellipse pattern are offset from each other. This feature is an old igneous intrusion that has been split by a right-lateral fault. The apparent offset is about 6.6 kilometers (4 miles). Color, tonal, and topographic discontinuities reveal the fault trace as it extends across the image to the lower left. However, young unbroken basalt flows show that the fault has not been active recently.

    This cross-eyed stereoscopic image pair was generated using topographic data from the Shuttle Radar Topography Mission, combined with an enhanced Landsat 7 satellite color image. The topography data are used to create two differing perspectives of a single image, one perspective for each eye. In doing so, each point in the image is shifted slightly, depending on its elevation. When stereoscopically merged, the result is a vertically exaggerated view of the Earth's surface in its full three dimensions.
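
The elevation-dependent shift described above can be sketched directly. This is a hedged nearest-pixel illustration, not the production processing; the scale factor and elevation values are arbitrary:

```python
import numpy as np

def shift_by_elevation(image, elevation, scale=0.05, sign=+1):
    """Shift each pixel horizontally in proportion to its elevation,
    producing one half of a synthetic stereo pair (nearest-pixel
    resampling, border pixels clamped)."""
    h, w = image.shape
    out = np.zeros_like(image)
    cols = np.arange(w)
    for r in range(h):
        shift = np.round(sign * scale * elevation[r]).astype(int)
        src = np.clip(cols - shift, 0, w - 1)
        out[r] = image[r, src]
    return out

# Usage: opposite signs give the left- and right-eye views.
image = np.arange(16.0).reshape(4, 4)
elev = np.full((4, 4), 20.0)      # flat 20-unit plateau (hypothetical)
left = shift_by_elevation(image, elev, sign=+1)
right = shift_by_elevation(image, elev, sign=-1)
```

The column offset between the two views is the stereo disparity, proportional to elevation, which is what makes the merged pair appear three-dimensional.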

    Landsat satellites have provided visible light and infrared images of the Earth continuously since 1972. SRTM topographic data match the 30-meter (99-foot) spatial resolution of most Landsat images and provide a valuable complement for studying the historic and growing Landsat data archive.

  17. All Vision Impairment

    MedlinePlus

    Vision impairment is ... being blind by the U.S. definition.) The category “All Vision Impairment” includes both low vision and blindness. ...

  18. Low Vision FAQs

    MedlinePlus

    What is low vision? Low vision is ...

  19. Living with vision loss

    MedlinePlus

    Diabetes - vision loss; Retinopathy - vision loss; Low vision; Blindness - vision loss ... Low vision is a visual disability. Wearing regular glasses or contacts does not help. People with low vision have ...

  20. STEREO In-situ Data Analysis

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Davis, A. J.; Russell, C. T.

    2006-12-01

    STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long duration, detailed observations of 1 AU magnetic field structures, plasma and suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The PLASTIC instrument takes plasma ion composition measurements completing STEREO's comprehensive in-situ perspective. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICME) and solar wind structures to CMEs and coronal holes observed at the Sun. The uniqueness of the STEREO mission requires novel data analysis tools and techniques to take advantage of the mission's full scientific potential. An interactive browser with the ability to create publication-quality plots has been developed which integrates STEREO's in-situ data with data from a variety of other missions including WIND and ACE. Also, an application program interface (API) is provided allowing users to create custom software that ties directly into STEREO's data set. The API allows for more advanced forms of data mining than currently available through most web-based data services. A variety of data access techniques and the development of cross-spacecraft data analysis tools allow the larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.

  1. Developments in StereoJet technology

    NASA Astrophysics Data System (ADS)

    Scarpetti, Julius J.; DuBois, Philip M.; Friedhoff, Richard M.; Walworth, Vivian K.

    2000-05-01

    We describe here advances in the development of the StereoJet process, which provides stereoscopic hardcopy comprising paired back-to-back digital images produced by inkjet printing. The polarizing images are viewed with conventional 3D glasses. Image quality has benefitted greatly from advances in inkjet printing technology. Recent innovations include simplified antighosting procedures, precision pin registration, and production of large-format display images. Applications include stills from stereoscopic motion pictures, molecular modeling, stereo microscopy, medical imaging, CAD imaging, computer-generated art, and pictorial stereo photography. Accelerated aging tests indicate longevity of StereoJet images in the range of 35-100 years. The commercial introduction of custom StereoJet through licensed service bureaus was initiated in 1999.

  2. Stereo Correspondence Using Moment Invariants

    NASA Astrophysics Data System (ADS)

    Premaratne, Prashan; Safaei, Farzad

    Autonomous navigation is seen as a vital tool in harnessing the enormous potential of Unmanned Aerial Vehicles (UAV) and small robotic vehicles for both military and civilian use. Even though laser-based scanning solutions for Simultaneous Location And Mapping (SLAM) are considered the most reliable for depth estimation, they are not feasible for use in UAVs and small land-based vehicles due to their physical size and weight. Stereovision is considered the best approach for any autonomous navigation solution, as stereo rigs are lightweight and inexpensive. However, stereoscopy, which estimates depth information from pairs of stereo images, can still be computationally expensive and unreliable, mainly because some of the algorithms used in successful stereovision solutions have computational requirements that cannot be met by small robotic vehicles. In our research, we implement a feature-based stereovision solution using moment invariants as a metric to find corresponding regions in image pairs, reducing the computational complexity and improving the accuracy of the disparity measures, which is significant for use in UAVs and in small robotic vehicles.
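
As a concrete illustration of why moment invariants suit this matching task, the first Hu invariant can be computed in a few lines. The window sizes and test pattern below are invented, and the authors' full attribute set is richer than this single invariant:

```python
import numpy as np

def hu1(img):
    """First Hu moment invariant (eta20 + eta02) of a grayscale region.

    Invariant to translation (and, with the normalization by m00**2, to
    scale), so the same region seen in the left and right images of a
    stereo pair yields a similar value regardless of where it appears."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (xs * img).sum() / m00
    ybar = (ys * img).sum() / m00
    mu20 = ((xs - xbar) ** 2 * img).sum()   # second-order central moments
    mu02 = ((ys - ybar) ** 2 * img).sum()
    return (mu20 + mu02) / m00 ** 2         # normalized: eta20 + eta02

# The same bright region at two different positions (as it would appear
# in a left/right image pair) produces the same invariant.
a = np.zeros((20, 20)); a[5:10, 5:12] = 1.0
b = np.zeros((20, 20)); b[9:14, 3:10] = 1.0
```

Matching then reduces to comparing a handful of scalars per region instead of dense pixel correlation, which is the computational saving the abstract points to.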

  3. STEREO Spies McNaught

    NASA Technical Reports Server (NTRS)

    2007-01-01

    An instrument on one of the two new STEREO spacecraft captured an unprecedented view of the brightest comet of the last 40 years. Positioned out in space ahead of the Earth as it orbits the Sun, it had a ringside seat on the very brilliant Comet McNaught. The SECCHI/HI-1A instrument on the NASA STEREO-A (Ahead) spacecraft took the frames for this spectacular video during the period of January 11-18, 2007. (The still shows the comet on January 17.) The full field of view of the HI instrument (a wide-angle sky imager) is centered at about 14 degrees from Sun's center and is 20 degrees wide. The comet tail is approximately 7 degrees in length and shows multiple rays. The image shows the comet tail in spectacular detail, especially once the bright comet head left the field of view and stopped saturating the images. These images are very likely the most detailed images ever taken of a comet while it is very close (0.17 Astronomical Units, which is even closer than Mercury) to the Sun. It has been described by one experienced comet scientist as 'one of, if not the most, beautiful uninterrupted sequence of images of a comet ever made.' Also visible in these movies is Venus (bright object left of center at the bottom) and Mercury (appears from the right later in the sequence). Their brightness even creates saturation streaks on the very sensitive imager.

  4. Dynamically re-configurable CMOS imagers for an active vision system

    NASA Technical Reports Server (NTRS)

    Yang, Guang (Inventor); Pain, Bedabrata (Inventor)

    2005-01-01

    A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.

  5. Three-dimensional stereo by photometric ratios

    SciTech Connect

    Wolff, L.B.; Angelopoulou, E.

    1994-11-01

    We present a methodology for corresponding a dense set of points on an object surface from photometric values for three-dimensional stereo computation of depth. The methodology utilizes multiple stereo pairs of images, with each stereo pair being taken of the identical scene but under different illumination. With just two stereo pairs of images taken under two different illumination conditions, a stereo pair of ratio images can be produced, one for the ratio of left-hand images and one for the ratio of right-hand images. We demonstrate how the photometric ratios composing these images can be used for accurate correspondence of object points. Object points having the same photometric ratio with respect to two different illumination conditions constitute a well-defined equivalence class of physical constraints defined by local surface orientation relative to illumination conditions. We formally show that for diffuse reflection the photometric ratio is invariant to varying camera characteristics, surface albedo, and viewpoint and that therefore the same photometric ratio in both images of a stereo pair implies the same equivalence class of physical constraints. The correspondence of photometric ratios along epipolar lines in a stereo pair of images under different illumination conditions is a correspondence of equivalent physical constraints, and the determination of depth from stereo can be performed. Whereas illumination planning is required, our photometric-based stereo methodology does not require knowledge of illumination conditions in the actual computation of three-dimensional depth and is applicable to perspective views. This technique extends the stereo determination of three-dimensional depth to smooth featureless surfaces without the use of precisely calibrated lighting. We demonstrate experimental depth maps from a dense set of points on smooth objects of known ground-truth shape, determined to within 1% depth accuracy.
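
The albedo-invariance argument is easy to verify numerically for the Lambertian model. The sketch below is a hedged illustration; the light directions and surface normal are arbitrary choices:

```python
import numpy as np

def lambertian(albedo, normal, light):
    """Image irradiance for a Lambertian point: albedo * max(0, n . L)."""
    return albedo * max(0.0, float(np.dot(normal, light)))

n = np.array([0.0, 0.5, 1.0])     # surface normal (need not be unit length:
                                  # the norm cancels in the ratio)
L1 = np.array([0.0, 0.6, 0.8])    # two illumination directions
L2 = np.array([0.6, 0.0, 0.8])

# Two surface points with the same orientation but very different albedo
# produce the same photometric ratio under the two illuminations:
r_dark  = lambertian(0.2, n, L1) / lambertian(0.2, n, L2)
r_light = lambertian(0.9, n, L1) / lambertian(0.9, n, L2)
# both ratios equal (n.L1)/(n.L2) = 1.1/0.8 = 1.375, independent of albedo
```

This is exactly the equivalence class the paper exploits: equal ratios along epipolar lines imply the same orientation-dependent constraint, so correspondence can be established without knowing the albedo or the camera response.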

  6. Scientific Potential of The Stereo Impact Investigation

    NASA Astrophysics Data System (ADS)

    Luhmann, J. G.; Gosling, J. T.; Impact Investigation Team

    The IMPACT (In-situ Measurements of Particles and CME Transients) investigation on STEREO is designed to characterize the 1 AU signatures of CMEs detected by the SECCHI imager and SWAVES radio experiments, and to help infer the structure of the solar wind into which those disturbances propagate. Its measurements cover both thermal and suprathermal electrons, solar energetic particles (SEP), and the interplanetary magnetic field. Each IMPACT fluxgate magnetometer supplies full magnetic field vectors, while each plasma electron analyzer gives nearly 4pi coverage of the solar wind electrons, including the heat flux anisotropy that can be used to determine the magnetic topology of interplanetary structures. The suprathermal electron detectors add the additional capability of inferring field line length and local field connection to flaring active regions. The SEP four-instrument package spans the entire range from suprathermal ions to ~100 MeV protons, including ion composition and directional information for both remote-sensing shock location and shape, and determining SEP maximum fluxes in cases where there is considerable anisotropy not along the nominal Parker Spiral direction. Together with UNH's PLASTIC solar wind ion composition investigation, IMPACT thus constitutes the most complete in-situ instrument package flown to date with a comprehensive solar imaging system. Both multipoint and quadrature-style uses of the IMPACT data are anticipated. When coupled with the planned state-of-the-art simulations, the combined STEREO measurements are expected to reveal how the CME-driven events detected in-situ at 1 AU in the solar wind plasma, magnetic fields, and energetic particles relate to what has occurred on the Sun.

  7. Leading Vision

    ERIC Educational Resources Information Center

    Fawcett, Gay

    2004-01-01

    The current educational landscape makes it imperative that a vision statement become more than a fine-sounding statement that is laminated, hung on the wall, and quickly forgotten. If educators do not have a clear image of the future they wish to create, then someone will be ready to create it for them. But with a clear vision of the future, a…

  8. Learning Visions.

    ERIC Educational Resources Information Center

    Phelps, Margaret S.; And Others

    This paper describes LEARNing Visions, a K-12 intervention program for at-risk youth in Jackson County, Tennessee, involving a partnership between the schools, local businesses, Tennessee Technological University, and Visions Five (a private company). Jackson County is characterized by an undereducated population, a high employment rate, and a low…

  9. A parallel stereo reconstruction algorithm with applications in entomology (APSRA)

    NASA Astrophysics Data System (ADS)

    Bhasin, Rajesh; Jang, Won Jun; Hart, John C.

    2012-03-01

    We propose a fast parallel algorithm for the reconstruction of 3-Dimensional point clouds of insects from binocular stereo image pairs using a hierarchical approach for disparity estimation. Entomologists study various features of insects to classify them, build their distribution maps, and discover genetic links between specimens among various other essential tasks. This information is important to the pesticide and the pharmaceutical industries among others. When considering the large collections of insects entomologists analyze, it becomes difficult to physically handle the entire collection and share the data with researchers across the world. With the method presented in our work, Entomologists can create an image database for their collections and use the 3D models for studying the shape and structure of the insects thus making it easier to maintain and share. Initial feedback shows that the reconstructed 3D models preserve the shape and size of the specimen. We further optimize our results to incorporate multiview stereo which produces better overall structure of the insects. Our main contribution is applying stereoscopic vision techniques to entomology to solve the problems faced by entomologists.
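
The disparity-estimation core of such a reconstruction pipeline is conventionally a block matcher. The sketch below is a plain winner-take-all SAD matcher, not the authors' hierarchical parallel version, and all parameters are illustrative:

```python
import numpy as np

def disparity_sad(left, right, max_disp=4, win=1):
    """Winner-take-all block matching: for each left-image pixel, choose
    the horizontal shift d minimizing the sum of absolute differences
    (SAD) between a (2*win+1)^2 patch and its shifted counterpart."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    L = np.pad(left, win, mode="edge")
    R = np.pad(right, win, mode="edge")
    k = 2 * win + 1
    for y in range(h):
        for x in range(w):
            patch = L[y:y + k, x:x + k]
            costs = []
            for d in range(max_disp + 1):
                xs = max(x - d, 0)                 # clamp at the image border
                costs.append(np.abs(patch - R[y:y + k, xs:xs + k]).sum())
            disp[y, x] = int(np.argmin(costs))
    return disp

# Usage: a synthetic scene at constant disparity 2 (right image shifted).
rng = np.random.default_rng(0)
right = rng.random((12, 12))
left = np.roll(right, 2, axis=1)
disp = disparity_sad(left, right, max_disp=4)
```

Because every pixel's search is independent, this inner loop parallelizes trivially, which is the property a hierarchical, coarse-to-fine scheme like the one described above builds on.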

  10. High-Order Energies for Stereo Segmentation.

    PubMed

    Peng, Jianteng; Shen, Jianbing; Li, Xuelong

    2016-07-01

    In this paper, we propose a novel segmentation approach for stereo images using the high-order energy optimization, which utilizes the disparity maps and statistical information of stereo images to enrich the high-order potential functions. To the best of our knowledge, our approach is the first one to formulate the problem of stereo segmentation as a high-order energy optimization problem, which simultaneously segments the foreground objects in left and right images using the proposed high-order potential function. A new method for designing the penalty function in our high-order term is proposed by the corresponding pixels and their neighboring pixels between left and right images. The relationships of stereo correspondence by disparity maps are further employed to enhance the connections between the left and right stereo images. Experimental results demonstrate that the proposed approach can effectively improve the performance of two kinds of stereo segmentation, including the automatic saliency-aware stereocut and the interactive stereo segmentation with user scribbles. PMID:26208377

  11. Recovery of stereo acuity in adults with amblyopia

    PubMed Central

    Astle, Andrew T; McGraw, Paul V; Webb, Ben S

    2011-01-01

    Disruption of visual input to one eye during early development leads to marked functional impairments of vision, commonly referred to as amblyopia. A major consequence of amblyopia is the inability to encode binocular disparity information leading to impaired depth perception or stereo acuity. If amblyopia is treated early in life (before 4 years of age), then recovery of normal stereoscopic function is possible. Treatment is rarely undertaken later in life (adulthood) because declining levels of neural plasticity are thought to limit the effectiveness of standard treatments. Here, the authors show that a learning-based therapy, designed to exploit experience-dependent plastic mechanisms, can be used to recover stereoscopic visual function in adults with amblyopia. These cases challenge the long-held dogma that the critical period for visual development and the window for treating amblyopia are one and the same. PMID:22707543

  12. Real-time vision systems

    SciTech Connect

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee

    1994-11-15

    Many industrial and defense applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  13. Attentional selection of location and modality in vision and touch modulates low-frequency activity in associated sensory cortices

    PubMed Central

    Kennett, Steffan; Driver, Jon

    2012-01-01

    Selective attention allows us to focus on particular sensory modalities and locations. Relatively little is known about how attention to a sensory modality may relate to selection of other features, such as spatial location, in terms of brain oscillations, although it has been proposed that low-frequency modulation (α- and β-bands) may be key. Here, we investigated how attention to space (left or right) and attention to modality (vision or touch) affect ongoing low-frequency oscillatory brain activity over human sensory cortex. Magnetoencephalography was recorded while participants performed a visual or tactile task. In different blocks, touch or vision was task-relevant, whereas spatial attention was cued to the left or right on each trial. Attending to one or other modality suppressed α-oscillations over the corresponding sensory cortex. Spatial attention led to reduced α-oscillations over both sensorimotor and occipital cortex contralateral to the attended location in the cue-target interval, when either modality was task-relevant. Even modality-selective sensors also showed spatial-attention effects for both modalities. The visual and sensorimotor results were generally highly convergent, yet, although attention effects in occipital cortex were dominant in the α-band, in sensorimotor cortex, these were also clearly present in the β-band. These results extend previous findings that spatial attention can operate in a multimodal fashion and indicate that attention to space and modality both rely on similar mechanisms that modulate low-frequency oscillations. PMID:22323628

  14. Opportunity at 'Cook Islands' (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11854 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11854

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,825th Martian day, or sol, of Opportunity's surface mission (March 12, 2009). North is at the top.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven half a meter (1.5 feet) earlier on Sol 1825 to fine-tune its location for placing its robotic arm onto an exposed patch of outcrop including a target area informally called 'Cook Islands.' On the preceding sol, Opportunity turned around to drive frontwards and then drove 4.5 meters (15 feet) toward this outcrop. The tracks from the Sol 1824 drive are visible near the center of this view at about the 11 o'clock position. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Opportunity had previously been driving backward as a strategy to redistribute lubrication in a wheel drawing more electrical current than usual.

    The outcrop exposure that includes 'Cook Islands' is visible just below the center of the image.

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  15. Phoenix Lander on Mars (Stereo)

    NASA Technical Reports Server (NTRS)

    2007-01-01

    NASA's Phoenix Mars Lander monitors the atmosphere overhead and reaches out to the soil below in this stereo illustration of the spacecraft fully deployed on the surface of Mars. The image appears three-dimensional when viewed through red-green stereo glasses.

    Phoenix has been assembled and tested for launch in August 2007 from Cape Canaveral Air Force Station, Fla., and for landing in May or June 2008 on an arctic plain of far-northern Mars. The mission responds to evidence returned from NASA's Mars Odyssey orbiter in 2002 indicating that most high-latitude areas on Mars have frozen water mixed with soil within arm's reach of the surface.

    Phoenix will use a robotic arm to dig down to the expected icy layer. It will analyze scooped-up samples of the soil and ice for factors that will help scientists evaluate whether the subsurface environment at the site ever was, or may still be, a favorable habitat for microbial life. The instruments on Phoenix will also gather information to advance understanding about the history of the water in the icy layer. A weather station on the lander will conduct the first study of Martian arctic weather from ground level.

    The vertical green line in this illustration shows how the weather station on Phoenix will use a laser beam from a lidar instrument to monitor dust and clouds in the atmosphere. The dark 'wings' to either side of the lander's main body are solar panels for providing electric power.

    The Phoenix mission is led by Principal Investigator Peter H. Smith of the University of Arizona, Tucson, with project management at NASA's Jet Propulsion Laboratory and development partnership with Lockheed Martin Space Systems, Denver. International contributions for Phoenix are provided by the Canadian Space Agency, the University of Neuchatel (Switzerland), the University of Copenhagen (Denmark), the Max Planck Institute (Germany) and the Finnish Meteorological Institute. JPL is a division of the California

  16. HRSC: High resolution stereo camera

    USGS Publications Warehouse

    Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W., III; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.

    2009-01-01

    The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and high-level data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.

  17. Tracking of Human Groups Using Subtraction Stereo

    NASA Astrophysics Data System (ADS)

    Hoshikawa, Yuma; Hashimoto, Yuki; Moro, Alessandro; Terabayashi, Kenji; Umeda, Kazunori

    In this paper, we propose a method for tracking groups of people using three-dimensional (3D) feature points obtained with the Kanade-Lucas-Tomasi (KLT) feature tracker and a stereo camera system called “Subtraction stereo”. The tracking system, which restricts the stereo matching algorithm to foreground regions obtained by background subtraction, is realized with a Kalman-filter-based tracker. The effectiveness of the proposed method is verified on 3D scenes of walking people, which are difficult to track.
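    The key idea above — restricting stereo matching to pixels flagged as foreground by background subtraction — can be sketched in a few lines. Below is a minimal NumPy illustration with a hypothetical SAD block matcher; the window size and threshold are assumptions, and the authors' full system (KLT features, Kalman tracking) is not reproduced here:

```python
import numpy as np

def foreground_mask(frame, background, thresh=25):
    # Background subtraction: flag pixels that differ from the background model.
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def subtraction_stereo(left, right, bg_left, max_disp=8, win=2, thresh=25):
    # Block-matching stereo (SAD cost) evaluated only at foreground pixels.
    h, w = left.shape
    mask = foreground_mask(left, bg_left, thresh)
    disp = np.zeros((h, w), dtype=int)
    L, R = left.astype(int), right.astype(int)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            if not mask[y, x]:
                continue  # skip background: the point of "subtraction stereo"
            patch = L[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(patch - R[y - win:y + win + 1,
                                      x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp, mask
```

    Restricting the search to the foreground both cuts the cost of matching and suppresses spurious background disparities before 3D feature points are handed to a tracker.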

  18. Parallel vision algorithms. Annual technical report No. 1, 1 October 1986-30 September 1987

    SciTech Connect

    Ibrahim, H.A.; Kender, J.R.; Brown, L.G.

    1987-10-01

    The objective of this project is to develop and implement, on highly parallel computers, vision algorithms that combine stereo, texture, and multi-resolution techniques for determining local surface orientation and depth. Such algorithms will immediately serve as front ends for autonomous land vehicle navigation systems. During the first year of the project, efforts concentrated on two fronts: first, developing and testing the parallel programming environment used to develop, implement, and test the parallel vision algorithms; second, developing and testing multi-resolution stereo and texture algorithms. This report describes the status and progress on these two fronts. The authors first describe the programming environment developed and a mapping scheme that allows efficient use of the Connection Machine for pyramid (multi-resolution) algorithms. Second, they present algorithms and test results for multi-resolution stereo and texture algorithms. Initial results from early efforts to integrate the stereo and texture algorithms are also presented.
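    The multi-resolution (pyramid) idea at the heart of such algorithms is easy to sketch: match at a coarse level over a small disparity range, then double and locally refine the estimate at each finer level. The 1D toy version below uses illustrative window sizes and search radii; it is not the Connection Machine implementation described in the report:

```python
import numpy as np

def downsample(sig):
    # One pyramid level: halve resolution by averaging adjacent pairs.
    n = len(sig) // 2 * 2
    return 0.5 * (sig[:n:2] + sig[1:n:2])

def match_1d(left, right, x, d0, radius, win=2):
    # SAD search for the disparity of pixel x, within d0 +/- radius.
    best_d, best_c = d0, np.inf
    for d in range(d0 - radius, d0 + radius + 1):
        lo, hi = x - win, x + win + 1
        if lo < 0 or hi > len(left) or lo - d < 0 or hi - d > len(right):
            continue
        c = np.abs(left[lo:hi] - right[lo - d:hi - d]).sum()
        if c < best_c:
            best_d, best_c = d, c
    return best_d

def coarse_to_fine(left, right, x, levels=2, radius=2):
    # Build the pyramid, match at the coarsest level, then refine downward:
    # each finer level doubles the coarse estimate and searches around it.
    pyr = [(left.astype(float), right.astype(float))]
    for _ in range(levels):
        pyr.append((downsample(pyr[-1][0]), downsample(pyr[-1][1])))
    d = 0
    for lvl in range(levels, -1, -1):
        l, r = pyr[lvl]
        d0 = d if lvl == levels else 2 * d
        d = match_1d(l, r, x >> lvl, d0, radius)
    return d
```

    The payoff is the one the report targets: a large disparity range is covered while every level only searches a constant-size neighborhood, which maps naturally onto pyramid-structured parallel hardware.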

  19. Computational vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1981-01-01

    The range of fundamental computational principles underlying human vision that equally apply to artificial and natural systems is surveyed. There emerges from research a view of the structuring of vision systems as a sequence of levels of representation, with the initial levels being primarily iconic (edges, regions, gradients) and the highest symbolic (surfaces, objects, scenes). Intermediate levels are constrained by information made available by preceding levels and information required by subsequent levels. In particular, it appears that physical and three-dimensional surface characteristics provide a critical transition from iconic to symbolic representations. A plausible vision system design incorporating these principles is outlined, and its key computational processes are elaborated.

  20. Dynamic Programming and Graph Algorithms in Computer Vision*

    PubMed Central

    Felzenszwalb, Pedro F.; Zabih, Ramin

    2013-01-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting, since by carefully exploiting problem structure they often provide non-trivial guarantees concerning solution quality. In this paper we briefly review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo; the mid-level problem of interactive object segmentation; and the high-level problem of model-based recognition. PMID:20660950
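    For the stereo problem the paper discusses, the classic dynamic-programming formulation optimizes, along each scanline, a sum of per-pixel matching costs plus a smoothness penalty on neighboring disparities. The following Viterbi-style sketch uses illustrative cost and smoothness choices and is not code from the paper:

```python
import numpy as np

def dp_scanline_stereo(left, right, max_disp=5, smooth=2.0, big=1e9):
    # Minimize sum_x |left[x] - right[x - d_x]| + smooth * |d_x - d_{x-1}|
    # over the disparity sequence d, by dynamic programming.
    n = len(left)
    D = max_disp + 1
    cost = np.full((n, D), big)          # unary matching cost
    for x in range(n):
        for d in range(min(x, max_disp) + 1):
            cost[x, d] = abs(float(left[x]) - float(right[x - d]))
    E = cost.copy()                      # forward pass (cumulative energies)
    back = np.zeros((n, D), dtype=int)   # backpointers
    for x in range(1, n):
        for d in range(D):
            trans = E[x - 1] + smooth * np.abs(np.arange(D) - d)
            back[x, d] = int(np.argmin(trans))
            E[x, d] = cost[x, d] + trans[back[x, d]]
    disp = np.zeros(n, dtype=int)        # backtrack the optimal path
    disp[-1] = int(np.argmin(E[-1]))
    for x in range(n - 1, 0, -1):
        disp[x - 1] = back[x, disp[x]]
    return disp
```

    This is the kind of non-trivial guarantee the paper emphasizes: within a single scanline, the returned disparity path is a global optimum of the stated energy, not a local one.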

  1. Development of Non-contact Respiratory Monitoring System for Newborn Using a FG Vision Sensor

    NASA Astrophysics Data System (ADS)

    Kurami, Yoshiyuki; Itoh, Yushi; Natori, Michiya; Ohzeki, Kazuo; Aoki, Yoshimitsu

    In recent years, advances in neonatal care have become increasingly important as the birth rate of low-birth-weight babies rises. Respiration in low-birth-weight babies is particularly unstable because their central nervous system and respiratory function are immature, so these infants often develop respiratory disorders. In a NICU (Neonatal Intensive Care Unit), neonatal respiration is monitored continuously with a cardio-respiratory monitor and a pulse oximeter. These contact-type sensors can measure the respiratory rate and SpO2 (Saturation of Peripheral Oxygen). However, because a contact-type sensor can damage the newborn's skin, contact-based monitoring of neonatal respiration is a real burden. We therefore developed a respiratory monitoring system for newborns using a FG (Fiber Grating) vision sensor. The FG vision sensor is an active stereo vision sensor capable of non-contact 3D measurement. A respiratory waveform is calculated by detecting the vertical motion of the thoracic and abdominal regions during respiration. We conducted a clinical experiment in the NICU and confirmed that the obtained respiratory waveform was highly accurate. Non-contact respiratory monitoring of newborns with a FG vision sensor thus enables a minimally invasive procedure.
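    Downstream of the 3D measurement, the respiratory rate can be read off the chest-motion waveform, for instance by counting upward zero crossings of the mean-removed signal. This is a toy sketch on synthetic data; the sampling rate, smoothing window, and noise level are assumptions, not values from the paper:

```python
import numpy as np

def respiratory_rate(waveform, fs, smooth_win=5):
    # Estimate breaths per minute: remove the mean, lightly smooth,
    # then count upward zero crossings (one per breathing cycle).
    x = np.asarray(waveform, dtype=float) - np.mean(waveform)
    x = np.convolve(x, np.ones(smooth_win) / smooth_win, mode="same")
    upward = np.sum((x[:-1] < 0) & (x[1:] >= 0))
    duration_min = len(x) / fs / 60.0
    return upward / duration_min
```

    On a synthetic 0.75 Hz chest-motion trace (45 breaths/min, a plausible neonatal rate) sampled at 30 Hz, the estimate lands close to 45 breaths per minute.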

  2. Aug 1 Solar Event From STEREO Ahead

    NASA Video Gallery

    Aug 1 CME - The Sun from the SECCHI instruments on the STEREO spacecraft leading the Earth in its orbit around the Sun. This video was taken in the He II 304A channels, which shows the extreme ultr...

  3. STEREO as a "Planetary Hazards" Mission

    NASA Technical Reports Server (NTRS)

    Guhathakurta, M.; Thompson, B. J.

    2014-01-01

    NASA's twin STEREO probes, launched in 2006, have advanced the art and science of space weather forecasting more than any other spacecraft or solar observatory. By surrounding the Sun, they provide previously-impossible early warnings of threats approaching Earth as they develop on the solar far side. They have also revealed the 3D shape and inner structure of CMEs-massive solar storms that can trigger geomagnetic storms when they collide with Earth. This improves the ability of forecasters to anticipate the timing and severity of such events. Moreover, the unique capability of STEREO to track CMEs in three dimensions allows forecasters to make predictions for other planets, giving rise to the possibility of interplanetary space weather forecasting too. STEREO is one of those rare missions for which "planetary hazards" refers to more than one world. The STEREO probes also hold promise for the study of comets and potentially hazardous asteroids.

  4. STEREO Witnesses Aug 1, 2010 Solar Event

    NASA Video Gallery

    These image sequences were taken by the twin STEREO spacecraft looking at the Sun from opposite sides. The bottom pair shows the Sun and its immediate surroundings. The top row shows events from th...

  5. Aug 1 Solar Event From STEREO Behind

    NASA Video Gallery

    Aug 1 CME - The Sun from the SECCHI instruments on the STEREO spacecraft trailing behind the Earth in its orbit around the Sun. This video was taken in the He II 304A channels, which shows the extr...

  6. STEREO Observations of Solar Energetic Particles

    NASA Technical Reports Server (NTRS)

    von Rosenvinge, Tycho; Christian, Eric; Cohen, Christina; Leske, Richard; Mewaldt, Richard; Stone, Edward; Wiedenbeck, Mark

    2011-01-01

    We report on observations of Solar Energetic Particle (SEP) events as observed by instruments on the STEREO Ahead and Behind spacecraft and on the ACE spacecraft. We will show observations of an electron event observed by the STEREO Ahead spacecraft on June 12, 2010 located at W74 essentially simultaneously with electrons seen at STEREO Behind at E70. Some similar events observed by Helios were ascribed to fast electron propagation in longitude close to the sun. We will look for independent verification of this possibility. We will also show observations of what appears to be a single proton event with very similar time-history profiles at both of the STEREO spacecraft at a similar wide separation. This is unexpected. We will attempt to understand all of these events in terms of corresponding CME and radio burst observations.

  7. Solar Coronal Cells as Seen by STEREO

    NASA Video Gallery

    The changes of a coronal cell region as solar rotation carries it across the solar disk as seen with NASA's STEREO-B spacecraft. The camera is fixed on the region (panning with it) and shows the pl...

  8. Detail-Preserving and Content-Aware Variational Multi-View Stereo Reconstruction.

    PubMed

    Li, Zhaoxin; Wang, Kuanquan; Zuo, Wangmeng; Meng, Deyu; Zhang, Lei

    2016-02-01

    Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view images is a fundamental yet active research area in computer vision. Despite the steady progress in multi-view stereo (MVS) reconstruction, many existing methods are still limited in recovering fine-scale details and sharp features while suppressing noise, and may fail in reconstructing regions with little texture. To address these limitations, this paper presents a detail-preserving and content-aware variational (DCV) MVS method, which reconstructs the 3D surface by alternating between reprojection error minimization and mesh denoising. In reprojection error minimization, we propose a novel inter-image similarity measure, which is effective in preserving fine-scale details of the reconstructed surface and builds a connection between guided image filtering and image registration. In mesh denoising, we propose a content-aware ℓp-minimization algorithm by adaptively estimating the p value and regularization parameters. Compared with conventional isotropic mesh smoothing approaches, the proposed method is much more promising in suppressing noise while preserving sharp features. Experimental results on benchmark data sets demonstrate that our DCV method is capable of recovering more surface details, and obtains cleaner and more accurate reconstructions than the state-of-the-art methods. In particular, our method achieves the best results among all published methods on the Middlebury dino ring and dino sparse data sets in terms of both completeness and accuracy. PMID:26672037

  9. Impact on stereo-acuity of two presbyopia correction approaches: monovision and small aperture inlay

    PubMed Central

    Fernández, Enrique J.; Schwarz, Christina; Prieto, Pedro M.; Manzanera, Silvestre; Artal, Pablo

    2013-01-01

    Some of the different currently applied approaches that correct presbyopia may reduce stereovision. In this work, stereo-acuity was measured for two methods: (1) monovision and (2) small aperture inlay in one eye. When performing the experiment, a prototype of a binocular adaptive optics vision analyzer was employed. The system allowed simultaneous measurement and manipulation of the optics in both eyes of a subject. The apparatus incorporated two programmable spatial light modulators: one phase-only device using liquid crystal on silicon technology for wavefront manipulation and one intensity modulator for controlling the exit pupils. The prototype was also equipped with a stimulus generator for creating retinal disparity based on two micro-displays. The three-needle test was programmed for characterizing stereo-acuity. Subjects underwent a two-alternative forced-choice test. The following cases were tested for the stimulus placed at distance: (a) natural vision; (b) 1.5 D monovision; (c) 0.75 D monovision; (d) natural vision and small pupil; (e) 0.75 D monovision and small pupil. In all cases the standard pupil diameter was 4 mm and the small pupil diameter was 1.6 mm. The use of a small aperture significantly reduced the negative impact of monovision on stereopsis. The results of the experiment suggest that combining micro-monovision with a small aperture, which is currently being implemented as a corneal inlay, can yield values of stereoacuity close to those attained under normal binocular vision. PMID:23761846

  10. Improving Vision

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Many people are familiar with the popular science fiction series Star Trek: The Next Generation, a show featuring a blind character named Geordi La Forge, whose visor-like glasses enable him to see. What many people do not know is that a product very similar to Geordi's glasses is available to assist people with vision conditions, and a NASA engineer's expertise contributed to its development. The JORDY(trademark) (Joint Optical Reflective Display) device, designed and manufactured by a privately-held medical device company known as Enhanced Vision, enables people with low vision to read, write, and watch television. Low vision, which includes macular degeneration, diabetic retinopathy, and glaucoma, describes eyesight that is 20/70 or worse, and cannot be fully corrected with conventional glasses.

  11. Vision problems

    MedlinePlus

    ... in dealing with eye emergencies if: You experience partial or complete blindness in one or both eyes, ... a family history of diabetes Eye itching or discharge Vision changes that seem related to medication (DO ...

  12. Vision Underwater.

    ERIC Educational Resources Information Center

    Levine, Joseph S.

    1980-01-01

    Provides information regarding underwater vision. Includes a discussion of optically important interfaces, increased eye size of organisms at greater depths, visual peculiarities regarding the habitat of the coastal environment, and various pigment visual systems. (CS)

  13. Laser gated viewing at ISL for vision through smoke, active polarimetry, and 3D imaging in NIR and SWIR wavelength bands

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Christnacher, Frank

    2013-12-01

    In this article, we want to give a review on the application of laser gated viewing for the improvement of vision cross-diffusing obstacles (smoke, turbid medium, …), the capturing of 3D scene information, or the study of material properties by polarimetric analysis at near-infrared (NIR) and shortwave-infrared (SWIR) wavelengths. Laser gated viewing has been studied since the 1960s as an active night vision method. Owing to enormous improvements in the development of compact and highly efficient laser sources and in the development of modern sensor technologies, the maturity of demonstrator systems rose during the past decades. Further, it was demonstrated that laser gated viewing has versatile sensing capabilities with application for long-range observation under certain degraded weather conditions, vision through obstacles and fog, active polarimetry, and 3D imaging.

  14. Preparing WIND for the STEREO Mission

    NASA Astrophysics Data System (ADS)

    Schroeder, P.; Ogilvie, K.; Szabo, A.; Lin, R.; Luhmann, J.

    2006-05-01

    The upcoming STEREO mission's IMPACT and PLASTIC investigations will provide the first opportunity for long duration, detailed observations of 1 AU magnetic field structures, plasma ions and electrons, suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment will make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICME) and solar wind structures to CMEs and coronal holes observed at the Sun. To fully exploit these unique data sets, tight integration with similarly equipped missions at L1 will be essential, particularly WIND and ACE. The STEREO mission is building novel data analysis tools to take advantage of the mission's scientific potential. These tools will require reliable access and a well-documented interface to the L1 data sets. Such an interface already exists for ACE through the ACE Science Center. We plan to provide a similar service for the WIND mission that will supplement existing CDAWeb services. Building on tools also being developed for STEREO, we will create a SOAP application program interface (API) which will allow both our STEREO/WIND/ACE interactive browser and third-party software to access WIND data as a seamless and integral part of the STEREO mission. The API will also allow for more advanced forms of data mining than currently available through other data web services. Access will be provided to WIND-specific data analysis software as well. The development of cross-spacecraft data analysis tools will allow a larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.

  15. STEREO In-situ Data Analysis

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Davis, A. J.; Russell, C. T.

    2007-05-01

    STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long duration, detailed observations of 1 AU magnetic field structures, plasma and suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The PLASTIC instrument takes plasma ion composition measurements completing STEREO's comprehensive in-situ perspective. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICME) and solar wind structures to CMEs and coronal holes observed at the Sun. The uniqueness of the STEREO mission requires novel data analysis tools and techniques to take advantage of the mission's full scientific potential. An interactive browser with the ability to create publication-quality plots has been developed which integrates STEREO's in-situ data with data from a variety of other missions including WIND and ACE. Static summary plots and a key-parameter type data set with a related online browser provide alternative data access. Finally, an application program interface (API) is provided allowing users to create custom software that ties directly into STEREO's data set. The API allows for more advanced forms of data mining than currently available through most web-based data services. A variety of data access techniques and the development of cross-spacecraft data analysis tools allow the larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.

  16. A stereo matching model observer for stereoscopic viewing of 3D medical images

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.; Muralidhar, Gautam S.

    2014-03-01

    Stereoscopic viewing of 3D medical imaging data has the potential to increase the detection of abnormalities. We present a new stereo model observer inspired by the characteristics of stereopsis in human vision. Given a stereo pair of images of an object (i.e., left and right images separated by a small displacement), the model observer first finds the corresponding points between the two views, and then fuses them together to create a 2D cyclopean view. Assuming that the cyclopean view has extracted most of the 3D information presented in the stereo pair, a channelized Hotelling observer (CHO) can be utilized to make decisions. We conduct a simulation study that attempts to mimic the detection of breast lesions on stereoscopic viewing of breast tomosynthesis projection images. We render voxel datasets that contain random 3D power-law noise to model normal breast tissues with various breast densities. 3D Gaussian signal is added to some of the datasets to model the presence of a breast lesion. By changing the separation angle between the two views, multiple stereo pairs of projection images are generated for each voxel dataset. The performance of the model is evaluated in terms of the accuracy of binary decisions on the presence of the simulated lesions.
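    The pipeline described above — fuse the two views into a cyclopean image, project onto channels, apply a Hotelling template — can be sketched roughly as follows. The Gaussian channels, the disparity-undoing fusion, and the synthetic Gaussian signal are simplified stand-ins (CHOs are often built on Laguerre-Gauss channels, and the paper uses tomosynthesis-like stimuli), so treat this as an assumption-laden illustration:

```python
import numpy as np

def cyclopean(left, right, disp):
    # Fuse a stereo pair into one "cyclopean" view by undoing the horizontal
    # disparity of the right image and averaging (crude correspondence fusion).
    return 0.5 * (left + np.roll(right, disp, axis=-1))

def gaussian_channels(size, widths=(2.0, 4.0, 8.0)):
    # A few radially symmetric Gaussian channels, one per column of U.
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x ** 2 + y ** 2
    U = np.stack([np.exp(-r2 / (2 * s * s)).ravel() for s in widths], axis=1)
    return U / np.linalg.norm(U, axis=0)

def cho_decision_values(signal_imgs, noise_imgs, U):
    # Channelized Hotelling observer: template w = S^{-1} (mean_s - mean_n)
    # in channel space; returns decision values for both image classes.
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ U
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(S, vs.mean(axis=0) - vn.mean(axis=0))
    return vs @ w, vn @ w
```

    Because averaging the fused views lowers the effective noise level, the CHO applied to the cyclopean image separates signal-present from signal-absent trials well above chance, which is the premise the model observer is built on.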

  17. Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps

    PubMed Central

    Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi

    2015-01-01

    Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in rich texture regions and object boundaries where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel’s scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation by treating information obtained from a depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the average error rate 3.27% of the previous state-of-the-art methods, our method provides an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other “fused” algorithms in the aspect of precision. PMID:26308003
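    A drastically simplified version of the complementary-characteristics idea — trust stereo where local texture is strong, and the depth sensor where it is weak — is a per-pixel hard switch on local gradient energy. This is a toy sketch only (window size and threshold are made up); the paper's actual model is the pseudo-two-layer formulation with segmentation as a soft constraint:

```python
import numpy as np

def box_mean(a, win):
    # Local mean via a shifted-sum box filter (edge-padded, same-size output).
    pad = win // 2
    ap = np.pad(a, pad, mode="edge")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (win * win)

def fuse_depths(stereo_depth, sensor_depth, image, win=3, tex_thresh=50.0):
    # Per-pixel fusion: stereo in textured regions (where matching is
    # reliable), depth sensor in flat regions (where stereo fails).
    gy, gx = np.gradient(image.astype(float))
    texture = box_mean(gx ** 2 + gy ** 2, win)
    return np.where(texture > tex_thresh, stereo_depth, sensor_depth)
```

    The paper replaces this hard switch with texture-constrained disparity ranges and a probabilistic model, but the selection principle is the same.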

  18. Non-linearity analysis of depth and angular indexes for optimal stereo SLAM.

    PubMed

    Bergasa, Luis M; Alcantarilla, Pablo F; Schleicher, David

    2010-01-01

    In this article, we present a real-time 6DoF egomotion estimation system for indoor environments using a wide-angle stereo camera as the only sensor. The stereo camera is carried in hand by a person walking at normal speed (3-5 km/h). We present the basis for a vision-based system that would assist the navigation of the visually impaired by either providing information about their current position and orientation or guiding them to their destination through different sensing modalities. Our sensor combines two different types of feature parametrization: inverse depth and 3D in order to provide orientation and depth information at the same time. Natural landmarks are extracted from the image and are stored as 3D or inverse depth points, depending on a depth threshold. This depth threshold is used for switching between both parametrizations and it is computed by means of a non-linearity analysis of the stereo sensor. Main steps of our system approach are presented as well as an analysis of the optimal way to calculate the depth threshold. At the moment each landmark is initialized, the normal of the patch surface is computed using the information of the stereo pair. In order to improve long-term tracking, a patch warping is done considering the normal vector information. Some experimental results under indoor environments and conclusions are presented. PMID:22319348
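    The depth-threshold switching can be motivated by the first-order error model of a stereo rig: with depth z = f·b/d (focal length f, baseline b, disparity d), a disparity error σ_d propagates to σ_z ≈ z²·σ_d/(f·b), so depth error grows quadratically with distance and far landmarks are better kept in inverse-depth form. A sketch with illustrative camera values (f, b, σ_d, and the threshold are assumptions, not the paper's calibration):

```python
import numpy as np

def stereo_depth_sigma(z, f, b, sigma_d=0.5):
    # First-order propagation of disparity noise: z = f*b/d implies
    # sigma_z ~= z**2 * sigma_d / (f * b).
    return z * z * sigma_d / (f * b)

def parametrize(u, v, z, f, b, z_threshold):
    # Store nearby landmarks as 3D points and distant ones as inverse depth,
    # mirroring the depth-threshold switching described in the abstract.
    if z < z_threshold:
        return ("xyz", ((u * z) / f, (v * z) / f, z))
    return ("inverse_depth", (u / f, v / f, 1.0 / z))
```

    The quadratic growth of σ_z is exactly the non-linearity the authors analyze: beyond some depth, the Gaussian assumption on a 3D point breaks down, while the inverse-depth coordinate 1/z remains nearly linear in the measured disparity.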

  19. Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps.

    PubMed

    Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi

    2015-01-01

    Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in rich texture regions and object boundaries where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel's scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation by treating information obtained from a depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the average error rate 3.27% of the previous state-of-the-art methods, our method provides an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other "fused" algorithms in the aspect of precision. PMID:26308003

  20. Transform coding of stereo image residuals.

    PubMed

    Moellenhoff, M S; Maier, M W

    1998-01-01

    Stereo image compression is of growing interest because of new display technologies and the needs of telepresence systems. Compared to monoscopic image compression, stereo image compression has received much less attention. A variety of algorithms have appeared in the literature that make use of the cross-view redundancy in the stereo pair. Many of these use the framework of disparity-compensated residual coding, but concentrate on the disparity compensation process rather than the post-compensation coding process. This paper studies specialized coding methods for the residual image produced by disparity compensation. The algorithms make use of theoretically expected and experimentally observed characteristics of the disparity-compensated stereo residual to select transforms and quantization methods. Performance is evaluated on mean squared error (MSE) and a stereo-unique metric based on image registration. Exploiting the directional characteristics in a discrete cosine transform (DCT) framework provides its best performance below 0.75 b/pixel for 8-b gray-scale imagery and below 2 b/pixel for 24-b color imagery. In the wavelet algorithm, roughly a 50% reduction in bit rate is possible by encoding only the vertical channel, where much of the stereo information is contained. The proposed algorithms do not incur substantial computational burden beyond that needed for any disparity-compensated residual algorithm. PMID:18276294
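    Disparity-compensated residual coding reduces to: warp one view by the disparity field, subtract to obtain the residual, then transform-code the residual. A generic orthonormal 8×8 DCT with a crude keep-largest-coefficients quantizer illustrates the transform stage; this is a textbook sketch, not the paper's directionally tuned coder:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis as an n x n matrix (rows = frequencies).
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def code_block(residual, keep=8):
    # 2D-DCT an 8x8 disparity-compensated residual block, keep only the
    # 'keep' largest-magnitude coefficients (crude quantization), invert.
    C = dct_matrix(residual.shape[0])
    coeffs = C @ residual @ C.T
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    return C.T @ coeffs @ C
```

    The point of energy compaction is visible on structured residuals: a smooth vertical gradient, for example, excites only a handful of coefficients in the first DCT column, so keeping a few coefficients reconstructs the block essentially losslessly.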

  1. Defining the V5/MT neuronal pool for perceptual decisions in a visual stereo-motion task.

    PubMed

    Krug, Kristine; Curnow, Tamara L; Parker, Andrew J

    2016-06-19

    In the primate visual cortex, neurons signal differences in the appearance of objects with high precision. However, not all activated neurons contribute directly to perception. We defined the perceptual pool in extrastriate visual area V5/MT for a stereo-motion task, based on trial-by-trial co-variation between perceptual decisions and neuronal firing (choice probability (CP)). Macaque monkeys were trained to discriminate the direction of rotation of a cylinder, using the binocular depth between the moving dots that form its front and rear surfaces. We manipulated the activity of single neurons trial-to-trial by introducing task-irrelevant stimulus changes: dot motion in cylinders was aligned with neuronal preference on only half the trials, so that neurons were strongly activated with high firing rates on some trials and considerably less activated on others. We show that single neurons maintain high neurometric sensitivity for binocular depth in the face of substantial changes in firing rate. CP was correlated with neurometric sensitivity, not level of activation. In contrast, for individual neurons, the correlation between perceptual choice and neuronal activity may be fundamentally different when responding to different stimulus versions. Therefore, neuronal pools supporting sensory discrimination must be structured flexibly and independently for each stimulus configuration to be discriminated. This article is part of the themed issue 'Vision in our three-dimensional world'. PMID:27269603
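    Operationally, choice probability is the area under the ROC curve comparing a neuron's firing-rate distributions grouped by the animal's choice — equivalently, a normalized Mann-Whitney U statistic — with CP = 0.5 meaning firing carries no information about the upcoming choice. A minimal sketch of the computation (not the authors' analysis code):

```python
import numpy as np

def choice_probability(rates_pref, rates_null):
    # CP as ROC area: the fraction of (pref-choice, null-choice) trial pairs
    # in which the pref-choice firing rate is higher, counting ties as half.
    pref = np.asarray(rates_pref, dtype=float)
    null = np.asarray(rates_null, dtype=float)
    greater = (pref[:, None] > null[None, :]).sum()
    ties = (pref[:, None] == null[None, :]).sum()
    return (greater + 0.5 * ties) / (pref.size * null.size)
```

    Because it is rank-based, CP is insensitive to overall firing-rate level — which is what allows the study's key comparison between activation level and neurometric sensitivity.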

  2. Defining the V5/MT neuronal pool for perceptual decisions in a visual stereo-motion task

    PubMed Central

    2016-01-01

    In the primate visual cortex, neurons signal differences in the appearance of objects with high precision. However, not all activated neurons contribute directly to perception. We defined the perceptual pool in extrastriate visual area V5/MT for a stereo-motion task, based on trial-by-trial co-variation between perceptual decisions and neuronal firing (choice probability (CP)). Macaque monkeys were trained to discriminate the direction of rotation of a cylinder, using the binocular depth between the moving dots that form its front and rear surfaces. We manipulated the activity of single neurons trial-to-trial by introducing task-irrelevant stimulus changes: dot motion in cylinders was aligned with neuronal preference on only half the trials, so that neurons were strongly activated with high firing rates on some trials and considerably less activated on others. We show that single neurons maintain high neurometric sensitivity for binocular depth in the face of substantial changes in firing rate. CP was correlated with neurometric sensitivity, not level of activation. In contrast, for individual neurons, the correlation between perceptual choice and neuronal activity may be fundamentally different when responding to different stimulus versions. Therefore, neuronal pools supporting sensory discrimination must be structured flexibly and independently for each stimulus configuration to be discriminated. This article is part of the themed issue ‘Vision in our three-dimensional world'. PMID:27269603

  3. The Hyperspectral Stereo Camera Project

    NASA Astrophysics Data System (ADS)

    Griffiths, A. D.; Coates, A. J.

    2006-12-01

    The MSSL Hyperspectral Stereo Camera (HSC) is developed from Beagle2 stereo camera heritage. Replacing filter wheels with liquid crystal tuneable filters (LCTF) turns each eye into a compact hyperspectral imager. Hyperspectral imaging is defined here as acquiring 10s-100s of images in 10-20 nm spectral bands. Combined, these bands form an image `cube' (with wavelength as the third dimension) allowing a detailed spectrum to be extracted at any pixel position. A LCTF is conceptually similar to the Fabry-Perot tuneable filter design but instead of physical separation, the variable refractive index of the liquid crystal etalons is used to define the wavelength of interest. For 10 nm bandwidths, LCTFs are available covering the 400-720 nm and 650-1100 nm ranges. The resulting benefits include reduced imager mechanical complexity, no limitation on the number of filter wavelengths available and the ability to change the wavelengths of interest in response to new findings as the mission proceeds. LCTFs are currently commercially available from two US companies - Scientific Solutions Inc. and Cambridge Research Inc. (CRI). CRI distributes the `Varispec' LCTFs used in the HSC. Currently, Earth-orbiting hyperspectral imagers can prospect for minerals, detect camouflaged military equipment and determine the species and state of health of crops. Therefore, we believe this instrument shows great promise for a wide range of investigations in the planetary science domain (below). MSSL will integrate the HSC development model and test it at representative Martian temperatures (to determine the power required to prevent the liquid crystals freezing). Additionally, a full radiometric calibration is required to determine the HSC sensitivity. The second phase of the project is to demonstrate (in a ground based lab) the benefit of much higher spectral resolution to the following Martian scientific investigations: - Determination of the mineralogy of rocks and soil - Detection of

  4. Hybrid Image-Plane/Stereo Manipulation

    NASA Technical Reports Server (NTRS)

    Baumgartner, Eric; Robinson, Matthew

    2004-01-01

    Hybrid Image-Plane/Stereo (HIPS) manipulation is a method of processing image data, and of controlling a robotic manipulator arm in response to the data, that enables the manipulator arm to place an end-effector (an instrument or tool) precisely with respect to a target (see figure). Unlike other stereoscopic machine-vision-based methods of controlling robots, this method is robust in the face of calibration errors and changes in calibration during operation. In this method, a stereoscopic pair of cameras on the robot first acquires images of the manipulator at a set of predefined poses. The image data are processed to obtain image-plane coordinates of known visible features of the end-effector. Next, an initial calibration is computed in the form of a mapping between (1) the image-plane coordinates and (2) the nominal three-dimensional coordinates of the noted end-effector features in a reference frame fixed to the main robot body at the base of the manipulator. The nominal three-dimensional coordinates are obtained by use of the nominal forward kinematics of the manipulator arm, that is, calculated by use of the currently measured manipulator joint angles and previously measured lengths of manipulator arm segments under the assumption that the arm segments are rigid, that the arm lengths are constant, and that there is no backlash. It is understood from the outset that these nominal three-dimensional coordinates are likely to contain possibly significant calibration errors, but the effects of the errors are progressively reduced, as described next. As the end-effector is moved toward the target, the calibration is updated repeatedly by use of data from newly acquired images of the end-effector and of the corresponding nominal coordinates in the manipulator reference frame. 
By use of the updated calibration, the coordinates of the target are computed in manipulator-reference-frame coordinates and then used to compute the necessary manipulator joint angles to position

  5. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

    We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, SAB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.

  6. STEREO interplanetary shocks and foreshocks

    SciTech Connect

    Blanco-Cano, X.; Kajdic, P.; Aguilar-Rodriguez, E.; Russell, C. T.; Jian, L. K.; Luhmann, J. G.

    2013-06-13

    We use STEREO data to study shocks driven by stream interactions and the waves associated with them. During the years of the extended solar minimum 2007-2010, stream interaction shocks have Mach numbers between 1.1 and 3.8 and θBn ≈ 20°-86°. We find a variety of waves, including whistlers and low frequency fluctuations. Upstream whistler waves may be generated at the shock and upstream ultra low frequency (ULF) waves can be driven locally by ion instabilities. The downstream wave spectra can be formed by both locally generated perturbations and shock-transmitted waves. We find that many quasi-perpendicular shocks can be accompanied by ULF wave and ion foreshocks, which is in contrast to Earth's bow shock. Fluctuations downstream of quasi-parallel shocks tend to have larger amplitudes than waves downstream of quasi-perpendicular shocks. Proton foreshocks of shocks driven by stream interactions have extensions dr ≤ 0.05 AU. This is smaller than foreshock extensions for ICME driven shocks. The difference in foreshock extensions is related to the fact that ICME driven shocks are formed closer to the Sun and therefore begin to accelerate particles very early in their existence, while stream interaction shocks form at ≈1 AU and have been producing suprathermal particles for a shorter time.

  7. The STEREO Mission: A New Approach to Space Weather Research

    NASA Technical Reports Server (NTRS)

    Kaiser, Michael L.

    2006-01-01

    With the launch of the twin STEREO spacecraft in July 2006, a new capability will exist for both real-time space weather predictions and for advances in space weather research. Whereas previous spacecraft monitors of the sun such as ACE and SOHO have been essentially on the sun-Earth line, the STEREO spacecraft will be in 1 AU orbits around the sun on either side of Earth and will be viewing the solar activity from distinctly different vantage points. As seen from the sun, the two spacecraft will separate at a rate of 45 degrees per year, with Earth bisecting the angle. The instrument complement on the two spacecraft will consist of a package of optical instruments capable of imaging the sun in the visible and ultraviolet from essentially the surface to 1 AU and beyond, a radio burst receiver capable of tracking solar eruptive events from an altitude of 2-3 Rs to 1 AU, and a comprehensive set of fields and particles instruments capable of measuring in situ solar events such as interplanetary magnetic clouds. In addition to normal daily recorded data transmissions, each spacecraft is equipped with a real-time beacon that will provide 1 to 5 minute snapshots or averages of the data from the various instruments. This beacon data will be received by NOAA and NASA tracking stations and then relayed to the STEREO Science Center located at Goddard Space Flight Center in Maryland where the data will be processed and made available within a goal of 5 minutes of receipt on the ground. With STEREO's instrumentation and unique view geometry, we believe considerable improvement can be made in space weather prediction capability as well as improved understanding of the three dimensional structure of solar transient events.

  8. [Evaluation of condition and factors affecting activity effectiveness and visual performance of pilots who use night vision goggles during the helicopter flights].

    PubMed

    Aleksandrov, A S; Davydov, V V; Lapa, V V; Minakov, A A; Sukhanov, V V; Chistov, S D

    2014-07-01

    Based on an analysis of questionnaires, the authors identified factors that affect the activity effectiveness and visual performance of pilots who use night vision goggles during helicopter flights: the difficulty of flight tasks, flying conditions, and attitude illusions. The authors suggest possible ways to reduce the impact of these factors. PMID:25286586

  9. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering, the intrinsic parameters remain unchanged after calibrating the cameras, but the positional relationship between the cameras can change because of vibration, knocks and pressures in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes disabled. A technology including both real-time examination and on-line recalibration for the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of cameras can be obtained by factorization of the fundamental matrix. Thus, it offers a method to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated via a number of random matched points. The process is: (i) estimating the fundamental matrix via the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. Traditional methods for computing the fundamental matrix are sensitive to noise and cannot guarantee estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regional weighted normalization algorithm to improve accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data prove that the method improves estimation
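Steps (ii) and (iii) of the recovery above can be sketched in plain NumPy, assuming known intrinsics K1, K2 (function and variable names are ours, not the paper's): form the essential matrix from the fundamental matrix, then decompose it by SVD into the two candidate rotations and the translation direction.

```python
import numpy as np

def external_params_from_F(F, K1, K2):
    """Recover the rotation candidates and translation direction from a
    fundamental matrix, given the 3x3 intrinsic matrices K1, K2.
    Illustrative sketch of the standard factorization."""
    # (ii) Essential matrix from the fundamental matrix.
    E = K2.T @ F @ K1
    # (iii) Decompose E = U diag(s, s, 0) V^T into R and t (up to scale).
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (determinant +1).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]  # translation direction; absolute scale is unrecoverable
    return (R1, R2), t
```

In practice the correct (R, t) pair among the four candidates (two rotations, two translation signs) is selected by triangulating a point and requiring positive depth in both cameras.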

  10. Depth Imaging by Combining Time-of-Flight and On-Demand Stereo

    NASA Astrophysics Data System (ADS)

    Hahne, Uwe; Alexa, Marc

    In this paper we present a framework for computing depth images at interactive rates. Our approach is based on combining time-of-flight (TOF) range data with stereo vision. We use a per-frame confidence map extracted from the TOF sensor data in two ways for improving the disparity estimation in the stereo part: first, together with the TOF range data for initializing and constraining the disparity range; and, second, together with the color image information for segmenting the data into depth continuous areas, enabling the use of adaptive windows for the disparity search. The resulting depth images are more accurate than from either of the sensors. In an example application we use the depth map to initialize the z-buffer so that virtual objects can be occluded by real objects in an augmented reality scenario.
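The confidence-guided narrowing of the disparity search can be illustrated with a toy pixel-wise matcher. Everything below (interface, confidence threshold, margins) is our own illustration of the general idea, not the authors' implementation, which uses window-based aggregation and segmentation:

```python
import numpy as np

def constrained_disparity(left, right, tof_disp, tof_conf,
                          d_max=64, margin_hi=2, margin_lo=16):
    """Pixel-wise matching where the disparity search range is narrowed
    around a TOF-derived estimate when TOF confidence is high, and
    widened when it is low. Toy sketch with absolute-difference cost."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            # High TOF confidence -> tight window; low -> wide window.
            m = margin_hi if tof_conf[y, x] > 0.5 else margin_lo
            lo = max(0, int(tof_disp[y, x]) - m)
            hi = min(min(d_max, x), int(tof_disp[y, x]) + m)
            best, best_cost = lo, np.inf
            for d in range(lo, hi + 1):
                cost = abs(float(left[y, x]) - float(right[y, x - d]))
                if cost < best_cost:
                    best, best_cost = d, cost
            disp[y, x] = best
    return disp
```

The payoff of the constraint is twofold: fewer candidate disparities to evaluate per pixel, and fewer chances for a spurious match outside the TOF-plausible range.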

  11. 3-dimensional measurement of cable configuration based on feature tracking motion stereo

    NASA Astrophysics Data System (ADS)

    Domae, Yukiyasu; Okuda, Haruhisa; Takauji, Hidenori; Kaneko, Shun'ichi; Tanaka, Takayuki

    2007-10-01

    We propose a novel three-dimensional measurement approach for flexible cables in factory automation applications, such as cable handling and connector insertion without conflicts with cables, using robotic arms. The approach is based on motion stereo with a vision sensor. Laser slit beams are irradiated to make landmarks on the cables, which solves the stereo correspondence problem efficiently. These landmark points, together with interpolated points having rich texture, are tracked in an image sequence and reconstructed as the cable shape. For stable feature point tracking, a robust texture matching method, Orientation Code Matching, and a tracking stability analysis are applied. In our experiments, arch-like cables have been reconstructed with an uncertainty of 1.5% by this method.

  12. Vision Problems: How Teachers Can Help.

    ERIC Educational Resources Information Center

    Desrochers, Joyce

    1999-01-01

    Describes common vision problems in young children such as myopia, strabismus, and amblyopia. Presents suggestions for helping children with vision problems in the early childhood classroom and in outdoor activities. Lists related resources and children's books. (KB)

  13. Machine vision is not computer vision

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Charlier, Jean-Ray

    1998-10-01

    The identity of Machine Vision as an academic and practical subject of study is asserted. In particular, the distinction between Machine Vision on the one hand and Computer Vision, Digital Image Processing, Pattern Recognition and Artificial Intelligence on the other is emphasized. The article demonstrates through four case studies that the active involvement of a person who is sensitive to the broad aspects of vision system design can avoid disaster and can often achieve a successful machine that would not otherwise have been possible. This article is a transcript of the keynote address presented at the conference. Since the proceedings are prepared and printed before the conference, it is not possible to include a record of the response to this paper made by the delegates during the round-table discussion. It is hoped to collate and disseminate these via the World Wide Web after the event. (A link will be provided at http://bruce.cs.cf.ac.uk/bruce/index.html.).

  14. Community Vision and Interagency Alignment: A Community Planning Process to Promote Active Transportation.

    PubMed

    DeGregory, Sarah Timmins; Chaudhury, Nupur; Kennedy, Patrick; Noyes, Philip; Maybank, Aletha

    2016-04-01

    In 2010, the Brooklyn Active Transportation Community Planning Initiative launched in 2 New York City neighborhoods. Over a 2-year planning period, residents participated in surveys, school and community forums, neighborhood street assessments, and activation events, activities that highlighted the need for safer streets locally. Consensus among residents and key multisectoral stakeholders, including city agencies and community-based organizations, was garnered in support of a planned expansion of bicycling infrastructure. The process of building on community assets and applying a collective impact approach yielded changes in the built environment, attracted new partners and resources, and helped to restore a sense of power among residents. PMID:26959270

  15. The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map

    NASA Astrophysics Data System (ADS)

    Rosselot, Donald; Hall, Ernest L.

    2005-10-01

    This paper presents a novel, simple and fast algorithm to produce a "floor plan" obstacle map in real time using video. The XH-map algorithm is a transformation of stereo vision data in disparity map space into a two-dimensional obstacle map space using a method that can be likened to a histogram reduction of image information. The classic floor-ground background noise problem is addressed with a simple one-time semi-automatic calibration method incorporated into the algorithm. This implementation utilizes the Intel Performance Primitives and OpenCV libraries for extremely fast and efficient execution, creating a scaled obstacle map from a 480x640x256 stereo pair in 1.4 milliseconds. This algorithm has many applications in robotics and computer vision, including enabling an intelligent robot to "see" for path planning and obstacle avoidance.
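The histogram-reduction idea can be sketched as follows. This is a simplified reading of the transformation, not the authors' IPP/OpenCV implementation, and the paper's semi-automatic floor calibration is reduced here to simply ignoring the zero-disparity bin:

```python
import numpy as np

def xh_map(disparity, d_max=64, threshold=20):
    """Histogram-style reduction of a disparity map into a 2D obstacle
    map. Output rows index disparity (inverse depth) and columns index
    image column; a cell is set when enough pixels in that image column
    share that disparity, the signature of a vertical obstacle."""
    h, w = disparity.shape
    omap = np.zeros((d_max, w), dtype=bool)
    for x in range(w):
        counts = np.bincount(disparity[:, x].clip(0, d_max - 1),
                             minlength=d_max)
        omap[:, x] = counts >= threshold
    # Stand-in for the paper's floor/background calibration: drop the
    # zero-disparity (far background) bin.
    omap[0, :] = False
    return omap
```

Because each output cell is just a thresholded bin count, the whole reduction is a single pass over the disparity image, which is consistent with the millisecond-scale timing reported in the abstract.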

  16. Robot vision.

    NASA Technical Reports Server (NTRS)

    Sutro, L. L.; Lerman, J. B.

    1973-01-01

    The work reported had been undertaken to develop hardware that would guide an automatic vehicle in the exploration of the surface of Mars, recognize objects, and report the findings to earth. Approaches for automatically determining the range and shape of objects are discussed, giving attention to the automatic comparison of one scan line of the left with one scan line of the right TV image. Methods of mapping binocular space into match space are considered together with the use of a model in match space, rules for processing random-dot stereograms, the elimination of spurious matches, random-square stereograms, questions of the processing of a real scene, and aspects of range accuracy. Methods of computation for extracting other features are also discussed along with stereo TV cameras and approaches for reconstructing the appearance of a scene.

  17. Efficient hybrid tree-based stereo matching with applications to postcapture image refocusing.

    PubMed

    Vu, Dung T; Chidester, Benjamin; Yang, Hongsheng; Do, Minh N; Lu, Jiangbo

    2014-08-01

    Estimating dense correspondence or depth information from a pair of stereoscopic images is a fundamental problem in computer vision, which finds a range of important applications. Despite intensive past research efforts in this topic, it still remains challenging to recover the depth information both reliably and efficiently, especially when the input images contain weakly textured regions or are captured under uncontrolled, real-life conditions. Striking a desired balance between computational efficiency and estimation quality, a hybrid minimum spanning tree-based stereo matching method is proposed in this paper. Our method performs efficient nonlocal cost aggregation at pixel-level and region-level, and then adaptively fuses the resulting costs together to leverage their respective strength in handling large textureless regions and fine depth discontinuities. Experiments on the standard Middlebury stereo benchmark show that the proposed stereo method outperforms all prior local and nonlocal aggregation-based methods, achieving particularly noticeable improvements for low texture regions. To further demonstrate the effectiveness of the proposed stereo method, also motivated by the increasing desire to generate expressive depth-induced photo effects, this paper next addresses the emerging application of interactive depth-of-field rendering given a real-world stereo image pair. To this end, we propose an accurate thin-lens model for synthetic depth-of-field rendering, which considers the user-stroke placement and camera-specific parameters and performs the pixel-adapted Gaussian blurring in a principled way. Taking ~1.5 s to process a pair of 640×360 images in the off-line step, our system named Scribble2focus allows users to interactively select in-focus regions by simple strokes using the touch screen and returns the synthetically refocused images instantly to the user. PMID:24919201
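The per-pixel blur in such a thin-lens model is driven by the circle of confusion. The sketch below uses the textbook thin-lens relation with our own parameter names; the paper's actual model additionally accounts for user-stroke placement and camera-specific parameters:

```python
import numpy as np

def coc_diameter(z, z_focus, f, aperture):
    """Circle-of-confusion diameter under a thin-lens model (standard
    optics relation, not the paper's exact formulation). z and z_focus
    are scene depths, f the focal length, aperture the lens diameter,
    all in the same units. Objects at z_focus are perfectly sharp."""
    return aperture * f * np.abs(z - z_focus) / (z * (z_focus - f))
```

A depth-of-field renderer would then blur each pixel with a Gaussian whose standard deviation is proportional to this diameter, so in-focus regions are untouched and blur grows away from the focal plane.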

  18. Presidential Visions.

    ERIC Educational Resources Information Center

    Gallin, Alice, Ed.

    1992-01-01

    This journal issue is devoted to the theme of university presidents and their visions of the future. It presents the inaugural addresses and speeches of 16 Catholic college and university presidents focusing on their goals, ambitions, and reasons for choosing to become higher education leaders at this particular time in the history of education in…

  19. Visions 2001.

    ERIC Educational Resources Information Center

    Rivero, Victor; Norman, Michele

    2001-01-01

    Reports on the views of 18 educational leaders regarding their vision on the future of education in an information age. Topics include people's diverse needs; relationships between morality, ethics, values, and technology; leadership; parental involvement; online courses from multiple higher education institutions; teachers' role; technology…

  20. Training Visions

    ERIC Educational Resources Information Center

    Training, 2011

    2011-01-01

    In this article, "Training" asks the 2011 winners to give their predictions for what training--either in general or specifically at their companies--will look like in the next five to 10 years. Perhaps their "training visions" will spark some ideas in one's organization--or at least help prepare for what might be coming in the next decade or so.

  1. Agrarian Visions.

    ERIC Educational Resources Information Center

    Theobald, Paul

    A new feature in "Country Teacher,""Agrarian Visions" reminds rural teachers that they can do something about rural decline. Like the populism of the 1890s, the "new populism" advocates rural living. Current attempts to address rural decline are contrary to agrarianism because: (1) telecommunications experts seek to solve problems of rural…

  2. Single-neuron activity and eye movements during human REM sleep and awake vision.

    PubMed

    Andrillon, Thomas; Nir, Yuval; Cirelli, Chiara; Tononi, Giulio; Fried, Itzhak

    2015-01-01

    Are rapid eye movements (REMs) in sleep associated with visual-like activity, as during wakefulness? Here we examine single-unit activities (n=2,057) and intracranial electroencephalography across the human medial temporal lobe (MTL) and neocortex during sleep and wakefulness, and during visual stimulation with fixation. During sleep and wakefulness, REM onsets are associated with distinct intracranial potentials, reminiscent of ponto-geniculate-occipital waves. Individual neurons, especially in the MTL, exhibit reduced firing rates before REMs as well as transient increases in firing rate immediately after, similar to activity patterns observed upon image presentation during fixation without eye movements. Moreover, the selectivity of individual units is correlated with their response latency, such that units activated after a small number of images or REMs exhibit delayed increases in firing rates. Finally, the phase of theta oscillations is similarly reset following REMs in sleep and wakefulness, and after controlled visual stimulation. Our results suggest that REMs during sleep rearrange discrete epochs of visual-like processing as during wakefulness. PMID:26262924

  3. Single-neuron activity and eye movements during human REM sleep and awake vision

    PubMed Central

    Andrillon, Thomas; Nir, Yuval; Cirelli, Chiara; Tononi, Giulio; Fried, Itzhak

    2015-01-01

    Are rapid eye movements (REMs) in sleep associated with visual-like activity, as during wakefulness? Here we examine single-unit activities (n=2,057) and intracranial electroencephalography across the human medial temporal lobe (MTL) and neocortex during sleep and wakefulness, and during visual stimulation with fixation. During sleep and wakefulness, REM onsets are associated with distinct intracranial potentials, reminiscent of ponto-geniculate-occipital waves. Individual neurons, especially in the MTL, exhibit reduced firing rates before REMs as well as transient increases in firing rate immediately after, similar to activity patterns observed upon image presentation during fixation without eye movements. Moreover, the selectivity of individual units is correlated with their response latency, such that units activated after a small number of images or REMs exhibit delayed increases in firing rates. Finally, the phase of theta oscillations is similarly reset following REMs in sleep and wakefulness, and after controlled visual stimulation. Our results suggest that REMs during sleep rearrange discrete epochs of visual-like processing as during wakefulness. PMID:26262924

  4. Vision Loss, Sudden

    MedlinePlus

    ... of age-related macular degeneration. Spotlight on Aging: Vision Loss in Older People Most commonly, vision loss ... Some Causes and Features of Sudden Loss of Vision Cause Common Features* Tests Sudden loss of vision ...

  5. Blindness and vision loss

    MedlinePlus

    ... eye ( chemical burns or sports injuries) Diabetes Glaucoma Macular degeneration The type of partial vision loss may differ, ... tunnel vision and missing areas of vision With macular degeneration, the side vision is normal but the central ...

  6. Three-dimensional display: stereo and beyond

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Allen, Daniel J.

    2008-03-01

    With the advent of large, high-quality stereo display monitors and high-volume 3-D image acquisition sources, it is time to revisit the use of 3-D display for diagnostic radiology. Stereo displays may be goggled or goggleless. Goggleless displays are called autostereographic displays. We concentrate on autostereographic technologies. Commercial LCD flat-screen 3-D autostereographic monitors typically rely on one of two techniques: blocked perspective and integral display. On the acquisition modality side: MRI, CT and 3-D ultrasound provide 3-D data sets. However, helical/spiral CT with multi-row detectors and multiple x-ray sources provides a monsoon of data. Presenting and analyzing this large amount of potentially dynamic data will require advanced presentation techniques. We begin with a very brief review of the two stereo-display technologies. These displays are evolving beyond presentation of the traditional pair of views directed to fixed positions of the eyes to multi-perspective displays; at differing head positions, the eyes are presented with the proper perspective pairs corresponding to viewing a 3-D object from that position. In addition, we will look at some of the recent developments in computer-generated holograms, or CGHs. CGH technology differs from the other two technologies in that it provides a wave-optically correct reproduction of the object. We then move to examples of stereo-displayed medical images and examine some of the potential strengths and weaknesses of the displays. We have installed a commercial stereo-display in our laboratory and are in the process of generating stereo-pairs of CT data. We are examining, in particular, preprocessing of the perspective data.

  7. Integrating National Space Visions

    NASA Technical Reports Server (NTRS)

    Sherwood, Brent

    2006-01-01

    This paper examines value proposition assumptions for various models nations may use to justify, shape, and guide their space programs. Nations organize major societal investments like space programs to actualize national visions represented by leaders as investments in the public good. The paper defines nine 'vision drivers' that circumscribe the motivations evidently underpinning national space programs. It then describes 19 fundamental space activity objectives (eight extant and eleven prospective) that nations already do or could in the future use to actualize the visions they select. Finally the paper presents four contrasting models of engagement among nations, and compares these models to assess realistic bounds on the pace of human progress in space over the coming decades. The conclusion is that orthogonal engagement, albeit unlikely because it is unprecedented, would yield the most robust and rapid global progress.

  8. Flexible dynamic measurement method of three-dimensional surface profilometry based on multiple vision sensors.

    PubMed

    Liu, Zhen; Li, Xiaojing; Li, Fengjiao; Zhang, Guangjun

    2015-01-12

    A single vision sensor cannot measure an entire object because of its limited field of view. Meanwhile, multiple rigidly fixed vision sensors for the dynamic vision measurement of three-dimensional (3D) surface profilometry are complex and sensitive to strong environmental vibrations. To overcome these problems, a novel flexible dynamic measurement method for 3D surface profilometry based on multiple vision sensors is presented in this paper. A raster binocular stereo vision sensor is combined with a wide-field camera to produce a 3D optical probe. Multiple 3D optical probes are arranged around the object being measured, then many planar targets are set up. These planar targets function as the mediator to integrate the local 3D data measured by the raster binocular stereo vision sensors into a common coordinate system. The proposed method is not sensitive to strong environmental vibrations, and the positions of these 3D optical probes need not be rigidly fixed during the measurement. The validity of the proposed method is verified in a physical experiment with two 3D optical probes. When the measuring range of the raster binocular stereo vision sensor is about 0.5 m × 0.38 m × 0.4 m and the size of the measured object is about 0.7 m, the accuracy of the proposed method could reach 0.12 mm. Meanwhile, the effectiveness of the proposed method in dynamic measurement is confirmed by measuring rotating fan blades. PMID:25835684

  9. Applied machine vision

    SciTech Connect

    Not Available

    1984-01-01

    This book presents the papers given at a conference on robot vision. Topics considered at the conference included the link between fixed and flexible automation, general applications of machine vision, the development of a specification for a machine vision system, machine vision technology, machine vision non-contact gaging, and vision in electronics manufacturing.

  10. Education and Public Outreach Programs for RHESSI and STEREO/IMPACT Missions

    NASA Astrophysics Data System (ADS)

    Craig, N.; Mendez, B. J.; Peticolas, L.

    2003-05-01

    We will present inquiry-based classroom activities for grades 8-12, as well as public outreach web-based resources featuring solar data, mathematics, and solar scientist interviews. The classroom activities are well aligned with National Science Education Standards. The inquiry-based resources "X-ray Candles: Solar Flares on Your Birthday," "SUNSPOTS" and "Discover Solar Cycle" will be highlighted. These activities allow students to discover the solar cycle by analyzing x-ray flare data and graphing the percentage of high energy flares over time. RHESSI mission scientists and the RHESSI EPO team developed this activity. It was featured in the "Having a Solar Blast" episode of NASA Connect that was broadcast on NASA TV and PBS stations last spring. We will also present the various ways scientists from NASA's STEREO mission are contributing to the EPO program--through interviews incorporated in the high-visibility Eclipse 2001 webcast event, and through a STEREO website hosted by the Exploratorium. Measuring Magnetism, another inquiry-based classroom activity explaining the background science for STEREO, will be highlighted. We will also feature an exciting prototype program that involves converting solar energetic particle data to sound; a musician then creates a composition inspired by these sounds as well as related solar images. Data from an earlier twin-spacecraft mission, Helios 1/2 (courtesy of D. Reames, GSFC and the Helios mission investigators), are used as a testbed for creating the stereo sounds from the future STEREO data. These resources are supported by RHESSI and STEREO EPO and the Science Education Gateway (SEGway) Project, a NASA SR&T (Supporting Research and Technology) Program.

  11. Present Vision--Future Vision.

    ERIC Educational Resources Information Center

    Fitterman, L. Jeffrey

    This paper addresses issues of current and future technology use for and by individuals with visual impairments and blindness in Florida. Present technology applications used in vision programs in Florida are individually described, including video enlarging, speech output, large inkprint, braille print, paperless braille, and tactual output…

  12. Students' Research-Informed Socio-scientific Activism: Re/Visions for a Sustainable Future

    NASA Astrophysics Data System (ADS)

    Bencze, Larry; Sperling, Erin; Carter, Lyn

    2012-01-01

    In many educational contexts throughout the world, increasing focus has been placed on socio-scientific issues; that is, disagreements about potential personal, social and/or environmental problems associated with fields of science and technology. Some suggest (as do we) that many of these potential problems, such as those associated with climate change, are so serious that education needs to be oriented towards encouraging and enabling students to become citizen activists, ready and willing to take personal and social actions to reduce risks associated with the issues. Towards this outcome, the teachers we studied encouraged and enabled students to conduct open-ended primary research (e.g., correlational studies) as well as secondary research (e.g., internet searches) as sources of motivation and direction for their activist projects. In this paper, we concluded, based on constant comparative analyses of qualitative data, that school students' tendencies towards socio-political activism appeared to depend on myriad, possibly interacting, factors. We focused, though, on curriculum policy statements, school culture, teacher characteristics and student-generated research findings. Our conclusions may be useful to those promoting education for sustainability, generally, and, more specifically, to those encouraging activism on such issues informed by student-led research.

  13. Integration of motion and stereo sensors in passive ranging systems

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Suorsa, Raymond

    1990-01-01

    A recursive approach is described for processing a sequence of stereo images. It will be the basis for an integrated stereo and motion method to provide more accurate range information using a passive ranging system. Results based on motion sequences of stereo images are presented. The approach is also applicable to other autonomous systems and in robotics.
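
    The recursive flavor of such an approach can be illustrated with a scalar Kalman filter that fuses a sequence of noisy range measurements into a progressively more accurate estimate. This is a generic sketch of recursive estimation, not the authors' algorithm; all names and parameters are invented:

```python
import numpy as np

def kalman_range(measurements, meas_var, init_range, init_var, process_var=0.0):
    """Recursively fuse noisy range measurements with a scalar Kalman filter.
    With process_var=0 (static target) this reduces to recursive least squares."""
    x, P = init_range, init_var
    for z in measurements:
        P = P + process_var              # predict: propagate uncertainty
        K = P / (P + meas_var)           # Kalman gain
        x = x + K * (z - x)              # update estimate with new measurement
        P = (1.0 - K) * P                # update estimate variance
    return x, P

# Simulated stereo range readings of a static target at 42 m, sigma = 2 m
rng = np.random.default_rng(1)
true_range = 42.0
zs = true_range + rng.normal(0.0, 2.0, size=200)
est, var = kalman_range(zs, meas_var=4.0, init_range=30.0, init_var=100.0)
```

    Each new stereo frame tightens the estimate, which is the essence of processing an image sequence recursively rather than frame by frame.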

  14. Photometric stereo sensor for robot-assisted industrial quality inspection of coated composite material surfaces

    NASA Astrophysics Data System (ADS)

    Weigl, Eva; Zambal, Sebastian; Stöger, Matthias; Eitzinger, Christian

    2015-04-01

    While composite materials are increasingly used in modern industry, the quality control in terms of vision-based surface inspection remains a challenging task. Due to the often complex and three-dimensional structures, a manual inspection of these components is nearly impossible. We present a photometric stereo sensor system including an industrial robotic arm for positioning the sensor relative to the inspected part. Two approaches are discussed: stop-and-go positioning and continuous positioning. Results are presented on typical defects that appear on various composite material surfaces in the production process.

  15. Computer vision

    SciTech Connect

    Not Available

    1982-01-01

    This paper discusses material from areas such as artificial intelligence, psychology, computer graphics, and image processing. The intent is to assemble a selection of this material in a form that will serve both as a senior/graduate-level academic text and as a useful reference to those building vision systems. This book has a strong artificial intelligence flavour, emphasising the belief that both the intrinsic image information and the internal model of the world are important in successful vision systems. The book is organised into four parts, based on descriptions of objects at four different levels of abstraction. These are: generalised images (images and image-like entities); segmented images (images organised into subimages that are likely to correspond to interesting objects); geometric structures (quantitative models of image and world structures); and relational structures (complex symbolic descriptions of image and world structures). The book contains author and subject indexes.

  16. Pleiades Visions

    NASA Astrophysics Data System (ADS)

    Whitehouse, M.

    2016-01-01

    Pleiades Visions (2012) is my new musical composition for organ that takes inspiration from traditional lore and music associated with the Pleiades (Seven Sisters) star cluster from Australian Aboriginal, Native American, and Native Hawaiian cultures. It is based on my doctoral dissertation research incorporating techniques from the fields of ethnomusicology and cultural astronomy; this research likely represents a new area of inquiry for both fields. This large-scale work employs the organ's vast sonic resources to evoke the majesty of the night sky and the expansive landscapes of the homelands of the above-mentioned peoples. Other important themes in Pleiades Visions are those of place, origins, cosmology, and the creation of the world.

  17. Disparity fusion using depth and stereo cameras for accurate stereo correspondence

    NASA Astrophysics Data System (ADS)

    Jang, Woo-Seok; Ho, Yo-Sung

    2015-03-01

    Three-dimensional (3D) content creation has received a lot of attention due to the numerous successes of 3D entertainment. Accurate stereo correspondence is necessary for efficient 3D content creation. In this paper, we propose a disparity map estimation method based on stereo correspondence. The proposed system utilizes depth and stereo camera sets. While the stereo set carries out disparity estimation, depth camera information is projected to the left and right camera positions using a 3D transformation, and upsampling is performed in accordance with the image size. The upsampled depth is used to obtain disparity data for the left and right positions. Finally, the disparity data from each depth sensor are combined. In order to evaluate the proposed method, we applied view synthesis using the acquired disparity map. The experimental results demonstrate that our method produces more accurate disparity maps than conventional approaches that use single depth sensors.
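
    The projection step rests on the standard relation between depth and disparity, d = f·B/Z. A toy sketch of converting a depth map into disparity space and combining it with a stereo-matched disparity map (my illustration; the simple averaging rule and all numeric values are assumptions, not the paper's method):

```python
import numpy as np

def depth_to_disparity(depth, focal_px, baseline_m):
    """Project depth-camera measurements into stereo disparity space: d = f*B/Z.
    Zero depth marks pixels with no reading and maps to zero disparity."""
    d = np.zeros_like(depth)
    valid = depth > 0
    d[valid] = focal_px * baseline_m / depth[valid]
    return d

def fuse_disparities(d_stereo, d_depth, w_stereo=0.5):
    """Weighted average where both sensors report; otherwise fall back to
    whichever measurement is available (zero means missing)."""
    both = (d_stereo > 0) & (d_depth > 0)
    return np.where(both,
                    w_stereo * d_stereo + (1.0 - w_stereo) * d_depth,
                    np.maximum(d_stereo, d_depth))

f, B = 700.0, 0.1                              # focal length [px], baseline [m]
depth = np.array([[2.0, 0.0], [3.5, 1.0]])     # metres; 0 = no reading
d_from_depth = depth_to_disparity(depth, f, B)
d_stereo = np.array([[34.0, 10.0], [0.0, 68.0]])
fused = fuse_disparities(d_stereo, d_from_depth)
```

    A practical system would weight by per-pixel confidence rather than a constant, but the structure (project, upsample, combine) is the same.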

  18. System for clinical photometric stereo endoscopy

    NASA Astrophysics Data System (ADS)

    Durr, Nicholas J.; González, Germán; Lim, Daryl; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.; Parot, Vicente

    2014-02-01

    Photometric stereo endoscopy is a technique that captures information about the high-spatial-frequency topography of the field of view simultaneously with a conventional color image. Here we describe a system that will enable photometric stereo endoscopy to be clinically evaluated in the large intestine of human patients. The clinical photometric stereo endoscopy system consists of a commercial gastroscope, a commercial video processor, an image capturing and processing unit, custom synchronization electronics, white light LEDs, a set of four fibers with diffusing tips, and an alignment cap. The custom pieces that come into contact with the patient are composed of biocompatible materials that can be sterilized before use. The components can then be assembled in the endoscopy suite before use. The resulting endoscope has the same outer diameter as a conventional colonoscope (14 mm), plugs into a commercial video processor, captures topography and color images at 15 Hz, and displays the conventional color image to the gastroenterologist in real-time. We show that this system can capture a color and topographical video in a tubular colon phantom, demonstrating robustness to complex geometries and motion. The reported system is suitable for in vivo evaluation of photometric stereo endoscopy in the human large intestine.
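
    In its classic Lambertian form, photometric stereo recovers per-pixel surface orientation by solving I = L·(albedo·n) from several images under known light directions. A minimal sketch of that textbook model (not the endoscope's calibration or processing pipeline; the synthetic scene is invented):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from >= 3 images under
    known distant lights (classic Lambertian photometric stereo).
    images: (K, H, W) intensities; light_dirs: (K, 3) unit vectors."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                              # (K, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # G = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    albedo_safe = np.where(albedo > 0, albedo, 1.0)        # avoid divide-by-zero
    normals = G / albedo_safe
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)

# Synthetic flat patch tilted toward +x, rendered under three light directions
n_true = np.array([0.6, 0.0, 0.8])                         # unit normal
L = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
L /= np.linalg.norm(L, axis=1, keepdims=True)
imgs = (L @ n_true).reshape(3, 1, 1) * np.ones((3, 2, 2))
normals, albedo = photometric_stereo(imgs, L)
```

    The clinical system's contribution is packaging this kind of measurement into a gastroscope with diffused fiber illumination and real-time capture; the recovered high-frequency topography is what the four light positions make possible.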

  19. Mono versus Stereo: Bilingualism's Double Face.

    ERIC Educational Resources Information Center

    Grutman, Rainier

    1993-01-01

    Offers an application of Mikhail Bakhtin's heteroglossia model, describing literature from a diversified point of view. Analyzes two examples to show nevertheless that Bakhtin unilaterally celebrates "stereo" qualities of language blending, and leaves no room for "mono" texts, which use polyglot devices as borders much more than as bridges between…

  20. STEREO Captures Fastest CME to Date

    NASA Video Gallery

    This movie shows a coronal mass ejection (CME) on the sun from July 22, 2012 at 10:00 PM EDT until 2 AM on July 23 as captured by NASA's Solar TErrestrial RElations Observatory-Ahead (STEREO-A). Be...

  1. Fixation by active accommodation

    NASA Astrophysics Data System (ADS)

    Pahlavan, Kourosh; Uhlin, Tomas; Eklundh, Jan-Olof

    1992-11-01

    The field of computer vision has long been interested in disparity as the cue for the correspondence between stereo images. The other cue to correspondence, blur, and the fact that vergence is a combination of the two processes, accommodative vergence and disparity vergence, have not been equally appreciated. Following the methodology of active vision that allows the observer to control all his visual parameters, it is quite natural to take advantage of the powerful combination of these two processes. In this article, we try to elucidate such an integration and briefly analyze the cooperation and competition between accommodative vergence and disparity vergence on one hand and disparity and blur stimuli on the other hand. The human fixation mechanism is used as a guideline and some virtues of this mechanism are used to implement a model for vergence in isolation. Finally, some experimental results are reported.

  2. Characterizing the influence of surface roughness and inclination on 3D vision sensor performance

    NASA Astrophysics Data System (ADS)

    Hodgson, John R.; Kinnell, Peter; Justham, Laura; Jackson, Michael R.

    2015-12-01

    This paper reports a methodology to evaluate the performance of 3D scanners, focusing on the influence of surface roughness and inclination on the number of acquired data points and measurement noise. Point clouds were captured of samples mounted on a robotic pan-tilt stage using an Ensenso active stereo 3D scanner. The samples have isotropic texture and range in surface roughness (Ra) from 0.09 to 0.46 μm. By extracting the point cloud quality indicators, point density and standard deviation, at a multitude of inclinations, maps of scanner performance are created. These maps highlight the performance envelopes of the sensor, the aim being to predict and compare scanner performance on real-world surfaces, rather than idealized artifacts. The results highlight the need to characterize 3D vision sensors by their measurement limits as well as best-case performance, determined either by theoretical calculation or measurements in ideal circumstances.
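
    Both quality indicators can be computed directly from a captured point cloud. A sketch, assuming (as in scans of flat samples) that measurement noise can be taken as the spread of points about a best-fit plane; the patch size and noise level below are invented:

```python
import numpy as np

def plane_fit_noise(points):
    """Standard deviation of residuals from a best-fit plane.
    PCA via SVD: the direction of least variance is the plane normal."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    normal = Vt[-1]                       # singular values are sorted descending
    return np.std(centered @ normal)      # out-of-plane spread

def point_density(points, area_m2):
    """Acquired points per unit scanned area."""
    return len(points) / area_m2

# Simulated scan of a 0.1 m x 0.1 m slightly tilted patch with 0.1 mm noise
rng = np.random.default_rng(2)
n = 1000
xy = rng.uniform(0.0, 0.1, size=(n, 2))
z = 0.02 * xy[:, 0] + rng.normal(0.0, 1e-4, n)
cloud = np.column_stack([xy, z])
noise = plane_fit_noise(cloud)            # should recover ~1e-4 m
density = point_density(cloud, 0.01)      # points per square metre
```

    Mapping these two numbers over a grid of inclinations and roughness values is what produces the performance maps described above.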

  3. A customized vision system for tracking humans wearing reflective safety clothing from industrial vehicles and machinery.

    PubMed

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  4. A Customized Vision System for Tracking Humans Wearing Reflective Safety Clothing from Industrial Vehicles and Machinery

    PubMed Central

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  5. Toward generalized planetary stereo analysis scheme—Prototype implementation with multi-resolution Martian stereo imagery

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Rack; Lin, Shih-Yuan; Choi, Yun-Soo; Kim, Young-Hwi

    2013-07-01

    Stereo analysis of orbital imagery is highly valuable for scientific research on planetary surfaces. The processing of planetary stereo imagery has thus progressed through various approaches, resulting in a series of uncontrolled topographic products. In order to fully utilize the data derived from imaging systems carried on various planetary orbiters, generalized algorithms for stereo image processing and Digital Terrain Model (DTM) extraction have been developed. Based on Kim and Muller's approach (2009), the algorithms were updated to employ a feed-forwarded model-based matcher and a generic sensor model. The result is an iterative stereo procedure that delivers reference data to the next stage for 3D zoom-up. The system is thus capable of processing various stereo data sets with a generic approach and achieves stable photogrammetric accuracy in the resultant DTMs. To demonstrate the potential of this stereo processing routine, DTMs obtained from various Mars orbital images covering sample test sites were processed with the prototype processor. The processed DTMs clearly illustrated detailed geological features and high agreement with the spot heights of the Mars Orbiter Laser Altimeter (MOLA). This demonstrates that the overall processing strategy in this paper is effective and that the topographic products are accurate and reliable.

  6. GeoWall on the Cheap; Stereo Paper Maps in the Classroom and the Field

    NASA Astrophysics Data System (ADS)

    Campbell, K.; Morin, P. J.; Kirkby, K.

    2003-12-01

    An inexpensive version of GeoWall material has been developed for use in the field. Using freely available USGS DEM and DRG data and public domain software, local stereo topo maps can be easily created and printed on paper for use in the field without computers or projectors. We have used these maps with a broad group of K-12 students through university undergrads to help them learn to read a standard USGS 7.5 minute quadrangle. This enables students to develop a strong three-dimensional conception of their field area. Another important use of the maps is within an informal educational setting. The stereo maps easily convey topographic information to audiences with little or no Earth Science background or map reading skills. Not only are the maps instructive but they provide an engaging field or classroom activity that fosters enthusiasm for map reading. The technique for creating stereo maps will also be presented.

  7. Comprehensive STEREO Observations of the 2008 February 4 CME

    NASA Astrophysics Data System (ADS)

    Wood, B. E.; Howard, R. A.; Plunkett, S. P.; Socker, D. G.

    2008-12-01

    Thanks to the two Heliospheric Imagers that are part of STEREO's SECCHI instrument package, the two STEREO spacecraft are the first that are capable of following a CME continuously from the Sun all the way to 1 AU, where the PLASTIC and IMPACT instruments on the spacecraft can then also provide in situ information on the CME, assuming it hits one of the two satellites. We present the first kinematic study of a CME that has been observed in such a comprehensive manner. The event begins on 2008 February 4 and is successfully tracked by STEREO-A to 1 AU where it hits STEREO-B on February 7. This is therefore a good example of STEREO's capability for one satellite (STEREO-A in this case) to observe a white-light CME front hitting the other satellite (STEREO-B in this case) at the same time as that second satellite is measuring the CME properties in situ.

  8. One-eyed stereo: a general approach to modeling 3-d scene geometry.

    PubMed

    Strat, T M; Fischler, M A

    1986-06-01

    A single two-dimensional image is an ambiguous representation of the three-dimensional world (many different scenes could have produced the same image), yet the human visual system is extremely successful at recovering a qualitatively correct depth model from this type of representation. Workers in the field of computational vision have devised a number of distinct schemes that attempt to emulate this human capability; these schemes are collectively known as "shape from..." methods (e.g., shape from shading, shape from texture, or shape from contour). In this paper we contend that the distinct assumptions made in each of these schemes are tantamount to providing a second (virtual) image of the original scene, and that each of these approaches can be translated into a conventional stereo formalism. In particular, we show that it is frequently possible to structure the problem as one of recovering depth from a stereo pair consisting of the supplied perspective image (the original image) and a hypothesized orthographic image (the virtual image). We present a new algorithm of the form required to accomplish this type of stereo reconstruction task. PMID:21869368

  9. Computer vision for dual spacecraft proximity operations -- A feasibility study

    NASA Astrophysics Data System (ADS)

    Stich, Melanie Katherine

    A computer vision-based navigation feasibility study consisting of two navigation algorithms is presented to determine whether computer vision can be used to safely navigate a small semi-autonomous inspection satellite in proximity to the International Space Station. Using stereoscopic image sensors and computer vision, the relative attitude determination and relative distance determination algorithms estimate the inspection satellite's position relative to its host spacecraft. An algorithm needed to calibrate the stereo camera system is presented, and this calibration method is discussed. These relative navigation algorithms are tested in NASA Johnson Space Center's simulation software, Engineering Dynamic On-board Ubiquitous Graphics (DOUG) Graphics for Exploration (EDGE), using a rendered model of the International Space Station to serve as the host spacecraft. Both vision-based algorithms attained successful results, and recommended future work is discussed.

  10. Activating a Vision

    ERIC Educational Resources Information Center

    Wilson, Carroll L.

    1973-01-01

    International Center of Insect Physiology and Ecology (ICIPE) is an organized effort to study physiology, endocrinology, genetics, and related processes of five insects. Location of the center in Kenya encourages developing countries to conduct research for the control of harmful insects. (PS)

  11. Stereoacuity of Preschool Children with and without Vision Disorders

    PubMed Central

    Ciner, Elise B.; Ying, Gui-shuang; Kulp, Marjean Taylor; Maguire, Maureen G.; Quinn, Graham E.; Orel-Bixler, Deborah; Cyert, Lynn A.; Moore, Bruce; Huang, Jiayan

    2014-01-01

    Purpose To evaluate associations between stereoacuity and presence, type, and severity of vision disorders in Head Start preschool children and determine testability and levels of stereoacuity by age in children without vision disorders. Methods Stereoacuity of children aged 3 to 5 years (n = 2898) participating in the Vision in Preschoolers (VIP) Study was evaluated using the Stereo Smile II test during a comprehensive vision examination. This test uses a two-alternative forced-choice paradigm with four stereoacuity levels (480 to 60 seconds of arc). Children were classified by the presence (n = 871) or absence (n = 2027) of VIP Study–targeted vision disorders (amblyopia, strabismus, significant refractive error, or unexplained reduced visual acuity), including type and severity. Median stereoacuity between groups and among severity levels of vision disorders was compared using Wilcoxon rank sum and Kruskal-Wallis tests. Testability and stereoacuity levels were determined for children without VIP Study–targeted disorders overall and by age. Results Children with VIP Study–targeted vision disorders had significantly worse median stereoacuity than that of children without vision disorders (120 vs. 60 seconds of arc, p < 0.001). Children with the most severe vision disorders had worse stereoacuity than that of children with milder disorders (median 480 vs. 120 seconds of arc, p < 0.001). Among children without vision disorders, testability was 99.6% overall, increasing with age to 100% for 5-year-olds (p = 0.002). Most of the children without vision disorders (88%) had stereoacuity at the two best disparities (60 or 120 seconds of arc); the percentage increasing with age (82% for 3-, 89% for 4-, and 92% for 5-year-olds; p < 0.001). Conclusions The presence of any VIP Study–targeted vision disorder was associated with significantly worse stereoacuity in preschool children. Severe vision disorders were more likely associated with poorer stereopsis than milder disorders.

  12. STEREO ICMEs and their Solar Source Regions Near Solar Minimum

    NASA Astrophysics Data System (ADS)

    Toy, V.; Li, Y.; Luhmann, J. G.; Schroeder, P.; Vourlidas, A.; Jian, L. K.; Russell, C. T.; Galvin, A. B.; Simunac, K.; Acuna, M.; Sauvaud, J. A.; Skoug, R.; Petrie, G.

    2008-12-01

    Although the quiet activity period surrounding the current solar minimum has prevailed since the launch of STEREO in October 2006, there have been at least 9 clear in-situ detections of ICMEs (Interplanetary Coronal Mass Ejections) by one or more spacecraft during the time the imagers were also operating. These observations provide unusually complete data sets for evaluating the helio-longitude extent of the ICMEs and for identifying the probable solar cause(s) of the events. In this poster we present information on these ICMEs from the IMPACT, PLASTIC, and ACE in-situ investigations, together with solar images from STEREO and SOHO that seem to capture the causative activity at the Sun. We find that even though the Sun was very quiet in '07-'08, with few active regions visible in GONG and SOHO magnetograms, there were numerous CME candidates that erupted through the near-equatorial helmet streamers. Most events could be identified with EUV disk activity as well as a coronagraph CME, even if the associated active region was very small or weak. Old cycle active regions, new and decayed, continued to maintain a warp in the large-scale helmet streamer belt that was a frequent site of the eruptions. However, the warp in the streamer belt may simply indicate that the active regions present are sufficiently strong to affect the large-scale quiet coronal field structure. Overall we see no gross differences between the solar activity and ICME causes during this and the previous solar activity minimum, when the streamer belt was less warped due to the existence of stronger solar polar fields.

  13. Intelligent robots and computer vision; Proceedings of the Meeting, Cambridge, MA, Nov. 2-6, 1987

    SciTech Connect

    Casasent, D.P.; Hall, E.L.

    1988-01-01

    Topics discussed include pattern recognition, image processing, sensors, model-based object recognition, image understanding, artificial neural systems, and three-dimensional object recognition. Consideration is also given to stereo image processing, optical flow, intelligent control, vision-aided automated control systems, architectures and software, and industrial applications.

  14. Pediatric Low Vision

    MedlinePlus

    What is Low Vision? Partial vision loss that cannot be corrected causes ... and play. What are the signs of Low Vision? Some signs of low vision include difficulty recognizing ...

  15. Vision Therapy News Backgrounder.

    ERIC Educational Resources Information Center

    American Optometric Association, St. Louis, MO.

    The booklet provides an overview on vision therapy to aid writers, editors, and broadcasters help parents, teachers, older adults, and all consumers learn more about vision therapy. Following a description of vision therapy or vision training, information is provided on how and why vision therapy works. Additional sections address providers of…

  16. Augmented reality to enhance an active telepresence system

    NASA Astrophysics Data System (ADS)

    Wheeler, Alison; Pretlove, John R. G.; Parker, Graham A.

    1996-12-01

    Tasks carried out remotely via a telerobotic system are typically complex, occur in hazardous environments and require fine control of the robot's movements. Telepresence systems provide the teleoperator with a feeling of being physically present at the remote site. Stereoscopic video has been successfully applied to telepresence vision systems to increase the operator's perception of depth in the remote scene, and this sense of presence can be further enhanced using computer-generated stereo graphics to augment the visual information presented to the operator. The Mechatronic Systems and Robotics Research Group has, over seven years, developed a number of high-performance active stereo vision systems, culminating in the latest, a four degree-of-freedom stereohead. This carries two miniature color cameras and is controlled in real time by the motion of the operator's head; the operator views the stereoscopic video images on an immersive head-mounted display or stereo monitor. The stereohead is mounted on a mobile robot, the movement of which is controlled by a joystick interface. This paper describes the active telepresence system and the development of a prototype augmented reality (AR) application to enhance the operator's sense of presence at the remote site. The initial enhancements are a virtual map and compass to aid navigation in degraded visual conditions and a virtual cursor that provides a means for the operator to interact with the remote environment. The results of preliminary experiments using the initial enhancements are presented.

  17. Weighted directional energy model of human stereo correspondence.

    PubMed

    Prince, S J; Eagle, R A

    2000-01-01

    Previous work [Prince, S. J. D, & Eagle, R. A. (1999). Size-disparity correlation in human binocular depth perception. Proceedings of the Royal Society: Biological Sciences, 266, 1361-1365] has demonstrated that disparity sign discrimination performance in isolated bandpass patterns is supported at disparities much larger than a phase disparity model might predict. One possibility is that this extended performance relies on a separate second-order system [Hess, R. F., & Wilcox, L. M. (1994). Linear and non-linear filtering in stereopsis. Vision Research, 34, 2431-2438]. Here, a 'weighted directional energy' model is developed which explains a large body of crossed versus uncrossed disparity discrimination data with a single mechanism. This model assumes a population of binocular complex cells at every image point with a range of position disparity shifts. These cells sample a local energy function which is weighted so that energy at large disparities is relatively attenuated. Disparity sign is determined by summing and comparing energy at crossed and uncrossed disparities in the presence of noise. The model qualitatively predicts matching data for one-dimensional Gabor stimuli. This scheme also predicts DMax in Gabor stimuli and filtered noise. Moreover, a range of 'non-linear' phenomena, in which disparity is perceived from contrast envelope information alone, can be explained. The weighted directional energy model presents a biologically plausible, parsimonious explanation of matching behaviour in bandpass stimuli for both 'first-order' and 'second-order' stimuli which obviates the need for multiple mechanisms in stereo correspondence. PMID:10738073
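
    The core idea (binocular energy computed at a range of position disparities, attenuated by a weight that falls off with disparity magnitude, with disparity sign read off from where the energy lies) can be caricatured in one dimension. The sketch below is a toy illustration of that general scheme with invented filter parameters, not the fitted model from the paper; note how the weight resolves the phase ambiguity of a periodic stimulus in favor of the smaller disparity:

```python
import numpy as np

def gabor_pair(size, freq, sigma):
    """Quadrature pair of 1-D Gabor filters (even and odd phase)."""
    x = np.arange(size) - size // 2
    env = np.exp(-x**2 / (2.0 * sigma**2))
    return env * np.cos(2 * np.pi * freq * x), env * np.sin(2 * np.pi * freq * x)

def weighted_disparity_energy(left, right, disparities, freq=0.1, sigma=8.0,
                              weight_sigma=10.0):
    """Binocular energy at each candidate position disparity, attenuated by
    a Gaussian weight over |disparity| (the 'weighted energy' idea)."""
    even, odd = gabor_pair(65, freq, sigma)
    le = np.convolve(left, even, 'same')
    lo = np.convolve(left, odd, 'same')
    energies = []
    for d in disparities:
        r = np.roll(right, d)                    # apply a position disparity shift
        re = np.convolve(r, even, 'same')
        ro = np.convolve(r, odd, 'same')
        e = (le + re)**2 + (lo + ro)**2          # complex-cell (energy) response
        w = np.exp(-d**2 / (2.0 * weight_sigma**2))
        energies.append(w * e.sum())
    return np.array(energies)

# Periodic stimulus (period 10 samples) shifted by +4 between the eyes:
# disparities of +4 and -6 match equally well; the weight picks +4.
sig = np.sin(2 * np.pi * 0.1 * np.arange(250))
left, right = sig, np.roll(sig, -4)
disps = np.arange(-10, 11)
E = weighted_disparity_energy(left, right, disps)
best = disps[np.argmax(E)]
```

    Comparing the summed weighted energy at crossed versus uncrossed disparities, as the model does, then yields the sign judgement without a second mechanism.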

  18. Automatic defect classification using topography map from SEM photometric stereo

    NASA Astrophysics Data System (ADS)

    Serulnik, Sergio D.; Cohen, Jacob; Sherman, Boris; Ben-Porath, Ariel

    2004-04-01

    As the industry moves to smaller design rules, shrinking process windows and shorter product lifecycles, the need for enhanced yield management methodology is increasing. Defect classification is required for identification and isolation of yield loss sources. Practice demonstrates that an operator relies heavily on 3D information while classifying defects. Therefore, Defect Topographic Map (DTM) information can enhance Automatic Defect Classification (ADC) capabilities dramatically. In the present article, we describe the manner in which reliable and rapid SEM measurements of defect topography characteristics increase the classifier's ability to achieve fast identification of the exact process step at which a given defect was introduced. Special multiple-perspective SEM imaging allows efficient application of photometric stereo methods. Physical properties of a defect can be derived from the 3D map by using straightforward computer vision algorithms. We show several examples, from both production fabs and R&D lines, of instances where the depth map is essential in correctly partitioning the defects, thus reducing time to source and overall fab expenses due to defect excursions.
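    Photometric stereo itself is standard: given three or more intensity images under known, distant illumination directions, per-pixel surface orientation and albedo can be recovered by least squares. The sketch below is the generic Lambertian formulation (SEM detector geometry differs in practice), not the tool's proprietary algorithm:

    ```python
    import numpy as np

    def photometric_stereo(intensities, light_dirs):
        """Classic Lambertian photometric stereo.

        intensities: (k, h, w) stack of k grayscale images
        light_dirs:  (k, 3) unit light direction per image
        Solves L g = I per pixel in least squares, where g = albedo * normal.
        """
        k, h, w = intensities.shape
        I = intensities.reshape(k, -1)                        # (k, h*w)
        g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)    # (3, h*w)
        albedo = np.linalg.norm(g, axis=0)
        normals = np.where(albedo > 1e-8, g / np.maximum(albedo, 1e-8), 0.0)
        return normals.reshape(3, h, w), albedo.reshape(h, w)
    ```

    Integrating the recovered normal field (e.g. by Frankot-Chellappa or Poisson integration) then yields the defect topography map used for classification.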

  19. Stereo pair design for cameras with a fovea

    NASA Technical Reports Server (NTRS)

    Chettri, Samir R.; Keefe, Michael; Zimmerman, John R.

    1992-01-01

    We describe the methodology for the design and selection of a stereo pair when the cameras have a greater concentration of sensing elements in the center of the image plane (fovea). Binocular vision is important for the purpose of depth estimation, which in turn is important in a variety of applications such as gaging and autonomous vehicle guidance. We assume that one camera has square pixels of size dv and the other has pixels of size rdv, where r is between 0 and 1. We then derive results for the average error, the maximum error, and the error distribution in the depth determination of a point. These results can be shown to be a general form of the results for the case when the cameras have equal sized pixels. We discuss the behavior of the depth estimation error as we vary r and the tradeoffs between the extra processing time and increased accuracy. Knowing these results makes it possible to study the case when we have a pair of cameras with a fovea.
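    The depth-error analysis above rests on the standard rectified-stereo relation z = f·b/d. A minimal sketch of how pixel size enters the error through first-order propagation (the paper's average and maximum error expressions are not reproduced here, and the numbers below are illustrative units):

    ```python
    def depth_from_disparity(f, b, d):
        """Ideal rectified stereo: depth z = f * b / d,
        with f and d in pixels and b (baseline) in metres."""
        return f * b / d

    def depth_quantization_error(f, b, z, pixel_size):
        """First-order depth error when disparity is quantized to one pixel:
        |dz/dd| = f*b/d**2 = z**2/(f*b), so |delta_z| ~ z**2 * pixel / (f*b).
        A fovea with smaller pixels (pixel_size = r*dv, r < 1) shrinks the
        error in proportion to r."""
        return z**2 * pixel_size / (f * b)
    ```

    Comparing the error at pixel size dv against r·dv makes the fovea's accuracy/processing-time tradeoff explicit.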

  20. From geospatial observations of ocean currents to causal predictors of spatio-economic activity using computer vision and machine learning

    NASA Astrophysics Data System (ADS)

    Popescu, Florin; Ayache, Stephane; Escalera, Sergio; Baró Solé, Xavier; Capponi, Cecile; Panciatici, Patrick; Guyon, Isabelle

    2016-04-01

    The big data transformation currently revolutionizing science and industry forges novel possibilities in multi-modal analysis scarcely imaginable only a decade ago. One of the important economic and industrial problems that stand to benefit from the recent expansion of data availability and computational prowess is the prediction of electricity demand and renewable energy generation. Both are correlates of human activity: spatiotemporal energy consumption patterns in society are a factor of both demand (weather dependent) and supply, which determine cost - a relation expected to strengthen along with increasing renewable energy dependence. One of the main drivers of European weather patterns is the activity of the Atlantic Ocean and in particular its dominant Northern Hemisphere current: the Gulf Stream. We choose this particular current as a test case in part due to the larger amount of relevant data and scientific literature available for refinement of analysis techniques. This data richness is due not only to its economic importance but also to its size, which makes it clearly visible in radar and infrared satellite imagery and therefore easier to detect using Computer Vision (CV). The power of CV techniques makes the analysis thus developed scalable to other smaller and less known, but still influential, currents, which are not just curves on a map but complex, evolving, moving branching trees in 3D projected onto a 2D image. We investigate means of extracting, from several image modalities (including recently available Copernicus radar and earlier infrared satellites), a parameterized representation of the state of the Gulf Stream and its environment that is useful as a feature-space representation in a machine learning context, in this case within the EC's H2020-sponsored 'See.4C' project, in the context of which data scientists may find novel predictors of spatiotemporal energy flow. Although automated extractors of Gulf Stream position exist, they differ in methodology.

  1. Merged Shape from Shading and Shape from Stereo for Planetary Topographic Mapping

    NASA Astrophysics Data System (ADS)

    Tyler, Laurence; Cook, Tony; Barnes, Dave; Parr, Gerhard; Kirk, Randolph

    2014-05-01

    Digital Elevation Models (DEMs) of the Moon and Mars have traditionally been produced from stereo imagery acquired from orbit, or from surface landers or rovers. One core component of image-based DEM generation is stereo matching to find correspondences between images taken from different viewpoints. Stereo matchers that rely mostly on textural features in the images can fail to find enough matched points in areas lacking contrast or surface texture. This can lead to blank or topographically noisy areas in the resulting DEMs. Fine depth detail may also be lacking due to the limited precision and quantisation of the pixel matching process. Shape from shading (SFS), a two-dimensional version of photoclinometry, utilizes the properties of light reflecting off surfaces to build up localised slope maps, which can subsequently be combined to extract topography. This works especially well on homogeneous surfaces and can recover fine detail. However, the cartographic accuracy can be affected by changes in brightness due to differences in surface material, albedo and light scattering properties, and also by the presence of shadows. We describe here experimental research for the Planetary Robotics Vision Data Exploitation EU FP7 project (PRoViDE) into using stereo-generated depth maps in conjunction with SFS to recover both coarse and fine detail of planetary surface DEMs. Our Large Deformation Optimisation Shape From Shading (LDOSFS) algorithm uses image data, illumination, viewing geometry and camera parameters to produce a DEM. A stereo-derived depth map can be used as an initial seed if available. The software uses separate Bidirectional Reflectance Distribution Function (BRDF) and SFS modules for iterative processing and to make the code more portable for future development. Three BRDF models are currently implemented: Lambertian, Blinn-Phong, and Oren-Nayar. A version of the Hapke reflectance function, which is more appropriate for planetary surfaces, is under development.
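    The Lambertian and Blinn-Phong reflectance modules mentioned above have standard closed forms. The albedo and specular parameters below are illustrative placeholders, not the LDOSFS defaults:

    ```python
    import numpy as np

    def lambertian(n, l, v, albedo=1.0):
        """Diffuse-only reflectance: depends on unit normal n and unit
        light direction l only (the view direction v is unused)."""
        return albedo * max(0.0, float(np.dot(n, l)))

    def blinn_phong(n, l, v, albedo=1.0, spec=0.5, shininess=16):
        """Lambertian term plus a specular lobe around the half-vector h."""
        h = (l + v) / np.linalg.norm(l + v)
        diffuse = max(0.0, float(np.dot(n, l)))
        specular = spec * max(0.0, float(np.dot(n, h))) ** shininess
        return albedo * diffuse + specular
    ```

    Keeping each BRDF behind the same (n, l, v) interface is what lets an SFS iteration swap reflectance models without touching the optimisation code.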

  2. The World Water Vision: From Developing a Vision to Action

    NASA Astrophysics Data System (ADS)

    Gangopadhyay, S.; Cosgrove, W.; Rijsberman, F.; Strzepek, K.; Strzepek, K.

    2001-05-01

    The World Water Vision exercise was initiated by the World Water Commission under the auspices of the World Water Council. The goal of the World Water Vision project was to develop a widely shared vision on the actions required to achieve a common set of water-related goals, and the necessary commitment to carry out these actions. The Vision should be participatory in nature, including input from both developed and developing regions, with a special focus on the needs of the poor, women, youth, children and the environment. The three overall objectives were to: (i) raise awareness of water issues among both the general population and decision-makers so as to foster the necessary political will and leadership to tackle the problems seriously and systematically; (ii) develop a vision of water management for 2025 that is shared by water sector specialists as well as international, national and regional decision-makers in government, the private sector and civil society; and (iii) provide input to a Framework for Action to be elaborated by the Global Water Partnership, with steps to go from vision to action, including recommendations to funding agencies for investment priorities. The exercise was characterized by the principles of: (i) a participatory approach with extensive consultation; (ii) innovative thinking; (iii) central analysis to assure integration and co-ordination; and (iv) emphasis on communication with groups outside the water sector. The primary activities included developing global water scenarios that fed into regional consultations and sectoral consultations on water for food; water for people (water supply and sanitation); and water and the environment. These consultations formulated the regional and sectoral visions that were synthesized to form the World Water Vision. The findings from this exercise were reported and debated at the Second World Water Forum and the Ministerial Conference held in The Hague, The Netherlands, during April 2000. This paper

  3. Human gene therapy for RPE65 isomerase deficiency activates the retinoid cycle of vision but with slow rod kinetics

    PubMed Central

    Cideciyan, Artur V.; Aleman, Tomas S.; Boye, Sanford L.; Schwartz, Sharon B.; Kaushal, Shalesh; Roman, Alejandro J.; Pang, Ji-jing; Sumaroka, Alexander; Windsor, Elizabeth A. M.; Wilson, James M.; Flotte, Terence R.; Fishman, Gerald A.; Heon, Elise; Stone, Edwin M.; Byrne, Barry J.; Jacobson, Samuel G.; Hauswirth, William W.

    2008-01-01

    The RPE65 gene encodes the isomerase of the retinoid cycle, the enzymatic pathway that underlies mammalian vision. Mutations in RPE65 disrupt the retinoid cycle and cause a congenital human blindness known as Leber congenital amaurosis (LCA). We used adeno-associated virus-2-based RPE65 gene replacement therapy to treat three young adults with RPE65-LCA and measured their vision before and up to 90 days after the intervention. All three patients showed a statistically significant increase in visual sensitivity at 30 days after treatment localized to retinal areas that had received the vector. There were no changes in the effect between 30 and 90 days. Both cone- and rod-photoreceptor-based vision could be demonstrated in treated areas. For cones, there were increases of up to 1.7 log units (i.e., 50 fold); and for rods, there were gains of up to 4.8 log units (i.e., 63,000 fold). To assess what fraction of full vision potential was restored by gene therapy, we related the degree of light sensitivity to the level of remaining photoreceptors within the treatment area. We found that the intervention could overcome nearly all of the loss of light sensitivity resulting from the biochemical blockade. However, this reconstituted retinoid cycle was not completely normal. Resensitization kinetics of the newly treated rods were remarkably slow and required 8 h or more for the attainment of full sensitivity, compared with <1 h in normal eyes. Cone-sensitivity recovery time was rapid. These results demonstrate dramatic, albeit imperfect, recovery of rod- and cone-photoreceptor-based vision after RPE65 gene therapy. PMID:18809924

  4. Optimization of semi-global stereo matching for hardware module implementation

    NASA Astrophysics Data System (ADS)

    Roszkowski, Mikołaj

    2014-11-01

    Stereo vision is one of the most intensively studied areas in the field of computer vision. It allows the creation of a 3D model of a scene given two images of the scene taken with optical cameras. Although the number of stereo algorithms keeps increasing, not many are suitable candidates for hardware implementations that could guarantee real-time processing in embedded systems. One such algorithm is semi-global matching, which balances well the quality of the disparity map against computational complexity. However, it still has quite high memory requirements, which can be a problem if low-cost FPGAs are to be used, since they often suffer from low external DRAM throughput. In this article, a few methods to reduce both the semi-global matching algorithm's complexity and its memory usage, and thus the required bandwidth, are proposed. First of all, it is shown that a simple pyramid matching scheme can be used to efficiently reduce the number of disparities checked per pixel. Secondly, a method of dividing the image into independent blocks is proposed, which allows the amount of memory required by the algorithm to be reduced. Finally, the exact requirements for the bandwidth and the size of the on-chip memories are given.
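    The core recurrence of semi-global matching, shown here aggregated along a single scanline direction, makes it clear why memory scales with image width times the disparity range; the P1/P2 smoothness penalties are typical placeholder values, not the article's tuned ones:

    ```python
    import numpy as np

    def aggregate_path(cost, p1=10, p2=120):
        """SGM cost aggregation along one scanline direction.

        cost: (width, ndisp) pixel-wise matching cost along one image row.
        Recurrence: L(p, d) = C(p, d)
                    + min(L(p-1, d), L(p-1, d +/- 1) + P1, min_k L(p-1, k) + P2)
                    - min_k L(p-1, k)    # normalization keeps values bounded
        """
        w, ndisp = cost.shape
        L = np.empty_like(cost, dtype=np.float64)
        L[0] = cost[0]
        for x in range(1, w):
            prev = L[x - 1]
            prev_min = prev.min()
            candidates = np.stack([
                prev,                                  # same disparity
                np.r_[prev[1:], np.inf] + p1,          # disparity + 1
                np.r_[np.inf, prev[:-1]] + p1,         # disparity - 1
                np.full(ndisp, prev_min + p2),         # any larger jump
            ])
            L[x] = cost[x] + candidates.min(axis=0) - prev_min
        return L
    ```

    A full SGM implementation sums this aggregation over several path directions and takes the per-pixel argmin over disparities; a pyramid scheme like the one proposed shrinks `ndisp` at full resolution, which is where the savings come from.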

  5. Edge-pixel-based stereo correspondence through ordering-oriented neural networks

    NASA Astrophysics Data System (ADS)

    Siy, Pepe; Hu, Joe-E.

    1993-09-01

    This paper describes a fast and robust artificial neural network algorithm for solving the stereo correspondence problem in binocular vision. In this algorithm, the stereo correspondence problem is modelled as a cost minimization problem where the cost is the value of the matching function between the edge pixels along the same epipolar line. A multiple-constraint energy minimization neural network is implemented for this matching process. This algorithm differs from previous works in that it integrates ordering and geometry constraints, in addition to the uniqueness, continuity, and epipolar line constraints, into a neural network implementation. The processing procedures are similar to those of the human vision process. The edge pixels are divided into different clusters according to their orientation and contrast polarity. The matching is performed only between the edge pixels in the same clusters and at the same epipolar line. By following the epipolar line, the ordering constraint (the left-right relation between pixels) can be specified easily without building extra relational graphs as in the earlier works. The algorithm thus assigns artificial neurons which follow the same order as the pixels along an epipolar line to represent the matching candidate pairs.

  6. Multiple-constraints neural network solution for edge-pixel-based stereo correspondence problem

    NASA Astrophysics Data System (ADS)

    Hu, Joe-E.; Siy, Pepe

    1993-03-01

    This paper describes a fast and robust artificial neural network algorithm for solving the stereo correspondence problem in binocular vision. In this algorithm, the stereo correspondence problem is modelled as a cost minimization problem where the cost is the value of the matching function between the edge pixels along the same epipolar line. A multiple-constraint energy minimization neural network is implemented for this matching process. This algorithm differs from previous works in that it integrates ordering and geometry constraints, in addition to the uniqueness, continuity, and epipolar line constraints, into a neural network implementation. The processing procedures are similar to those of the human vision process. The edge pixels are divided into different clusters according to their orientation and contrast polarity. The matching is performed only between the edge pixels in the same clusters and at the same epipolar line. By following the epipolar line, the ordering constraint (the left-right relation between pixels) can be specified easily without building extra relational graphs as in the earlier works. The algorithm thus assigns artificial neurons which follow the same order as the pixels along an epipolar line to represent the matching candidate pairs. The algorithm is discussed in detail and experimental results using real images are presented.
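    The ordering constraint that both of these papers encode in the network energy can be stated as a simple monotonicity condition on matches along an epipolar line. A standalone check (not the neural implementation, which enforces the constraint through the energy function):

    ```python
    def violates_ordering(matches):
        """Stereo ordering constraint: if left-image pixel a lies to the left
        of pixel b on an epipolar line, a's match must not lie to the right
        of b's match on the corresponding line.

        matches: list of (x_left, x_right) pairs on one epipolar line.
        Returns the adjacent pairs that break monotonicity."""
        matches = sorted(matches)
        bad = []
        for (xl1, xr1), (xl2, xr2) in zip(matches, matches[1:]):
            if xl1 < xl2 and xr1 > xr2:
                bad.append(((xl1, xr1), (xl2, xr2)))
        return bad
    ```

    In the papers' formulation, candidate pairs flagged by such a check receive inhibitory connections, so the energy minimum favours order-preserving matchings.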

  7. Machine Vision Giving Eyes to Robots. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1990

    1990-01-01

    This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)

  8. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural-network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same, with a high degree of precision, for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, which is currently in progress, is described along with preliminary results and the problems encountered.
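    The mapping from a matched stereo pair into a common absolute frame has two standard steps: triangulation in the camera frame, then a calibrated rigid transform into the robot's frame. A rectified-stereo sketch; the focal length, baseline, and transform below are illustrative, not from the paper:

    ```python
    import numpy as np

    def triangulate(xl, yl, xr, f, b):
        """Rectified stereo triangulation: recover a 3-D point in the
        left-camera frame from matched pixel coordinates (principal point
        at the image origin).

        xl, yl: left-image coords (pixels); xr: right-image x on the same
        scanline; f: focal length in pixels; b: baseline in metres."""
        d = xl - xr                    # disparity, > 0 for points in front
        if d <= 0:
            raise ValueError("non-positive disparity")
        Z = f * b / d
        return np.array([xl * Z / f, yl * Z / f, Z])

    def to_base_frame(p_cam, T_base_cam):
        """Express a camera-frame point in the robot base frame via the
        4x4 homogeneous transform obtained from calibration."""
        return (T_base_cam @ np.append(p_cam, 1.0))[:3]
    ```

    The calibration difficulty the abstract describes is precisely that `T_base_cam` (and the camera intrinsics) must agree to high precision with the manipulator's own kinematic calibration.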

  9. Theoretical modeling for the stereo mission

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.; Burlaga, L. F.; Kaiser, M. L.; Ng, C. K.; Reames, D. V.; Reiner, M. J.; Gombosi, T. I.; Lugaz, N.; Manchester, W.; Roussev, I. I.; Zurbuchen, T. H.; Farrugia, C. J.; Galvin, A. B.; Lee, M. A.; Linker, J. A.; Mikić, Z.; Riley, P.; Alexander, D.; Sandman, A. W.; Cook, J. W.; Howard, R. A.; Odstrčil, D.; Pizzo, V. J.; Kóta, J.; Liewer, P. C.; Luhmann, J. G.; Inhester, B.; Schwenn, R. W.; Solanki, S. K.; Vasyliunas, V. M.; Wiegelmann, T.; Blush, L.; Bochsler, P.; Cairns, I. H.; Robinson, P. A.; Bothmer, V.; Kecskemety, K.; Llebaria, A.; Maksimovic, M.; Scholer, M.; Wimmer-Schweingruber, R. F.

    2008-04-01

    We summarize the theory and modeling efforts for the STEREO mission, which will be used to interpret the data of both the remote-sensing (SECCHI, SWAVES) and in-situ instruments (IMPACT, PLASTIC). The modeling includes the coronal plasma, in both open and closed magnetic structures, and the solar wind and its expansion outwards from the Sun, which defines the heliosphere. Particular emphasis is given to modeling of dynamic phenomena associated with the initiation and propagation of coronal mass ejections (CMEs). The modeling of the CME initiation includes magnetic shearing, kink instability, filament eruption, and magnetic reconnection in the flaring lower corona. The modeling of CME propagation entails interplanetary shocks, interplanetary particle beams, solar energetic particles (SEPs), geoeffective connections, and space weather. This review describes mostly existing models of groups that have committed their work to the STEREO mission, but is by no means exhaustive or comprehensive regarding alternative theoretical approaches.

  10. Stereo imaging velocimetry for microgravity applications

    NASA Technical Reports Server (NTRS)

    Miller, Brian B.; Meyer, Maryjo B.; Bethea, Mark D.

    1994-01-01

    Stereo imaging velocimetry is the quantitative measurement of three-dimensional flow fields using two sensors recording data from different vantage points. The system described in this paper, under development at NASA Lewis Research Center in Cleveland, Ohio, uses two CCD cameras placed perpendicular to one another, laser disk recorders, an image processing substation, and a 586-based computer to record data at standard NTSC video rates (30 Hertz) and reduce it offline. The flow itself is marked with seed particles, hence the fluid must be transparent. The velocimeter tracks the motion of the particles, and from these tracks we deduce a multipoint (500 or more points), quantitative map of the flow. Conceptually, the software portion of the velocimeter can be divided into distinct modules: camera calibration, particle finding (image segmentation) and centroid location, particle overlap decomposition, particle tracking, and stereo matching. We discuss our approach to each module, and give our currently achieved speed and accuracy for each where available.
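    The particle-finding and centroid-location modules can be illustrated with a minimal threshold-and-label pass; a real velocimeter also handles particle overlap decomposition, which this sketch omits:

    ```python
    import numpy as np

    def find_particles(image, threshold):
        """Segment bright seed particles by thresholding, then return the
        centroid (row, col) of each 4-connected blob, in scan order."""
        mask = image > threshold
        labels = np.zeros(mask.shape, dtype=int)
        current = 0
        centroids = []
        for i in range(mask.shape[0]):
            for j in range(mask.shape[1]):
                if mask[i, j] and labels[i, j] == 0:
                    current += 1
                    labels[i, j] = current
                    stack, pixels = [(i, j)], []
                    while stack:                     # flood fill one blob
                        y, x = stack.pop()
                        pixels.append((y, x))
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                    and mask[ny, nx] and labels[ny, nx] == 0):
                                labels[ny, nx] = current
                                stack.append((ny, nx))
                    ys, xs = zip(*pixels)
                    centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
        return centroids
    ```

    The centroids found independently in each camera's frames then feed the tracking and stereo-matching modules.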

  11. Solar wind observations at STEREO: 2007 - 2011

    NASA Astrophysics Data System (ADS)

    Jian, L. K.; Russell, C. T.; Luhmann, J. G.; Galvin, A. B.; Simunac, K. D. C.

    2013-06-01

    We have observed the solar wind extensively using the twin STEREO spacecraft in 2007 - 2011, covering the deep solar minimum 23/24 and the rising phase of solar cycle 24. Hundreds of large-scale solar wind structures have been surveyed, including stream interaction regions (SIRs), interplanetary CMEs (ICMEs), and interplanetary shocks. The difference in location can cause one STEREO spacecraft to encounter 1/3 more of the above structures than the other spacecraft in a single year, even for the quasi-steady SIRs. In contrast with the rising phase of cycle 23, SIRs and ICMEs have weaker field and pressure compression in this rising phase, and ICMEs drive fewer shocks. Although the majority of shocks are driven by SIRs and ICMEs, we find ~13% of shocks without clear drivers observed in situ.

  12. Impairments to Vision

    MedlinePlus

    Impairments to Vision: Normal Vision, Diabetic Retinopathy, Age-related Macular Degeneration. In this ... pictures, fixate on the nose to simulate the vision loss. In diabetic retinopathy, the blood vessels in ...

  13. Retinal Detachment Vision Simulator

    MedlinePlus

    Retinal Detachment Vision Simulator. Mar. 01, 2016. How does a detached or torn retina affect your vision? If a retinal tear is occurring, you may ...

  14. Signatures of interchange reconnection: STEREO, ACE and Hinode observations combined

    NASA Astrophysics Data System (ADS)

    Baker, D.; Rouillard, A. P.; van Driel-Gesztelyi, L.; Démoulin, P.; Harra, L. K.; Lavraud, B.; Davies, J. A.; Opitz, A.; Luhmann, J. G.; Sauvaud, J.-A.; Galvin, A. B.

    2009-10-01

    Combining STEREO, ACE and Hinode observations has presented an opportunity to follow a filament eruption and coronal mass ejection (CME) on 17 October 2007 from an active region (AR) inside a coronal hole (CH) into the heliosphere. This particular combination of "open" and closed magnetic topologies provides an ideal scenario for interchange reconnection to take place. With Hinode and STEREO data we were able to identify the emergence time and type of structure seen in the in-situ data four days later. On the 21st, ACE observed in-situ the passage of an ICME with "open" magnetic topology. The magnetic field configuration of the source, a mature AR located inside an equatorial CH, has important implications for the solar and interplanetary signatures of the eruption. We interpret the formation of an "anemone" structure of the erupting AR and the passage in-situ of the ICME being disconnected at one leg, as manifested by uni-directional suprathermal electron flux in the ICME, to be a direct result of interchange reconnection between closed loops of the CME originating from the AR and "open" field lines of the surrounding CH.

  15. Revisiting Intrinsic Curves for Efficient Dense Stereo Matching

    NASA Astrophysics Data System (ADS)

    Shahbazi, M.; Sohn, G.; Théau, J.; Ménard, P.

    2016-06-01

    Dense stereo matching is one of the fundamental and active areas of photogrammetry. The increasing image resolution of digital cameras as well as the growing interest in unconventional imaging, e.g. unmanned aerial imagery, has exposed stereo image pairs to serious occlusion, noise and matching ambiguity. This has also resulted in an increase in the range of disparity values that should be considered for matching. Therefore, conventional methods of dense matching need to be revised to achieve higher levels of efficiency and accuracy. In this paper, we present an algorithm that uses the concepts of intrinsic curves to propose sparse disparity hypotheses for each pixel. Then, the hypotheses are propagated to adjoining pixels by label-set enlargement based on the proximity in the space of intrinsic curves. The same concepts are applied to model occlusions explicitly via a regularization term in the energy function. Finally, a global optimization stage is performed using belief-propagation to assign one of the disparity hypotheses to each pixel. By searching only through a small fraction of the whole disparity search space and handling occlusions and ambiguities, the proposed framework could achieve high levels of accuracy and efficiency.

  16. Binocular Vision

    PubMed Central

    Blake, Randolph; Wilson, Hugh

    2010-01-01

    This essay reviews major developments, empirical and theoretical, in the field of binocular vision during the last 25 years. We limit our survey primarily to work on human stereopsis, binocular rivalry and binocular contrast summation, with discussion where relevant of single-unit neurophysiology and human brain imaging. We identify several key controversies that have stimulated important work on these problems. In the case of stereopsis those controversies include position versus phase encoding of disparity, dependence of disparity limits on spatial scale, role of occlusion in binocular depth and surface perception, and motion in 3D. In the case of binocular rivalry, controversies include eye versus stimulus rivalry, role of "top-down" influences on rivalry dynamics, and the interaction of binocular rivalry and stereopsis. Concerning binocular contrast summation, the essay focuses on two representative models that highlight the evolving complexity in this field of study. PMID:20951722

  17. Vision Screening

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Visi Screen OSS-C, marketed by Vision Research Corporation, incorporates image processing technology originally developed by Marshall Space Flight Center. Its advantage in eye screening is speed. Because it requires no response from a subject, it can be used to detect eye problems in very young children. An electronic flash from a 35 millimeter camera sends light into a child's eyes, which is reflected back to the camera lens. The photorefractor then analyzes the retinal reflexes generated and produces an image of the child's eyes, which enables a trained observer to identify any defects. The device is used by pediatricians, day care centers and civic organizations that concentrate on children with special needs.

  18. Robot Vision

    NASA Technical Reports Server (NTRS)

    Sutro, L. L.; Lerman, J. B.

    1973-01-01

    The operation of a system is described that was built both to model the vision of primate animals, including man, and to serve as a pre-prototype of a possible object recognition system. It was employed in a series of experiments to determine the practicability of matching left and right images of a scene to determine the range and form of objects. The experiments started with computer-generated random-dot stereograms as inputs and progressed through random-square stereograms to a real scene. The major problems were the elimination of spurious matches between the left and right views, and the interpretation of ambiguous regions: the left side of an object that can be viewed only by the left camera, and the right side of an object that can be viewed only by the right camera.

  19. Opportunity's Surroundings on Sol 1798 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11850 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11850

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo 180-degree view of the rover's surroundings during the 1,798th Martian day, or sol, of Opportunity's surface mission (Feb. 13, 2009). North is on top.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 111 meters (364 feet) southward on the preceding sol. Tracks from that drive recede northward in this view. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  20. Spirit Beside 'Home Plate,' Sol 1809 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11803 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11803

    NASA Mars Exploration Rover Spirit used its navigation camera to take the images assembled into this stereo, 120-degree view southward after a short drive during the 1,809th Martian day, or sol, of Spirit's mission on the surface of Mars (February 3, 2009).

    By combining images from the left-eye and right-eye sides of the navigation camera, the view appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    Spirit had driven about 2.6 meters (8.5 feet) that sol, continuing a clockwise route around a low plateau called 'Home Plate.' In this image, the rocks visible above the rover's solar panels are on the slope at the northern edge of Home Plate.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  1. Opportunity's Surroundings on Sol 1687 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11739 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11739

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, 360-degree view of the rover's surroundings on the 1,687th Martian day, or sol, of its surface mission (Oct. 22, 2008). The view appears three-dimensional when viewed through red-blue glasses.

    Opportunity had driven 133 meters (436 feet) that sol, crossing sand ripples up to about 10 centimeters (4 inches) tall. The tracks visible in the foreground are in the east-northeast direction.

    Opportunity's position on Sol 1687 was about 300 meters southwest of Victoria Crater. The rover was beginning a long trek toward a much larger crater, Endeavour, about 12 kilometers (7 miles) to the southeast.

    This panorama combines right-eye and left-eye views presented as cylindrical-perspective projections with geometric seam correction.

  2. Stereo cameras on the International Space Station

    NASA Astrophysics Data System (ADS)

    Sabbatini, Massimo; Visentin, Gianfranco; Collon, Max; Ranebo, Hans; Sunderland, David; Fortezza, Raimondo

    2007-02-01

    Three-dimensional media is a unique and efficient means to virtually visit or observe objects that cannot be easily reached otherwise, like the International Space Station. The advent of auto-stereoscopic displays and stereo projection systems is making stereo media available to larger audiences than the traditional communities of scientists and design engineers. It is foreseen that a major demand for 3D content shall come from the entertainment area. Taking advantage of European astronaut Thomas Reiter's six-month stay on the International Space Station, the Erasmus Centre uploaded to the ISS a newly developed, fully digital stereo camera, the Erasmus Recording Binocular. Testing the camera and its human interfaces in weightlessness, as well as accurately mapping the interior of the ISS, are the main objectives of the experiment, which has just been completed at the time of writing. The intent of this paper is to share with the readers the design challenges tackled in the development and operation of the ERB camera and to highlight some of the future plans the Erasmus Centre team has in the pipeline.

  3. Low Vision Aids and Low Vision Rehabilitation

    MedlinePlus

    ... The future will offer even more solutions. Newer technology for low vision aids: while low vision devices ... magnifiers have long been the standard in assistive technology, advances in consumer electronics are also improving quality ...

  4. Model for optimal parallax in stereo radar imagery

    NASA Technical Reports Server (NTRS)

    Pisaruck, M. A.; Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.

    1984-01-01

    Simulated stereo radar imagery is used to investigate parameters for a spaceborne imaging radar. Incidence angles ranging from small to intermediate to large are used with three digital terrain model areas representative of relatively flat, moderately rough, and mountainous terrain. The simulated radar imagery was evaluated by interpreters for ease of stereo perception and information content, and rank-ordered within each class of terrain. The interpreters' results are analyzed for trends between the height of a feature and either the parallax or the vertical exaggeration of a stereo pair. A model is developed which predicts the amount of parallax (or vertical exaggeration) an interpreter would desire for best stereo perception of a feature of a specific height. Results indicate that the selection of incidence angle and stereo intersection angle depends upon the relief of the terrain. Examples of the simulated stereo imagery are presented for a candidate spaceborne imaging radar having four selectable angles of incidence.
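
    The height-parallax relationship the interpreters were probing can be illustrated with a standard first-order relation for same-side radar stereo (relief displacement in a ground-range image is roughly h / tan(theta)). This is a textbook approximation, not necessarily the model developed in the paper:

    ```python
    import math

    def radar_stereo_parallax(h, inc1_deg, inc2_deg):
        """First-order same-side stereo parallax (same units as h) for a
        feature of height h imaged at two incidence angles.

        Each ground-range image displaces the feature toward the sensor by
        about h / tan(theta); the stereo parallax is the difference of the
        two displacements, so it grows with feature height and with the
        separation of the two incidence angles.
        """
        t1 = math.radians(inc1_deg)
        t2 = math.radians(inc2_deg)
        return abs(h / math.tan(t1) - h / math.tan(t2))

    # A 500 m peak imaged at 30 and 50 degrees incidence:
    p = radar_stereo_parallax(500.0, 30.0, 50.0)
    ```

    Dividing such a parallax by the feature height gives the vertical exaggeration an interpreter would experience, which is why angle selection must track the relief of the terrain.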

  5. Comparison of motion and stereo methods in passive ranging systems

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Suorsa, Raymond

    1991-01-01

    The authors compare range estimates in passive ranging systems using motion and stereo approaches. It is shown that an integrated approach is necessary to provide better range estimates over a field-of-view (FOV) of interest in helicopter flight. The recursive approach for processing a sequence of stereo images, described together with a recursive motion algorithm (RMA), provides the basis for an integrated method that yields more accurate range information. Results based on motion sequences of stereo images are presented.
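
    The benefit of integrating the two range sources can be conveyed with a single-snapshot inverse-variance fusion. The paper's recursive algorithms are Kalman-filter based; this is only the simplest illustration of why combining complementary estimates helps, with made-up numbers:

    ```python
    def fuse_ranges(z_stereo, var_stereo, z_motion, var_motion):
        """Combine two independent range estimates by inverse-variance
        weighting. Stereo and motion ranging have complementary error
        behaviour across the FOV, so the fused estimate is never worse
        than the better of the two inputs."""
        w_s = 1.0 / var_stereo
        w_m = 1.0 / var_motion
        z = (w_s * z_stereo + w_m * z_motion) / (w_s + w_m)
        var = 1.0 / (w_s + w_m)
        return z, var

    # Hypothetical values: stereo is tighter for this target direction.
    z, var = fuse_ranges(z_stereo=98.0, var_stereo=4.0,
                         z_motion=104.0, var_motion=16.0)
    ```

    The fused variance (3.2 here) is below both input variances, which is the core argument for an integrated method over either sensor cue alone.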

  6. Quasi-microscope concept for planetary missions - Stereo

    NASA Technical Reports Server (NTRS)

    Burcher, E. E.; Sinclair, A. R.; Huck, F. O.

    1978-01-01

    The quasi-microscope has been used for stereo pictures using a small aperture placed at the right and left over the entrance aperture. A 16-degree stereo view angle yields an enhanced stereo effect. When viewed upward through a transparent support, all grains come into focus. The technique may be used for determining mineral constituents on the basis of cleavage and fracture patterns, grain details, and surface slopes used in estimating single-particle albedo and the illumination of scattering profiles.

  7. PHOTOCOPY OF EARLY STEREO VIEW OF CARPENTERS' HALL. Date and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PHOTOCOPY OF EARLY STEREO VIEW OF CARPENTERS' HALL. Date and photographer unknown. Original in Carpenters' Hall - Carpenters' Company Hall, 320 Chestnut Street & Carpenters' Court, Philadelphia, Philadelphia County, PA

  8. Current state of the art of vision based SLAM

    NASA Astrophysics Data System (ADS)

    Muhammad, Naveed; Fofi, David; Ainouz, Samia

    2009-02-01

    The ability of a robot to localise itself and simultaneously build a map of its environment (Simultaneous Localisation and Mapping or SLAM) is a fundamental characteristic required for autonomous operation of the robot. Vision sensors are very attractive for application in SLAM because of their rich sensory output and cost effectiveness. Different issues are involved in the problem of vision based SLAM and many different approaches exist in order to solve these issues. This paper gives a classification of state-of-the-art vision based SLAM techniques in terms of (i) imaging systems used for performing SLAM, which include single cameras, stereo pairs, multiple camera rigs and catadioptric sensors, (ii) features extracted from the environment in order to perform SLAM, which include point features and line/edge features, (iii) initialisation of landmarks, which can either be delayed or undelayed, (iv) SLAM techniques used, which include Extended Kalman Filtering, Particle Filtering, biologically inspired techniques like RatSLAM, and other techniques like Local Bundle Adjustment, and (v) use of wheel odometry information. The paper also presents the implementation and analysis of stereo pair based EKF SLAM for synthetic data. Results show the technique working successfully in the presence of considerable sensor noise. We believe that the state of the art presented in this paper can serve as a basis for future research in the area of vision based SLAM, permitting further work in the area to be carried out in an efficient and application-specific way.
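
    The EKF-SLAM machinery named in (iv) boils down to a predict/update cycle over a joint robot-plus-landmark state. A minimal sketch follows, under strong simplifying assumptions (2-D position-only robot, one static landmark, a linear relative-position measurement; real EKF-SLAM adds heading, many landmarks, and linearized nonlinear measurement models):

    ```python
    import numpy as np

    # State s = [rx, ry, lx, ly]: robot position plus one point landmark.

    def ekf_predict(s, P, u, Q):
        """Robot moves by odometry u; the landmark is static."""
        F = np.eye(4)
        s = s + np.array([u[0], u[1], 0.0, 0.0])
        P = F @ P @ F.T
        P[:2, :2] += Q                  # process noise on the robot only
        return s, P

    def ekf_update(s, P, z, R):
        """Fuse a relative observation z = landmark - robot."""
        H = np.array([[-1.0, 0.0, 1.0, 0.0],
                      [0.0, -1.0, 0.0, 1.0]])
        y = z - H @ s                   # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
        s = s + K @ y
        P = (np.eye(4) - K @ H) @ P
        return s, P

    s = np.array([0.0, 0.0, 5.0, 5.0])
    P = np.diag([0.1, 0.1, 1.0, 1.0])
    s, P = ekf_predict(s, P, u=(1.0, 0.0), Q=0.05 * np.eye(2))
    s, P = ekf_update(s, P, z=np.array([4.1, 5.0]), R=0.1 * np.eye(2))
    ```

    Each update shrinks the landmark covariance and correlates it with the robot's pose uncertainty, which is exactly the coupling that lets later observations correct earlier odometry drift.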

  9. Web Based Interactive Anaglyph Stereo Visualization of 3D Model of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Han, J.

    2014-12-01

    The objectives of this study were to create an interactive online tool for generating and viewing anaglyph 3D stereo images in a Web browser via the Internet. To achieve this, we designed and developed a prototype system. Three-dimensional visualization is well known and has become popular in recent years as a way to understand a target object and the related physical phenomena. Geoscience data have a complex data model, which combines large extents with rich small-scale visual details, so real-time visualization of a 3D geoscience data model on the Internet is a challenging task. In this paper, we show that anaglyph 3D views of geoscience data can be created and viewed in any Web browser that supports WebGL. We developed an anaglyph image viewing prototype system, and some representative results are displayed as anaglyph 3D stereo images generated in red-cyan colour from pairs of air-photo/digital elevation model and geological map/digital elevation model data, respectively. The best viewing is achieved with suitable 3D red-cyan glasses, although red-blue or red-green spectacles can also be used. The middle mouse wheel can be used to zoom the anaglyph image in and out in the Web browser. Anaglyph 3D stereo imagery is an important and easy way to understand underground geologic systems and active tectonic geomorphology. Integrated strata with fine three-dimensional topography and geologic map data can help to characterise mineral potential areas and active tectonic anomalies. To conclude, anaglyph 3D stereo imagery provides a simple and feasible method to improve the relief effect of geoscience data such as geomorphology and geology. We believe that, with further development, the anaglyph 3D stereo imaging system could, as a complement to 3D geologic modeling, constitute a useful tool for better understanding of underground geology and active tectonics.

  10. Intertwining Of Teleoperation And Computer Vision

    NASA Astrophysics Data System (ADS)

    Bloom, B. C.; Duane, G. S.; Epstein, M. A.; Magee, M.; Mathis, D. W.; Nathan, M. J.; Wolfe, W. J.

    1987-01-01

    In the rapid pursuit of automation, it is sometimes overlooked that an elaborate human-machine interplay is still necessary, despite the fact that a fully automated system, by definition, would not require a human interface. In the future, real-time sensing, intelligent processing, and dextrous manipulation will become more viable, but until then it is necessary to use humans for many critical processes. It is not obvious, however, how automated subsystems could account for human intervention, especially if a philosophy of "pure" automation dominates the design. Teleoperation, by contrast, emphasizes the creation of hardware pathways (e.g., hand-controllers, exoskeletons) to quickly communicate low-level control data to various mechanisms, while providing sensory feedback in a format suitable for human consumption (e.g., stereo displays, force reflection), leaving the "intelligence" to the human. These differences in design strategy, both hardware and software, make it difficult to tie automation and teleoperation together, while allowing for graceful transitions at the appropriate times. In no area of artificial intelligence is this problem more evident than in computer vision. Teleoperation typically uses video displays (monochrome/color, monoscopic/stereo) with contrast enhancement and gain control without any digital processing of the images. However, increases in system performance such as automatic collision avoidance, path finding, and object recognition depend on computer vision techniques. Basically, computer vision relies on the digital processing of the images to extract low-level primitives such as boundaries and regions that are used in higher-level processes for object recognition and positioning. Real-time processing of complex environments is currently unattainable, but there are many aspects of the processing that are useful for situation assessment, provided it is understood that the human can assist in the more time-consuming steps.

  11. Analysis of Performance of Stereoscopic-Vision Software

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert

    2007-01-01

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
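
    How a 0.32-pixel disparity error turns into down-range error follows from the pinhole stereo relation Z = fB/d and its first-order derivative. The focal length and baseline below are hypothetical rover-like numbers, not the analyzed system's actual geometry:

    ```python
    def range_from_disparity(f_px, baseline_m, d_px):
        """Pinhole stereo range: Z = f * B / d (f in pixels, B in metres)."""
        return f_px * baseline_m / d_px

    def downrange_sigma(f_px, baseline_m, z_m, sigma_d_px):
        """First-order propagation of disparity noise into range error:
        dZ/dd = -Z^2 / (f * B), so sigma_Z ~= Z^2 * sigma_d / (f * B)."""
        return z_m ** 2 * sigma_d_px / (f_px * baseline_m)

    # Assumed geometry; sigma_d = 0.32 px is the measured value quoted above.
    f, B = 1000.0, 0.3      # focal length in pixels, baseline in metres
    z = range_from_disparity(f, B, d_px=30.0)
    sigma_z = downrange_sigma(f, B, z, sigma_d_px=0.32)
    ```

    The quadratic growth of sigma_Z with Z explains why disparity error dominates down-range (not cross-range) accuracy, as the analysis found.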

  12. International computer vision directory

    SciTech Connect

    Flora, P.C.

    1986-01-01

    This book contains information on computerized automation technologies. State-of-the-art computer vision systems for many areas of industrial use are covered. Other topics discussed include the following: automated inspection systems; robot/vision systems; vision process control; cameras (vidicon and solid state); vision peripherals and components; and pattern processors.

  13. Synchronized observations by using the STEREO and the largest ground-based decametre radio telescope

    NASA Astrophysics Data System (ADS)

    Konovalenko, A. A.; Stanislavsky, A. A.; Rucker, H. O.; Lecacheux, A.; Mann, G.; Bougeret, J.-L.; Kaiser, M. L.; Briand, C.; Zarka, P.; Abranin, E. P.; Dorovsky, V. V.; Koval, A. A.; Mel'nik, V. N.; Mukha, D. V.; Panchenko, M.

    2013-08-01

    We consider the approach to simultaneous (synchronous) solar observations of radio emission by using the STEREO-WAVES instruments (frequency range 0.125-16 MHz) and the largest ground-based low-frequency radio telescope. We illustrate it by the UTR-2 radio telescope implementation (10-30 MHz). The antenna system of the radio telescope is a T-shape-like array of broadband dipoles and is located near the village Grakovo in the Kharkiv region (Ukraine). The third observation point on the ground in addition to two space-based ones improves the space-mission performance capabilities for the determination of radio-emission source directivity. The observational results from the high sensitivity antenna UTR-2 are particularly useful for analysis of STEREO data in the condition of weak event appearances during solar activity minima. In order to improve the accuracy of flux density measurements, we also provide simultaneous observations with a large part of the UTR-2 radio telescope array and its single dipole close to the STEREO-WAVES antennas in sensitivity. This concept has been studied by comparing the STEREO data with ground-based records from 2007-2011 and shown to be effective. The capabilities will be useful in the implementation of new instruments (LOFAR, LWA, MWA, etc.) and during the future Solar Orbiter mission.

  14. Early detection of glaucoma using fully automated disparity analysis of the optic nerve head (ONH) from stereo fundus images

    NASA Astrophysics Data System (ADS)

    Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.

    2006-03-01

    Early detection of structural damage to the optic nerve head (ONH) is critical in diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability. Even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours by computing accumulated disparities in the disc and cup regions from stereo fundus image pairs has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming camera geometry. High correlation among computer-generated and manually segmented cup to disc ratios in a longitudinal study involving 159 stereo fundus image pairs has already been demonstrated. However, clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective and quantitative method for detection of ONH structural damage for early detection of glaucoma.
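
    The disparity computation at the heart of this kind of stereo analysis can be illustrated with a minimal sum-of-absolute-differences search along a scanline. This is a generic stand-in for the correlation step, far simpler than the accumulated-disparity algorithm the paper develops:

    ```python
    import numpy as np

    def sad_disparity(left_row, right_row, x, win, max_d):
        """Disparity at column x of a rectified scanline pair via
        sum-of-absolute-differences block matching: slide a window from
        the left image across candidate offsets in the right image and
        keep the offset with the lowest mismatch cost."""
        patch = left_row[x - win: x + win + 1].astype(float)
        best_d, best_cost = 0, float("inf")
        for d in range(0, max_d + 1):
            if x - d - win < 0:
                break
            cand = right_row[x - d - win: x - d + win + 1].astype(float)
            cost = np.abs(patch - cand).sum()
            if cost < best_cost:
                best_d, best_cost = d, cost
        return best_d

    # Synthetic scanline: the right view is the left view shifted by 3 px.
    left = np.zeros(32)
    left[14:18] = 1.0
    right = np.roll(left, -3)
    ```

    Accumulating such per-pixel disparities over the disc and cup regions is what yields the depth contrast between them, from which a cup-to-disc ratio can be derived.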

  15. FM Stereo and AM Stereo: Government Standard-Setting vs. the Marketplace.

    ERIC Educational Resources Information Center

    Huff, W. A. Kelly

    The emergence of frequency modulation or FM radio signals, which arose from the desire to free broadcasting of static noise common to amplitude modulation or AM, has produced the controversial development of stereo broadcasting. The resulting enhancement of sound quality helped FM pass AM in audience shares in less than two decades. The basic…

  16. Solar Eclipse Video Captured by STEREO-B

    NASA Technical Reports Server (NTRS)

    2007-01-01

    No human has ever witnessed a solar eclipse quite like the one captured on this video. The NASA STEREO-B spacecraft, managed by the Goddard Space Flight Center, was about a million miles from Earth on February 25, 2007, when it photographed the Moon passing in front of the Sun. The resulting movie looks like it came from an alien solar system. The fantastically-colored star is our own Sun as STEREO sees it in four wavelengths of extreme ultraviolet light. The black disk is the Moon. When we observe a lunar transit from Earth, the Moon appears to be the same size as the Sun, a coincidence that produces intoxicatingly beautiful solar eclipses. The silhouette STEREO-B saw, on the other hand, covered only a fraction of the Sun. The Moon seems small because of STEREO-B's location. The spacecraft circles the Sun in an Earth-like orbit, but it lags behind Earth by one million miles. This means STEREO-B is 4.4 times farther from the Moon than we are, so the Moon looks 4.4 times smaller. This version of the STEREO-B eclipse movie is a composite of data from the spacecraft's coronagraph and extreme ultraviolet imager. STEREO-B has a sister ship named STEREO-A. Both are on a mission to study the Sun. While STEREO-B lags behind Earth, STEREO-A orbits one million miles ahead ('B' for behind, 'A' for ahead). The gap is deliberate: it allows the two spacecraft to capture offset views of the Sun, and researchers can then combine the images to produce 3D stereo movies of solar storms. The two spacecraft were launched in October 2006 and reached their stations on either side of Earth in January 2007.
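
    The 4.4-times-smaller Moon follows directly from small-angle geometry: apparent angular size scales inversely with distance. A quick check, using the Moon's real mean radius and the mean Earth-Moon distance with the 4.4x distance factor from the description:

    ```python
    import math

    MOON_RADIUS_KM = 1737.4
    EARTH_MOON_KM = 384_400        # mean Earth-Moon distance

    def angular_diameter_deg(radius_km, distance_km):
        """Full angular diameter of a sphere seen from a given distance."""
        return 2.0 * math.degrees(math.atan(radius_km / distance_km))

    from_earth = angular_diameter_deg(MOON_RADIUS_KM, EARTH_MOON_KM)
    # STEREO-B trailed Earth by about a million miles, putting it roughly
    # 4.4 times farther from the Moon than an earthbound observer.
    from_stereo_b = angular_diameter_deg(MOON_RADIUS_KM, 4.4 * EARTH_MOON_KM)
    ```

    From Earth the Moon spans about half a degree, nearly matching the Sun; from STEREO-B it spans only about an eighth of a degree, hence the partial silhouette.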

  17. Small Orbital Stereo Tracking Camera Technology Development

    NASA Astrophysics Data System (ADS)

    Gagliano, L.; Bryan, T.; MacLeod, T.

    On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew; untracked small debris poses a major risk of MOD damage to ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions. Debris flux and size population need to be verified against ground RADAR tracking. Use of the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on board a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit optical tracking (in situ) of various sized objects against ground RADAR tracking and small-OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. Using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  18. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2016-01-01

    On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew; untracked small debris poses a major risk of MOD damage to ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions. Debris flux and size population need to be verified against ground RADAR tracking. Use of the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on board a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit optical tracking (in situ) of various sized objects against ground RADAR tracking and small-OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. Using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  19. 'Victoria' After Sol 950 Drive (Stereo)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA08778

    [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA08778 [figure removed for brevity, see original site] Cylindrical view for PIA08778

    A drive of about 30 meters (about 100 feet) on the 950th Martian day, or sol, of Opportunity's exploration of Mars' Meridiani Planum region (Sept. 25, 2006) brought the NASA rover to within about 20 meters (about 66 feet) of the rim of 'Victoria Crater.' From that position, the rover's navigation camera took the exposures combined into this stereo anaglyph, which appears three-dimensional when viewed through red-green glasses. The scalloped shape of the crater is visible on the left edge. Due to a small dune or ripple close to the nearest part of the rim, the scientists and engineers on the rover team planned on sol 951 to drive to the right of the ripple, but not quite all the way to the rim, then to proceed to the rim the following sol. The image is presented in cylindrical projection with geometric seam correction.

    Victoria Crater is about 800 meters (one-half mile) in diameter, about five times wider than 'Endurance Crater,' which Opportunity spent six months examining in 2004, and about 40 times wider than 'Eagle Crater,' where Opportunity first landed. The great lure of Victoria is the expectation that a thick stack of geological layers will be exposed in the crater walls, potentially several times the thickness that was previously studied at Endurance and therefore, potentially preserving several times the historical record.

  20. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; Macleod, Todd; Gagliano, Larry

    2015-01-01

    On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew; untracked small debris poses a major risk of MOD damage to ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions. Debris flux and size population need to be verified against ground RADAR tracking. Use of the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on board a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit optical tracking (in situ) of various sized objects against ground RADAR tracking and small-OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. Using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  1. On the architecture of the micro machine vision system

    NASA Astrophysics Data System (ADS)

    Li, Xudong; Wang, Xiaohao; Zhou, Zhaoying; Zong, Guanghua

    2006-01-01

    Micro machine vision system is an important part of a micromanipulating system which has been used widely in many fields. As research activities on micromanipulating systems go deeper, the micro machine vision system attracts more attention. In this paper, the micro machine vision system is treated as a kind of machine vision system with constraints and characteristics introduced by the specific application environment. Unlike the traditional machine vision system, a micro machine vision system usually does not aim at the reconstruction of the scene. It is introduced to obtain expected position information so that the manipulation can be accomplished accurately. The architecture of the micro machine vision system is proposed. The key issues related to a micro machine vision system, such as system layout, optical imaging device and vision system calibration, are discussed to explain the proposed architecture further. A task-oriented micro machine vision system for a biological micromanipulating system is shown as an example, which is in compliance with the proposed architecture.

  2. Dramatic Improvements to Feature Based Stereo

    NASA Technical Reports Server (NTRS)

    Smelyansky, V. N.; Morris, R. D.; Kuehnel, F. O.; Maluf, D. A.; Cheeseman, P.

    2004-01-01

    The camera registration extracted from feature based stereo is usually considered sufficient to accurately localize the 3D points. However, for natural scenes the feature localization is not as precise as in man-made environments. This results in small camera registration errors. We show that even very small registration errors result in large errors in dense surface reconstruction. We describe a method for registering entire images to the inaccurate surface model. This gives small, but crucially important improvements to the camera parameters. The new registration gives dramatically better dense surface reconstruction.

  3. STEREO's Extreme UltraViolet Imager (EUVI)

    NASA Technical Reports Server (NTRS)

    2007-01-01

    At a pixel resolution of 2048x2048, the STEREO EUVI instrument provides views of the Sun in ultraviolet light that rival the full-disk views of SOHO/EIT. This image is through the 171 Angstrom (ultraviolet) filter, which is characteristic of iron ions (missing eight and nine electrons) at 1 million degrees. There is a short data gap in the latter half of the movie that creates a freeze and then a jump in the data view. This is a movie of the Sun in 171 Angstrom ultraviolet light. The time frame is late January 2007.

  4. Surface Stereo Imager on Mars, Side View

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image is a view of NASA's Phoenix Mars Lander's Surface Stereo Imager (SSI) as seen by the lander's Robotic Arm Camera. This image was taken on the afternoon of the 116th Martian day, or sol, of the mission (September 22, 2008). The mast-mounted SSI, which provided the images used in the 360 degree panoramic view of Phoenix's landing site, is about 4 inches tall and 8 inches long.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  5. Venus surface roughness and Magellan stereo data

    NASA Technical Reports Server (NTRS)

    Maurice, Kelly E.; Leberl, Franz W.; Norikane, L.; Hensley, Scott

    1994-01-01

    Presented are results of some studies to develop tools useful for the analysis of Venus surface shape and its roughness. Actual work was focused on Maxwell Montes. The analyses employ data acquired by means of NASA's Magellan satellite. The work is primarily concerned with deriving measurements of the Venusian surface using Magellan stereo SAR. Roughness was considered by means of a theoretical analysis based on digital elevation models (DEM's), on single Magellan radar images combined with radiometer data, and on the use of multiple overlapping Magellan radar images from cycles 1, 2, and 3, again combined with collateral radiometer data.

  6. Developing stereo image based robot control system

    SciTech Connect

    Suprijadi,; Pambudi, I. R.; Woran, M.; Naa, C. F; Srigutomo, W.

    2015-04-16

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have increased rapidly with the increasing performance of hardware and microprocessors. Many fields of science and technology use these methods, especially medicine and instrumentation. New stereovision techniques that give a 3-dimensional image or movie are very interesting, but have few applications in control systems. A stereo image has pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled-robot control system using stereovision. The results show the robot moving automatically based on stereovision captures.
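
    One way a disparity map can drive a wheel command is to treat large disparity as a near obstacle and steer toward the farthest-looking region. The decision rule below is hypothetical, offered only to show the shape of such a controller, not the paper's method:

    ```python
    import numpy as np

    def choose_motion(disparity_map, stop_threshold=40.0):
        """Pick a wheel command from a dense disparity map.  Large
        disparity means a close obstacle, so steer toward the image third
        with the smallest (farthest) average disparity, or stop if every
        direction is blocked."""
        h, w = disparity_map.shape
        thirds = np.array([disparity_map[:, i * w // 3:(i + 1) * w // 3].mean()
                           for i in range(3)])
        if thirds.min() > stop_threshold:
            return "stop"
        return ("turn-left", "forward", "turn-right")[int(thirds.argmin())]

    # Obstacle on the right (high disparity) -> the robot turns left.
    d = np.full((10, 9), 5.0)
    d[:, 6:] = 60.0
    cmd = choose_motion(d)
    ```

    Because disparity, not metric range, feeds the rule, no camera calibration is needed for this crude behaviour, though calibrated range would allow real distance thresholds.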

  7. (Computer vision and robotics)

    SciTech Connect

    Jones, J.P.

    1989-02-13

    The traveler attended the Fourth Aalborg International Symposium on Computer Vision at Aalborg University, Aalborg, Denmark. The traveler presented three invited lectures, entitled "Concurrent Computer Vision on a Hypercube Multicomputer", "The Butterfly Accumulator and its Application in Concurrent Computer Vision on Hypercube Multicomputers", and "Concurrency in Mobile Robotics at ORNL", and a ten-minute editorial entitled "Is Concurrency an Issue in Computer Vision?". The traveler obtained information on current R&D efforts elsewhere in concurrent computer vision.

  8. CAD-model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Shapiro, Linda G.

    1988-01-01

    A pose acquisition system operating in space must perform well in a variety of applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The goal of this project is the construction of vision models and procedures directly from the CAD models. The system being designed and implemented must convert CAD models to vision models, predict visible features from a given viewpoint using the vision models, construct view classes representing views of the objects, and use the view-class model thus derived to rapidly determine the pose of an object from single images and/or stereo pairs.

  9. Enhanced operator perception through 3D vision and haptic feedback

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri, it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  10. Stereo Pair: Wellington, New Zealand

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Wellington, the capital city of New Zealand, is located on the shores of Port Nicholson, a natural harbor at the south end of North Island. The city was founded in 1840 by British emigrants and now has a regional population of more than 400,000 residents. As seen here, the natural terrain imposes strong control over the urban growth pattern (urban features generally appear gray or white in this view). Rugged hills generally rising to 300 meters (1,000 feet) help protect the city and harbor from strong winter winds.

    New Zealand is seismically active and faults are readily seen in the topography. The Wellington Fault forms the straight northwestern (left) shoreline of the harbor. Toward the southwest (down) the fault crosses through the city, then forms linear canyons in the hills before continuing offshore at the bottom. Toward the northeast (upper right) the fault forms the sharp mountain front along the northern edge of the heavily populated Hutt Valley.

    This stereoscopic image pair was generated using topographic data from the Shuttle Radar Topography Mission, combined with an enhanced true color Landsat 7 satellite image. The topography data are used to create two differing perspectives of a single image, one perspective for each eye. In doing so, each point in the image is shifted slightly, depending on its elevation. When stereoscopically merged, the result is a vertically exaggerated view of the Earth's surface in its full three dimensions.
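    The elevation-dependent shift described above can be sketched as a per-pixel horizontal displacement proportional to the DEM value, applied once in each direction to form the two perspectives. The scale factor and nearest-neighbor sampling here are illustrative assumptions, not the SRTM team's actual processing.

    ```python
    import numpy as np

    def parallax_shift(image, dem, scale=0.05, sign=+1):
        """Shift each pixel horizontally by sign * scale * elevation."""
        h, w = image.shape
        out = np.zeros_like(image)
        for r in range(h):
            for c in range(w):
                src = c - int(round(sign * scale * dem[r, c]))
                if 0 <= src < w:
                    out[r, c] = image[r, src]
        return out

    # Toy example: flat 20 m terrain shifts every pixel by one column.
    image = np.arange(25, dtype=float).reshape(5, 5)
    dem = np.full((5, 5), 20.0)
    left = parallax_shift(image, dem, sign=+1)
    right = parallax_shift(image, dem, sign=-1)
    ```

    On real terrain the shift varies per pixel, which is what produces the depth impression when the two views are fused.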

    Landsat satellites have provided visible light and infrared images of the Earth continuously since 1972. SRTM topographic data match the 30 meter (99 foot) spatial resolution of most Landsat images and will provide a valuable complement for studying the historic and growing Landsat data archive. The Landsat 7 Thematic Mapper image used here was provided to the SRTM project by the United States Geological Survey, Earth Resources Observation Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data

  11. SRTM Stereo Pair: Fiji Islands

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The Sovereign Democratic Republic of the Fiji Islands, commonly known as Fiji, is an independent nation consisting of some 332 islands surrounding the Koro Sea in the South Pacific Ocean. This topographic image shows Viti Levu, the largest island in the group. With an area of 10,429 square kilometers (about 4000 square miles), it comprises more than half the area of the Fiji Islands. Suva, the capital city, lies on the southeast shore. The Nakauvadra, the rugged mountain range running from north to south, has several peaks rising above 900 meters (about 3000 feet). Mount Tomanivi, in the upper center, is the highest peak at 1324 meters (4341 feet). The distinct circular feature on the north shore is the Tavua Caldera, the remnant of a large shield volcano that was active about 4 million years ago. Gold has been mined on the margin of the caldera since the 1930s. The Nadrau plateau is the low relief highland in the center of the mountain range. The coastal plains in the west, northwest and southeast account for only 15 percent of Viti Levu's area but are the main centers of agriculture and settlement.

    This stereoscopic view was generated using preliminary topographic data from the Shuttle Radar Topography Mission. A computer-generated artificial light source illuminates the elevation data from the top (north) to produce a pattern of light and shadows. Slopes facing the light appear bright, while those facing away are shaded. Also, colors show the elevation as measured by SRTM. Colors range from green at the lowest elevations to pink at the highest elevations. This image contains about 1300 meters (4300 feet) of total relief. The stereoscopic effect was created by first draping the shading and colors back over the topographic data and then generating two differing perspectives, one for each eye. The 3-D perception is achieved by viewing the left image with the right eye and the right image with the left eye (cross-eyed viewing), or by downloading and printing the
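    The shaded-relief step described above is standard Lambertian hillshading: surface normals derived from DEM gradients are dotted with a light vector from the chosen azimuth and altitude. This is a generic sketch, not SRTM's exact pipeline; the cell size and the rows-run-south axis convention are assumptions.

    ```python
    import numpy as np

    def hillshade(dem, cellsize=30.0, azimuth_deg=0.0, altitude_deg=45.0):
        """Return Lambertian shading in [0, 1]; azimuth 0 = light from the north."""
        dzdy, dzdx = np.gradient(dem, cellsize)  # gradients along rows, columns
        az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
        # Unit light vector (x = east, y = north, z = up); rows assumed to run south.
        lx = np.sin(az) * np.cos(alt)
        ly = np.cos(az) * np.cos(alt)
        lz = np.sin(alt)
        norm = np.sqrt(dzdx ** 2 + dzdy ** 2 + 1.0)
        return np.clip((-dzdx * lx - dzdy * ly + lz) / norm, 0.0, 1.0)

    # Flat terrain shades uniformly to sin(altitude).
    flat = hillshade(np.zeros((4, 4)))
    ```

    Slopes facing the light approach 1 (bright) and slopes facing away approach 0 (shaded), matching the description in the caption.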

  12. Expert system modeling of a vision system

    NASA Astrophysics Data System (ADS)

    Reihani, Kamran; Thompson, Wiley E.

    1992-05-01

    The proposed artificial intelligence-based vision model incorporates natural recognition processes depicted as a visual pyramid and a hierarchical representation of objects in the database. The visual pyramid, with base and apex representing pixels and the image, respectively, is used as an analogy for a vision system. This paper provides an overview of recognition activities and states in the framework of an inductive model. Also, it presents a natural vision system and a counterpart expert system model that incorporates the described operations.

  13. A common interface for stereo viewing in various environments

    NASA Astrophysics Data System (ADS)

    Pariser, Oleg; Deen, Robert G.

    2009-02-01

    This paper presents a graphical software infrastructure for stereo display. It enables the development of low-cost, short-development-cycle stereo applications that are portable - not only across platforms, but across display types as well. Moreover, it allows not just images but entire GUIs (Graphical User Interfaces) to be displayed in stereo consistently across many platforms. Java Advanced Display Infrastructure for Stereo (JADIS) provides a common interface for displaying GUI components in stereo using either specialized stereo display hardware (e.g. liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard computer displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience (anaglyphs) without sacrificing high-quality display on dedicated hardware. JADIS has been released as Open Source and is available via the Open Channel Foundation website[1]. It has been integrated into several applications for stereo viewing and processing of data acquired by current and future NASA Mars surface missions (e.g. Mars Exploration Rover (MER), Phoenix Lander, Mars Science Laboratory (MSL)).
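    The anaglyph mode mentioned above boils down to routing the two eye views into complementary color channels of one image. The sketch below is a generic red/cyan composition, not the JADIS API.

    ```python
    import numpy as np

    def anaglyph(left_gray, right_gray):
        """Combine two grayscale views into one red/cyan anaglyph RGB image."""
        h, w = left_gray.shape
        rgb = np.zeros((h, w, 3), dtype=left_gray.dtype)
        rgb[..., 0] = left_gray    # red channel: left eye
        rgb[..., 1] = right_gray   # green channel: right eye
        rgb[..., 2] = right_gray   # blue channel: right eye
        return rgb

    l = np.full((2, 2), 200, dtype=np.uint8)
    r = np.full((2, 2), 50, dtype=np.uint8)
    img = anaglyph(l, r)
    ```

    Red/blue glasses then deliver each channel to the matching eye; dedicated stereo hardware instead presents the two views full-color and time- or polarization-multiplexed, which is why a common interface over both is useful.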

  14. Unexpected spatial intensity distributions and onset timing of solar electron events observed by closely spaced STEREO spacecraft

    NASA Astrophysics Data System (ADS)

    Klassen, A.; Dresing, N.; Gómez-Herrero, R.; Heber, B.; Müller-Mellin, R.

    2016-09-01

    We present multi-spacecraft observations of four solar electron events using measurements from the Solar Electron Proton Telescope (SEPT) and the Electron Proton Helium INstrument (EPHIN) on board the STEREO and SOHO spacecraft, respectively, occurring between 11 October 2013 and 1 August 2014, during the approaching superior conjunction period of the two STEREO spacecraft. At this time the longitudinal separation angle between STEREO-A (STA) and STEREO-B (STB) was less than 72°. The parent particle sources (flares) of the four investigated events were situated close to, in between, or to the west of the STEREOs' magnetic footpoints. The STEREO measurements revealed a strong difference in electron peak intensities (factor ≤12), showing unexpected intensity distributions at 1 AU, although the two spacecraft had nominally nearly the same angular magnetic footpoint separation from the flaring active region (AR), or their magnetic footpoints were both situated eastwards of the parent particle source. Furthermore, the events detected by the two STEREO spacecraft imply a strongly unexpected onset timing with respect to each other: the spacecraft magnetically best connected to the flare detected a later arrival of electrons than the other one. This leads us to suggest the concept of a rippled peak intensity distribution at 1 AU formed by narrow peaks (fingers) superposed on a quasi-uniform Gaussian distribution. Additionally, two of the four investigated solar energetic particle (SEP) events show a so-called circumsolar distribution, and their characteristics make it plausible to suggest a two-component particle injection scenario forming an unusual, non-uniform intensity distribution at 1 AU.

  15. Feasibility of remote evaporation and precipitation estimates. [by stereo images

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.

    1974-01-01

    Remote sensing by means of stereo images obtained from flown cameras and scanners provides the potential to monitor the dynamics of pollutant mixing over large areas. Moreover, stereo technology may permit monitoring of pollutant concentration and mixing with sufficient detail to ascertain the structure of a polluted air mass. Consequently, stereo remote systems can be employed to supply data for setting adequate regional standards on air quality. A method of remote sensing using stereo images is described. Preliminary results concerning the planar extent of a plume, based on comparison with ground measurements by an alternate method (e.g., a remote hot-wire anemometer technique), support the feasibility of using stereo remote sensing systems.

  16. Characteristics of stereo reproduction with parametric loudspeakers

    NASA Astrophysics Data System (ADS)

    Aoki, Shigeaki; Toba, Masayoshi; Tsujita, Norihisa

    2012-05-01

    A parametric loudspeaker utilizes nonlinearity of a medium and is known as a super-directivity loudspeaker. The parametric loudspeaker is one of the prominent applications of nonlinear ultrasonics. So far, applications have been limited to monaural reproduction sound systems for public address in museums, stations, streets, etc. In this paper, we discuss characteristics of stereo reproduction with two parametric loudspeakers by comparing them with those of two ordinary dynamic loudspeakers. In subjective tests, three typical listening positions were selected to investigate the possibility of correct sound localization in a wide listening area. The binaural information was ILD (Interaural Level Difference) or ITD (Interaural Time Delay). The parametric loudspeaker was an equilateral hexagon; the inner and outer diameters were 99 and 112 mm, respectively. Signals were 500 Hz, 1 kHz, 2 kHz and 4 kHz pure tones and pink noise. Three young males listened to test signals 10 times in each listening condition. Subjective test results showed that listeners at the three typical listening positions perceived correct sound localization of all signals using the parametric loudspeakers. Localization was almost similar to that with the ordinary dynamic loudspeakers, except in the case of sinusoidal waves with ITD. We determined that the parametric loudspeaker could eliminate the conflict between the binaural cues ILD and ITD that occurs in stereo reproduction with ordinary dynamic loudspeakers, because the super directivity of the parametric loudspeaker suppresses the crosstalk components.
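    The two binaural cues tested above can be applied to a mono signal very simply: ILD scales one channel's amplitude, and ITD delays one channel by a fraction of a millisecond. The values below are illustrative assumptions, not the authors' stimulus parameters.

    ```python
    def apply_ild(sample, level_db):
        """Return (left, right) with the right channel attenuated by level_db dB."""
        gain = 10.0 ** (-level_db / 20.0)
        return sample, sample * gain

    def itd_delay_samples(itd_seconds, sample_rate):
        """Convert an interaural time delay to a whole number of samples."""
        return round(itd_seconds * sample_rate)

    left, right = apply_ild(1.0, 6.0)           # 6 dB level difference
    delay = itd_delay_samples(0.0005, 48000)    # 0.5 ms delay at 48 kHz
    ```

    Crosstalk matters because each cue only works if each ear hears mostly its own channel; the super-directive beam of a parametric loudspeaker enforces that separation acoustically.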

  17. Stereo matching using epipolar distance transform.

    PubMed

    Yang, Qingxiong; Ahuja, Narendra

    2012-10-01

    In this paper, we propose a simple but effective image transform, called the epipolar distance transform, for matching low-texture regions. It converts image intensity values to a relative location inside a planar segment along the epipolar line, such that pixels in low-texture regions become distinguishable. We theoretically prove that the transform is affine invariant, so the transformed images can be used directly for stereo matching. Any existing stereo algorithm can be applied directly to the transformed images to improve reconstruction accuracy for low-texture regions. Results on real indoor and outdoor images demonstrate the effectiveness of the proposed transform for matching low-texture regions, keypoint detection, and description for low-texture scenes. Our experimental results on Middlebury images also demonstrate the robustness of our transform for highly textured scenes. The proposed transform has a further advantage: low computational complexity. It was tested on a MacBook Air laptop computer with a 1.8 GHz Core i7 processor, at a speed of about 9 frames per second for a video-graphics-array-sized image. PMID:22801509
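    The "existing stereo algorithms" the transformed images can feed include even the simplest winner-take-all block matcher: for each pixel, pick the disparity whose patch in the right image best matches the left patch under a sum-of-squared-differences cost. This is a toy sketch of such a matcher, not the paper's method.

    ```python
    import numpy as np

    def ssd_disparity(left, right, max_disp=4, radius=1):
        """Winner-take-all disparity map: left[y, x] matches right[y, x - d]."""
        h, w = left.shape
        disp = np.zeros((h, w), dtype=int)
        for y in range(radius, h - radius):
            for x in range(radius + max_disp, w - radius):
                patch = left[y - radius:y + radius + 1, x - radius:x + radius + 1]
                costs = [
                    np.sum((patch - right[y - radius:y + radius + 1,
                                          x - d - radius:x - d + radius + 1]) ** 2)
                    for d in range(max_disp + 1)
                ]
                disp[y, x] = int(np.argmin(costs))
        return disp

    # Synthetic pair: the left image is the right image shifted by 2 pixels.
    rng = np.random.default_rng(0)
    right = rng.random((8, 12))
    left = np.zeros_like(right)
    left[:, 2:] = right[:, :-2]
    disp = ssd_disparity(left, right)
    ```

    On low-texture regions the SSD cost is nearly flat across disparities, which is exactly the ambiguity the epipolar distance transform is designed to remove before matching.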

  18. Infrared stereo camera for human machine interface

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Vaden, Justin; Chenault, David

    2012-06-01

    Improved situational awareness results not only from improved performance of imaging hardware, but also when the operator and human factors are considered. Situational awareness for IR imaging systems frequently depends on the contrast available. A significant improvement in effective contrast for the operator can result when depth perception is added to the display of IR scenes. Depth perception through flat panel 3D displays is now possible due to the number of 3D displays entering the consumer market. Such displays require appropriate and human-friendly stereo IR video input in order to be effective in the dynamic military environment. We report on a stereo IR camera that has been developed for integration onto an unmanned ground vehicle (UGV). The camera has auto-convergence capability that significantly reduces ill effects due to image doubling, minimizes focus-convergence mismatch, and eliminates the need for the operator to manually adjust camera properties. Discussion of the size, weight, and power requirements as well as integration onto the robot platform will be given, along with a description of stand-alone operation.
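    Auto-convergence amounts to toeing the two cameras in so their optical axes intersect at the target distance. The geometry below is a back-of-envelope sketch with hypothetical baseline and distance values, not the camera's actual control law.

    ```python
    import math

    def convergence_angle_deg(baseline_m, distance_m):
        """Full convergence angle for two cameras aimed at a point straight ahead."""
        return math.degrees(2.0 * math.atan((baseline_m / 2.0) / distance_m))

    # 6 cm baseline, target 3 m ahead: each camera toes in by half this angle.
    angle = convergence_angle_deg(0.06, 3.0)
    ```

    As the target recedes the angle goes to zero (parallel axes), and the image doubling that uncorrected convergence causes on a 3D display disappears.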

  19. Deep 'Stone Soup' Trenching by Phoenix (Stereo)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Digging by NASA's Phoenix Mars Lander on Aug. 23, 2008, during the 88th sol (Martian day) since landing, reached a depth about three times greater than in any other trench Phoenix has excavated. The deep trench, informally called 'Stone Soup', is at the borderline between two of the polygon-shaped hummocks that characterize the arctic plain where Phoenix landed.

    Stone Soup is in the center foreground of this stereo view, which appears three dimensional when seen through red-blue glasses. The view combines left-eye and right-eye images taken by the lander's Surface Stereo Imager on Sol 88 after the day's digging. The trench is about 25 centimeters (10 inches) wide and about 18 centimeters (7 inches) deep.

    When digging trenches near polygon centers, Phoenix has hit a layer of icy soil, as hard as concrete, about 5 centimeters or 2 inches beneath the ground surface. In the Stone Soup trench at a polygon margin, the digging has not yet hit an icy layer like that.

    Stone Soup is toward the left, or west, end of the robotic arm's work area on the north side of the lander.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  20. Opportunity's Surroundings After Sol 1820 Drive (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11841 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11841

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009).

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.