Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments
Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun
2017-01-01
In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem, and it is even more difficult for images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer from attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination source close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then used as the input images for stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out a real robot manipulation task. PMID:28629139
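As a rough illustration of this kind of model-based descattering, a minimal sketch under a generic additive scattering assumption I = J·t + B (in the spirit of dehazing; not necessarily the authors' exact model, and all names are placeholders):

```python
import numpy as np

def descatter(img, backscatter, transmission, eps=1e-6):
    """Invert a simple scattering model I = J*t + B, where I is the
    observed image, J the scene radiance, t the transmission map, and
    B the (non-uniform) backscatter from the active light source.
    All inputs are float arrays in [0, 1] with the same shape."""
    j = (img - backscatter) / np.maximum(transmission, eps)
    return np.clip(j, 0.0, 1.0)
```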
Object tracking with stereo vision
NASA Technical Reports Server (NTRS)
Huber, Eric
1994-01-01
A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.
Monocular Stereo Measurement Using High-Speed Catadioptric Tracking
Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku
2017-01-01
This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
The contribution of stereo vision to the control of braking.
Tijtgat, Pieter; Mazyn, Liesbeth; De Laey, Christophe; Lenoir, Matthieu
2008-03-01
In this study the contribution of stereo vision to the control of braking in front of a stationary target vehicle was investigated. Participants with normal (StereoN) and weak (StereoW) stereo vision drove a go-cart along a linear track towards a stationary vehicle. They could start braking from a distance of 4, 7, or 10 m from the vehicle. Deceleration patterns were measured by means of a laser. A lack of stereo vision was associated with an earlier onset of braking, but the duration of the braking manoeuvre was similar. During the deceleration, peak deceleration occurred earlier in drivers with weak stereo vision. Stopping distance was greater in those lacking stereo vision. A lack of stereo vision was thus associated with more prudent braking behaviour, in which the driver allowed for a larger safety margin. This compensation might be caused either by an unconscious adaptation of the human perceptuo-motor system, or by a systematic underestimation of the remaining distance due to the lack of stereo vision. In general, a lack of stereo vision did not seem to increase the risk of rear-end collisions.
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which require a large number of images with varied projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique based on dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision
NASA Astrophysics Data System (ADS)
Gai, Qiyang
2018-01-01
Stereo matching is one of the key steps of 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper adopts a combination of the epipolar constraint and the ant colony algorithm. The epipolar constraint is used to reduce the search range, and an ant colony algorithm is then used to optimize the stereo matching feature search function within that range. Through an analysis model of the ant colony algorithm's stereo matching optimization process, a globally optimized solution of stereo matching in 3D reconstruction based on binocular vision is realized. The simulation results show that by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo matching search range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo matching process are improved.
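A toy sketch of how an ant colony algorithm can optimize disparity selection along one epipolar-constrained scanline (the pheromone/heuristic parameters and the smoothness term are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def aco_scanline_disparity(costs, n_ants=20, n_iters=30,
                           alpha=1.0, beta=2.0, rho=0.1, seed=0):
    """Ant colony optimisation of a disparity assignment along one
    rectified scanline (the epipolar constraint reduces matching to 1D).
    costs: (W, D) matching cost per pixel/disparity pair.
    Returns one disparity index per pixel. Illustrative only."""
    rng = np.random.default_rng(seed)
    W, D = costs.shape
    heur = 1.0 / (costs + 1e-6)          # heuristic: prefer low cost
    pher = np.ones((W, D))               # pheromone trails
    best_path, best_score = None, np.inf
    for _ in range(n_iters):
        for _ant in range(n_ants):
            path, score, prev = np.empty(W, dtype=int), 0.0, None
            for x in range(W):
                p = (pher[x] ** alpha) * (heur[x] ** beta)
                if prev is not None:     # soft smoothness prior
                    p = p / (1.0 + np.abs(np.arange(D) - prev))
                p /= p.sum()
                d = rng.choice(D, p=p)
                path[x], prev = d, d
                score += costs[x, d]
            if score < best_score:
                best_score, best_path = score, path.copy()
        pher *= (1.0 - rho)              # evaporation
        for x in range(W):               # reinforce the best ant's path
            pher[x, best_path[x]] += 1.0 / (1.0 + best_score)
    return best_path
```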
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-01
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
Read, J C A
2015-01-01
Binocular stereopsis, or stereo vision, is the ability to derive information about how far away objects are, based solely on the relative positions of the object in the two eyes. It depends on both sensory and motor abilities. In this review, I briefly outline some of the neuronal mechanisms supporting stereo vision, and discuss how these are disrupted in strabismus. I explain, in some detail, current methods of assessing stereo vision and their pros and cons. Finally, I review the evidence supporting the clinical importance of such measurements. PMID:25475234
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
Binocular stereo vision is an important form of computer vision and a hot and difficult research topic, with broad application prospects in fields such as aerial mapping, visual navigation, motion analysis, and industrial inspection. In this paper, binocular stereo camera calibration, image feature extraction, and stereo matching are studied. In the camera calibration module, the intrinsic parameters of a single camera are obtained using Zhang Zhengyou's checkerboard method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After the feature points are matched, the correspondence between matching points and 3D object points can be established using the calibrated camera parameters, yielding the 3D information.
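For reference, the SGBM matching named in this abstract can be run in a few lines with OpenCV (parameters and file names below are illustrative placeholders, not the paper's settings):

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-Global Block Matching; P1/P2 penalize small/large disparity
# changes between neighbouring pixels.
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,      # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)
# compute() returns fixed-point disparities scaled by 16
disparity = sgbm.compute(left, right).astype("float32") / 16.0
```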
Near real-time stereo vision system
NASA Technical Reports Server (NTRS)
Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)
1993-01-01
The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system comprises two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids in as little as 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention, with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
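A minimal software sketch of the bandpass (Laplacian) pyramid construction named above, using OpenCV; the patented system computes this in dedicated image-processing hardware:

```python
import cv2

def laplacian_pyramid(img, levels=4):
    """Build a bandpass (Laplacian) pyramid: each level is the
    difference between a Gaussian level and the upsampled
    next-coarser level."""
    pyr = []
    cur = img
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cv2.subtract(cur, up))   # bandpass residual
        cur = down
    pyr.append(cur)                          # coarsest Gaussian level
    return pyr
```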
Stereo vision with distance and gradient recognition
NASA Astrophysics Data System (ADS)
Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu
2007-12-01
Robot vision technology is needed for stable walking, object recognition, and movement to a target spot. Sensors using infrared rays or ultrasound can help a robot cope with urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot far more powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed to continue without failure. This study developed an algorithm that recognizes the distance and gradient of the environment through a stereo matching process.
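Distance recognition of this kind rests on the standard rectified-stereo relation Z = f·B/d; a minimal sketch with illustrative numbers:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard rectified-stereo relation Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# e.g. f = 700 px, B = 0.12 m, d = 14 px  ->  Z = 6.0 m
print(depth_from_disparity(14.0, 700.0, 0.12))
```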
The study of stereo vision technique for the autonomous vehicle
NASA Astrophysics Data System (ADS)
Li, Pei; Wang, Xi; Wang, Jiang-feng
2015-08-01
Stereo vision technology using two or more cameras can recover 3D information from the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle judge the pavement conditions within the field of view and measure the obstacles on the road. In this paper, stereo vision for obstacle avoidance measurement in autonomous vehicles is studied, and the key techniques are analyzed and discussed. The system hardware is built and the software debugged, and the measurement performance is then illustrated with measured data. Experiments show that the 3D structure within the field of view can be reconstructed effectively by stereo vision, providing the basis for pavement condition judgment. Compared with the navigation radar used in unmanned vehicle measuring systems, the stereo vision system has advantages in cost and measurement range, and it has good application prospects.
Stereo 3-D Vision in Teaching Physics
ERIC Educational Resources Information Center
Zabunov, Svetoslav
2012-01-01
Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan
2017-06-01
The Chang'e-3 was the first lunar soft-landing probe of China. It was composed of the lander and the lunar rover. The Chang'e-3 successfully landed in the northwest of the Mare Imbrium on December 14, 2013. The lunar rover carried out movement, imaging, and geological survey tasks after landing. The lunar rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism, and the inertial measurement unit (IMU). The Navcam system was composed of two cameras with fixed focal lengths. The mast mechanism was a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEM) of the surrounding region, and plan the moving paths of the lunar rover. The stereo vision system must be calibrated before use. A control field could be built to calibrate the stereo vision system in a laboratory on Earth. However, the parameters of the stereo vision system would change after the launch, the orbital changes, the braking, and the landing. Therefore, the stereo vision system had to be self-calibrated on the moon. An integrated self-calibration method based on bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. With the proposed method, the stereo vision system can be self-calibrated in the unknown lunar environment and all parameters can be estimated simultaneously. The experiment was conducted in a ground lunar simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method, and the weighted least-squares method. The results show that the accuracy of the proposed method is superior to those of the other methods. Finally, the proposed method was used in practice to self-calibrate the stereo vision system of the Chang'e-3 lunar rover on the moon.
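A heavily simplified sketch of the bundle-block-adjustment idea, with each camera reduced to a Rodrigues rotation, a translation, and a focal length (the paper's self-calibration additionally models distortion and the mast joint parameters):

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs):
    """Residuals for a toy bundle adjustment: cameras are 7-vectors
    (3 Rodrigues rotation + 3 translation + 1 focal), points are 3D,
    obs holds observed pixel coordinates (u, v) per observation."""
    cams = params[:n_cams * 7].reshape(n_cams, 7)
    pts = params[n_cams * 7:].reshape(n_pts, 3)
    res = []
    for c, p, uv in zip(cam_idx, pt_idx, obs):
        rvec, t, f = cams[c, :3], cams[c, 3:6], cams[c, 6]
        theta = np.linalg.norm(rvec) + 1e-12
        k = rvec / theta
        x = pts[p]
        # Rodrigues rotation of the point into the camera frame
        xr = (x * np.cos(theta) + np.cross(k, x) * np.sin(theta)
              + k * k.dot(x) * (1 - np.cos(theta))) + t
        res.extend(f * xr[:2] / xr[2] - uv)   # pinhole projection error
    return np.asarray(res)

# All parameters are estimated simultaneously, e.g.:
# sol = least_squares(reprojection_residuals, x0,
#                     args=(n_cams, n_pts, cam_idx, pt_idx, obs))
```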
Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun
2011-01-01
In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm. PMID:22319408
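A condensed sketch of the detection and triangulation steps described here (Hough parameters are illustrative assumptions; P1 and P2 are the 3x4 projection matrices obtained from calibration):

```python
import cv2
import numpy as np

def find_target(gray):
    """Detect a circular target in one view (parameters illustrative)."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=50, param1=100, param2=30)
    return None if circles is None else circles[0, 0, :2]   # (x, y)

def triangulate(P1, P2, ptL, ptR):
    """Recover the 3D point from matched left/right pixel coordinates."""
    X = cv2.triangulatePoints(P1, P2,
                              ptL.reshape(2, 1).astype(np.float32),
                              ptR.reshape(2, 1).astype(np.float32))
    return (X[:3] / X[3]).ravel()   # homogeneous -> Euclidean
```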
Rapid matching of stereo vision based on fringe projection profilometry
NASA Astrophysics Data System (ADS)
Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei
2016-09-01
Stereo matching is the core of stereo vision, but many problems in it remain to be solved. For smooth surfaces from which feature points are not easy to extract, this paper adds a projector to the stereo vision measurement system and applies fringe projection techniques: corresponding points extracted from the left and right camera images share the same phase, which enables rapid stereo matching. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method can broaden the application fields of optical 3D measurement technology and enrich the knowledge in this field, and it also offers a path toward commercialized measurement systems for practical projects, giving it significant scientific and economic value.
Research on the feature set construction method for spherical stereo vision
NASA Astrophysics Data System (ADS)
Zhu, Junchao; Wan, Li; Röning, Juha; Feng, Weijia
2015-01-01
Spherical stereo vision is a kind of stereo vision system built with fish-eye lenses, for which the stereo algorithms must conform to the spherical model. Epipolar geometry describes the relationship between the two imaging planes of the cameras in a stereo vision system based on the perspective projection model. However, an epipolar locus in an uncorrected fish-eye image is not a line but an arc intersecting at the poles: a polar curve. In this paper, the theory of nonlinear epipolar geometry is explored and a method of nonlinear epipolar rectification is proposed to eliminate the vertical parallax between two fish-eye images. The Maximally Stable Extremal Region (MSER) detector uses grayscale as the independent variable and takes local extrema of the area variation as detections. It has been demonstrated in the literature that MSER depends only on the gray-level variations of images, and not on local structural characteristics or image resolution. Here, MSER is combined with the nonlinear epipolar rectification method proposed in this paper. The intersection of the rectified epipolar curve and the corresponding MSER region is taken as the feature set of spherical stereo vision. Experiments show that this study achieved the expected results.
An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)
2010-03-01
An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems performing 3D motion capture in real time. This integrated technique is a two-step process. Many researchers have successfully dealt with the problem of camera calibration by taking images of a 2D calibration pattern.
The research on calibration methods of dual-CCD laser three-dimensional human face scanning system
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong
2013-09-01
In this paper, building on the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select a reference camera coordinate system as the world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification, from which the corresponding epipolar equations of the two cameras can be defined. Using the trigonometric parallax method, we can measure the position of a space point after distortion correction and achieve stereo matching calibration between two image points. Experiments verify that this method improves accuracy while guaranteeing system stability. The stereo matching calibration has a simple, low-cost process and simplifies regular maintenance. It can acquire 3D coordinates from planar checkerboard calibration alone, without the need to design a specific standard target or use an electronic theodolite. It was found during the experiments that two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, which combines active line-laser scanning and binocular stereo vision, has the advantages of both and wider applicability. Theoretical analysis and experiments show that the method is sound.
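A minimal sketch of the epipolar rectification step with OpenCV (inputs are assumed to come from per-camera calibration; this is generic usage, not the authors' code):

```python
import cv2

def rectify_pair(imgL, imgR, K1, D1, K2, D2, R, T):
    """Epipolar rectification: K1/K2 are intrinsics, D1/D2 distortion
    coefficients, R/T the pose of camera 2 w.r.t. camera 1. After
    remapping, corresponding points lie on the same image row."""
    size = (imgL.shape[1], imgL.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2,
                                                size, R, T)
    m1 = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    m2 = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    return (cv2.remap(imgL, m1[0], m1[1], cv2.INTER_LINEAR),
            cv2.remap(imgR, m2[0], m2[1], cv2.INTER_LINEAR), Q)
```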
Improved stereo matching applied to digitization of greenhouse plants
NASA Astrophysics Data System (ADS)
Zhang, Peng; Xu, Lihong; Li, Dawei; Gu, Xiaomeng
2015-03-01
The digitization of greenhouse plants is an important aspect of digital agriculture. Its ultimate aim is to reconstruct a visible and interoperable virtual plant model on the computer using state-of-the-art image processing and computer graphics technologies. The most prominent difficulties in digitizing greenhouse plants are acquiring the three-dimensional shape data of the plants and carrying out realistic stereo reconstruction. Addressing these issues, an effective method for the digitization of greenhouse plants using a binocular stereo vision system is proposed in this paper. Stereo vision is a technique for inferring depth information from two or more cameras; it consists of four parts: calibration of the cameras, stereo rectification, stereo correspondence search, and triangulation. Through the final triangulation procedure, the 3D point cloud of the plant can be obtained. The proposed stereo vision system can facilitate further segmentation of plant organs such as stems and leaves; moreover, it can provide reliable digital samples for the visualization of greenhouse tomato plants.
A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system
NASA Astrophysics Data System (ADS)
Ge, Zhuo; Zhu, Ying; Liang, Guanhao
2017-01-01
To provide 3D environment information for the quadruped robot autonomous navigation system while walking through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To deal with the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To address the problem of erroneous matching, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the stereo-matched edge pixel pairs, 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs 3D scenes quickly and efficiently.
Connectionist model-based stereo vision for telerobotics
NASA Technical Reports Server (NTRS)
Hoff, William; Mathis, Donald
1989-01-01
Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
Neural architectures for stereo vision.
Parker, Andrew J; Smith, Jackson E T; Krug, Kristine
2016-06-19
Stereoscopic vision delivers a sense of depth based on binocular information but additionally acts as a mechanism for achieving correspondence between patterns arriving at the left and right eyes. We analyse quantitatively the cortical architecture for stereoscopic vision in two areas of macaque visual cortex. For primary visual cortex V1, the result is consistent with a module that is isotropic in cortical space with a diameter of at least 3 mm in surface extent. This implies that the module for stereo is larger than the repeat distance between ocular dominance columns in V1. By contrast, in the extrastriate cortical area V5/MT, which has a specialized architecture for stereo depth, the module for representation of stereo is about 1 mm in surface extent, so the representation of stereo in V5/MT is more compressed than in V1 in terms of neural wiring of the neocortex. The surface extent estimated for stereo in V5/MT is consistent with measurements of its specialized domains for binocular disparity. Within V1, we suggest that long-range horizontal, anatomical connections form functional modules that serve both binocular and monocular pattern recognition: this common function may explain the distortion and disruption of monocular pattern vision observed in amblyopia. This article is part of the themed issue 'Vision in our three-dimensional world'.
An assembly system based on industrial robot with binocular stereo vision
NASA Astrophysics Data System (ADS)
Tang, Hong; Xiao, Nanfeng
2017-01-01
This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly locate the image regions that contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the parts, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results indicate that it has high efficiency and good applicability.
NASA Astrophysics Data System (ADS)
Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue
2018-04-01
Conventional stereo vision algorithms suffer from high hardware resource utilization due to algorithmic complexity, or from poor accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique for power line inspection using UAVs that strikes an excellent balance between cost, matching accuracy, and real-time performance. This was achieved through a special image preprocessing algorithm and a weighted local stereo matching algorithm, together with the design of a corresponding hardware architecture. Stereo vision systems based on this technique have lower resource usage and higher matching accuracy after hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on the improved algorithms was implemented on a Spartan 6 FPGA. In comparative experiments, the system using the improved algorithms outperformed a system based on the unimproved algorithms in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was able to output range-finding data in real time.
Panoramic stereo sphere vision
NASA Astrophysics Data System (ADS)
Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian
2013-01-01
Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision can "see" in all directions of the observation space, scene depth information is lost in the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module that is capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model, and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic environment mapping, and attitude estimation are some of the applications that will benefit from PSSV.
A stereo-vision hazard-detection algorithm to increase planetary lander autonomy
NASA Astrophysics Data System (ADS)
Woicke, Svenja; Mooij, Erwin
2016-05-01
For future landings on any celestial body, increasing lander autonomy and decreasing risk are primary objectives. Both can be achieved by including hazard detection and avoidance in the guidance, navigation, and control loop. One of the main challenges in hazard detection and avoidance is the reconstruction of accurate elevation models, as well as slope and roughness maps. Multiple methods for acquiring the inputs for hazard maps are available, the main distinction being between active and passive methods. Passive methods (cameras) have budgetary advantages over active sensors (radar, light detection and ranging), but it is necessary to prove that they deliver sufficiently good maps. Therefore, this paper discusses hazard detection using stereo vision. To facilitate a successful landing, no more than 1% wrong detections (hazards that are not identified) are allowed. Based on a sensitivity analysis, it was found that a stereo set-up with a baseline of ≤ 2 m is feasible at altitudes of ≤ 200 m, with false positives below 1%. It was thus shown that stereo-based hazard detection is an effective means of decreasing landing risk and increasing lander autonomy. In conclusion, the proposed algorithm is a promising candidate for future landers.
Stereo vision for ecohydraulic research: Seashell reconstruction
NASA Astrophysics Data System (ADS)
Friedrich, H.; Bertin, S.; Montgomery, J. C.; Thrush, S. F.; Delmas, P.
2016-12-01
3D information on underwater topographies can be obtained more easily nowadays. In general, though, such measurements do not provide the spatial or temporal detail needed for more specific research into dynamic processes such as sediment transport. More recently, we have seen the advance of truly interdisciplinary ecohydraulics research initiatives. One important research avenue is the interaction of organisms with flow and sediment. We have used stereo vision extensively for fluvial morphology studies in recent years, and here we present and discuss its use in ecohydraulic research. The work is undertaken in the laboratory, and we present a workflow for reconstructing seashells. We obtain shape and dimensional information, which is important for better understanding the organism's interaction with the natural water environment. Although we find that stereo vision is suitable for capturing the organisms we studied, the challenge of studying organisms in their natural environments persists. We discuss the limitations of our approach and the need to fuse technical and behavioural knowledge to better manage our ecosystems.
Three-dimensional displays and stereo vision
Westheimer, Gerald
2011-01-01
Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes. PMID:21490023
SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)
Zhang, Xiang; Chen, Zhangwei
2013-01-01
This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM, and Flash, through IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core that manages the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes, up to a maximum of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
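A plain-software analogue of winner-take-all SAD block matching (the paper pipelines this datapath on the FPGA; window and disparity range below mirror the abstract's figures but the code is only a sketch):

```python
import cv2
import numpy as np

def sad_disparity(left, right, max_disp=64, win=5):
    """SAD block matching on rectified grayscale float32 images:
    aggregate absolute differences over a win x win window for each
    candidate disparity, then take the per-pixel minimum cost."""
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf, dtype=np.float32)
    for d in range(max_disp):
        diff = np.abs(left[:, d:] - right[:, :w - d])
        # unnormalized box filter == SAD over the window
        cost[d, :, d:] = cv2.boxFilter(diff, -1, (win, win),
                                       normalize=False)
    return np.argmin(cost, axis=0).astype(np.float32)
```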
What is stereoscopic vision good for?
NASA Astrophysics Data System (ADS)
Read, Jenny C. A.
2015-03-01
Stereo vision is a resource-intensive process. Nevertheless, it has evolved in many animals including mammals, birds, amphibians and insects. It must therefore convey significant fitness benefits. It is often assumed that the main benefit is improved accuracy of depth judgments, but camouflage breaking may be as important, particularly in predatory animals. In humans, for the last 150 years, stereo vision has been turned to a new use: helping us reproduce visual reality for artistic purposes. By recreating the different views of a scene seen by the two eyes, stereo achieves unprecedented levels of realism. However, it also has some unexpected effects on viewer experience. The disruption of established mechanisms for interpreting pictures may be one reason why some viewers find stereoscopic content disturbing. Stereo vision also has uses in ophthalmology. Clinical stereoacuity tests are used in the management of conditions such as strabismus and amblyopia as well as vision screening. Stereoacuity can reveal the effectiveness of therapy and even predict long-term outcomes post surgery. Yet current clinical stereo tests fall far short of the accuracy and precision achievable in the lab. At Newcastle University, we are exploiting the recent availability of autostereo 3D tablet computers to design a clinical stereotest app in the form of a game suitable for young children. Our goal is to enable quick, accurate and precise stereoacuity measures which will enable clinicians to obtain better outcomes for children with visual disorders.
SVM based colon polyps classifier in a wireless active stereo endoscope.
Ayoub, J; Granado, B; Mhanna, Y; Romain, O
2010-01-01
This work focuses on the recognition of three-dimensional colon polyps captured by an active stereo vision sensor. The detection algorithm consists of an SVM classifier trained on robust feature descriptors. The study relates to Cyclope, a prototype sensor that allows real-time 3D object reconstruction and continues to be optimized technically to improve its classification task of differentiating between hyperplastic and adenomatous polyps. Experimental results were encouraging, showing a correct classification rate of approximately 97%. The work contains detailed statistics on the detection rate and the computational complexity. Inspired by intensity histograms, the work presents a new approach that extracts a set of features based on depth histograms and combines stereo measurements with SVM classifiers to correctly classify benign and malignant polyps.
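A rough sketch of the depth-histogram-plus-SVM idea using scikit-learn (the bin count, depth range, and kernel choice are assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def depth_histogram(depth_patch, bins=32, rng=(0.0, 50.0)):
    """Feature vector: normalised histogram of reconstructed depths,
    analogous to the intensity histograms that inspired the approach."""
    h, _ = np.histogram(depth_patch.ravel(), bins=bins, range=rng)
    return h / max(h.sum(), 1)

# X: depth-histogram features per polyp candidate,
# y: labels (e.g. hyperplastic vs adenomatous)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# clf.fit(X_train, y_train); clf.predict(X_test)
```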
Small or far away? Size and distance perception in the praying mantis
Bissianna, Geoffrey
2016-01-01
Stereo or ‘3D’ vision is an important but costly process seen in several evolutionarily distinct lineages including primates, birds and insects. Many selective advantages could have led to the evolution of stereo vision, including range finding, camouflage breaking and estimation of object size. In this paper, we investigate the possibility that stereo vision enables praying mantises to estimate the size of prey by using a combination of disparity cues and angular size cues. We used a recently developed insect 3D cinema paradigm to present mantises with virtual prey having differing disparity and angular size cues. We predicted that if they were able to use these cues to gauge the absolute size of objects, we should see evidence for size constancy where they would strike preferentially at prey of a particular physical size, across a range of simulated distances. We found that mantises struck most often when disparity cues implied a prey distance of 2.5 cm; increasing the implied distance caused a significant reduction in the number of strikes. We, however, found no evidence for size constancy. There was a significant interaction effect of the simulated distance and angular size on the number of strikes made by the mantis but this was not in the direction predicted by size constancy. This indicates that mantises do not use their stereo vision to estimate object size. We conclude that other selective advantages, not size constancy, have driven the evolution of stereo vision in the praying mantis. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269605
NASA Astrophysics Data System (ADS)
Li, Peng; Chong, Wenyan; Ma, Yongjun
2017-10-01
In order to avoid the shortcomings of low efficiency and restricted measuring range that exist in traditional 3D on-line contact measurement methods for workpiece size, the development of a novel 3D contact measurement system designed for intelligent manufacturing and based on stereo vision is introduced. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six tracking markers, a touch probe, and the associated electronics. In the process of contact measurement, the handy probe is located by means of the stereo vision system and the tracking markers, and the 3D coordinates of a space point on the workpiece are measured by calculating the tip position of the touch probe. Thanks to the flexibility of the handy probe, the orientation, range, and density of the 3D contact measurement can be adapted to different needs. Applications of the developed system to high-precision measurement and rapid surface digitization are experimentally demonstrated.
Camera calibration method of binocular stereo vision based on OpenCV
NASA Astrophysics Data System (ADS)
Zhong, Wanzhen; Dong, Xiaona
2015-10-01
Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and the calibration errors. A high-accuracy corner extraction algorithm and a checkerboard with 48 corners are used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
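The OpenCV calibration flow described here looks roughly like the following sketch (an 8×6 board gives 48 inner corners; file names and square size are placeholders):

```python
import cv2
import numpy as np

pattern, square = (8, 6), 25.0   # 48 inner corners; square size in mm
objp = np.zeros((48, 3), np.float32)
objp[:, :2] = np.mgrid[0:8, 0:6].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for fname in ["cal_01.png", "cal_02.png"]:        # placeholder files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER,
             30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Returns RMS reprojection error, intrinsics K, distortion coefficients
# (radial and tangential/decentering), and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```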
Investigating the Importance of Stereo Displays for Helicopter Landing Simulation
2016-08-11
The two instances of X-Plane® were implemented on two separate PCs, each incorporating an Intel i7 processor and an Nvidia Quadro K4200 graphics card. An Nvidia GeForce GTX 680 graphics card was used to administer the stereo acuity and fusion range tests. The tests were displayed on an Asus VG278HE 3D monitor with 1920x1080 pixels that was compatible with Nvidia 3D Vision 2 and used active shutter glasses, at a 1-m viewing distance.
Vehicle-based vision sensors for intelligent highway systems
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1989-09-01
This paper describes a vision system, based on an ASIC (Application Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle in front using trinocular triangulation. An application-specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.
NASA Technical Reports Server (NTRS)
Leberl, F. W.
1979-01-01
The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.
Three-camera stereo vision for intelligent transportation systems
NASA Astrophysics Data System (ADS)
Bergendahl, Jason; Masaki, Ichiro; Horn, Berthold K. P.
1997-02-01
A major obstacle to the application of stereo vision in intelligent transportation systems is its high computational cost. In this paper, a PC-based three-camera stereo vision system constructed with off-the-shelf components is described. The system serves as a tool for developing and testing robust algorithms that approach real-time performance. We present an edge-based, subpixel stereo algorithm adapted to permit accurate distance measurements to objects in the field of view using a compact camera assembly. Once computed, the 3D scene information may be applied directly to a number of in-vehicle applications, such as adaptive cruise control, obstacle detection, and lane tracking. Moreover, since the largest computational cost is incurred in generating the 3D scene information, multiple applications that leverage this information can be implemented in a single system with minimal additional cost. On-road applications, such as vehicle counting and incident detection, are also possible. Preliminary in-vehicle road trial results are presented.
ROS-based ground stereo vision detection: implementation and experiments.
Hu, Tianjiang; Zhao, Boxin; Tang, Dengqing; Zhang, Daibing; Kong, Weiwei; Shen, Lincheng
This article concentrates on an open-source implementation of flying object detection in cluttered scenes, which is of significance for ground stereo-aided autonomous landing of unmanned aerial vehicles. The ground stereo vision guidance system is presented with details of the system architecture and workflow. The Chan-Vese detection algorithm is then considered and implemented in the Robot Operating System (ROS) environment. A data-driven interactive scheme is developed to collect datasets for parameter tuning and performance evaluation. Outdoor flying vehicle experiments captured a dataset of sequential stereo images and recorded simultaneous data from the pan-and-tilt unit, onboard sensors, and differential GPS. Experimental results using the collected dataset validate the effectiveness of the published ROS-based detection algorithm.
Attenuating Stereo Pixel-Locking via Affine Window Adaptation
NASA Technical Reports Server (NTRS)
Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.
2006-01-01
For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking,' which produces artificially peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it can correct much larger initial disparity errors than previous approaches, and it is more general, as it applies not only to the ground plane.
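The standard parabola interpolation whose bias causes pixel-locking is compact enough to show directly (this is the baseline the paper improves upon, not its proposed method):

```python
def subpixel_disparity(c_minus, c0, c_plus, d):
    """Fit a parabola through the matching costs at disparities
    d-1, d, d+1 and return the sub-pixel disparity at its vertex."""
    denom = c_minus - 2.0 * c0 + c_plus
    if denom <= 0:          # degenerate case / not a local minimum
        return float(d)
    return d + 0.5 * (c_minus - c_plus) / denom
```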
NASA Technical Reports Server (NTRS)
Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.
2009-01-01
The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry, and unsurveyed camera calibration, and has unique support for very wide-angle lenses.
A fuzzy structural matching scheme for space robotics vision
NASA Technical Reports Server (NTRS)
Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka
1994-01-01
In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of image regions and effectively reduces the computational burden of the subsequent low-level matching process. Three-dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.
NASA Astrophysics Data System (ADS)
van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario
2017-11-01
Although machine learning holds an enormous promise for autonomous space robots, it is currently not employed because of the inherent uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
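A drastically simplified sketch of the SSL idea: hand-crafted monocular features regressed onto stereo-derived average depths used as trusted labels (the feature choice and regressor here are assumptions for illustration; the flight experiment used its own on-board learning pipeline):

```python
import numpy as np
from sklearn.linear_model import Ridge

def monocular_features(img):
    """Toy per-image features (mean intensity, gradient energy,
    intensity variance) extracted from a single grayscale frame."""
    gy, gx = np.gradient(img.astype(np.float32))
    return np.array([img.mean(), (gx**2 + gy**2).mean(), img.var()])

def train_ssl_depth(images, stereo_depths):
    """images: monocular frames; stereo_depths: trusted average depth
    per frame from the (still working) stereo system -> labels."""
    X = np.stack([monocular_features(im) for im in images])
    return Ridge(alpha=1.0).fit(X, np.asarray(stereo_depths))

# After training, model.predict(monocular_features(frame)[None, :])
# estimates average depth when one camera has failed.
```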
Prism-based single-camera system for stereo display
NASA Astrophysics Data System (ADS)
Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa
2016-06-01
This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between two-eye viewing and a dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular viewing, and obtain the positional relation of prism, camera, and object that gives the best stereo display. Finally, using NVIDIA's active shutter stereo glasses, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various ways in which the eyes observe a scene. A stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.
3D vision upgrade kit for TALON robot
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-04-01
In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. The upgrade kit comprises a replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom. The stereo mast camera allows for improved driving and situational awareness as well as scene survey, while the stereo gripper camera allows for improved manipulation in typical TALON missions.
Vision-based mapping with cooperative robots
NASA Astrophysics Data System (ADS)
Little, James J.; Jennings, Cullen; Murray, Don
1998-10-01
Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
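A minimal log-odds occupancy update of the kind such grid maps typically use (the constants are illustrative, not the values used by these robots):

```python
import numpy as np

L_OCC, L_FREE, L_MIN, L_MAX = 0.85, -0.4, -4.0, 4.0

def update_cell(logodds, hit):
    """Log-odds update for one grid cell observed by stereo ranging;
    clamping keeps the map responsive in dynamic environments."""
    logodds += L_OCC if hit else L_FREE
    return float(np.clip(logodds, L_MIN, L_MAX))

def occupancy_probability(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(logodds))
```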
Systematic construction and control of stereo nerve vision network in intelligent manufacturing
NASA Astrophysics Data System (ADS)
Liu, Hua; Wang, Helong; Guo, Chunjie; Ding, Quanxin; Zhou, Liwei
2017-10-01
A systematic method of constructing stereo vision with a neural network is proposed, together with the operation and control mechanism used in actual operation. The method makes effective use of the neural network's learning and memory capabilities after training with samples; the network can learn the nonlinear relationships in the stereoscopic vision system and among the internal and external orientation elements. Attention is given to the limiting constraints, the scientific selection of the critical group, the operating speed, and the operability in technical respects. The results support our theoretical forecast.
Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.
Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena
2014-11-01
A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system features hardware- and software-synchronous image acquisition with timestamp embedding in the captured images, feature detection oriented to the brain surface, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, the Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed by tracking a customized object in different positions, orientations, and linear and angular speeds. The system detects an immobile object's position and orientation with a maximum error of 0.5 mm and 1.6° over the whole depth of field, and tracks an object moving at up to 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domains. The amplitude of motion of the center of mass of the feature cloud was below 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis, related to the sympathovagal balance, breathing, and blood pressure, at 0.03-0.05, 0.2, and 1 Hz, respectively. The stereo vision system presented is a precise and robust system for measuring brain shift and pulsatility, with an accuracy better than that of other reported systems.
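The frequency-domain part of this analysis is a straightforward FFT of the tracked centre-of-mass displacement. A small sketch with a synthetic trace; the frame rate and component amplitudes are assumptions for illustration:

```python
import numpy as np

fs = 30.0                                      # camera frame rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
trace = (0.3 * np.sin(2 * np.pi * 0.2 * t)     # breathing component, mm
         + 0.1 * np.sin(2 * np.pi * 1.0 * t))  # blood-pressure component, mm

spec = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(trace.size, d=1 / fs)
print(sorted(freqs[np.argsort(spec)[-2:]]))    # two dominant peaks: ~0.2 and 1.0 Hz
```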
Abu Bakar, Nurul Farhana; Chen, Ai-Hong
2014-02-01
Children with learning disabilities may have difficulty communicating effectively and giving reliable responses as required in various visual function testing procedures. The purpose of this study was to compare the testability of visual acuity using the modified Early Treatment Diabetic Retinopathy Study (ETDRS) chart and Cambridge Crowding Cards, stereo acuity using the Lang Stereo test II and Butterfly stereo tests, and colour perception using the Colour Vision Test Made Easy (CVTME) and Ishihara's Test for Colour Deficiency (Ishihara Test) between children in mainstream classes and children with learning disabilities in special education classes in government primary schools. A total of 100 primary school children (50 from mainstream classes and 50 from special education classes) matched in age were recruited in this cross-sectional comparative study. Testability was determined by the percentage of children who were able to give reliable responses as required by the respective tests. 'Unable to test' was defined as an inappropriate response or uncooperativeness despite the best efforts of the screener. The testability of the modified ETDRS, Butterfly stereo test, and Ishihara test was found to be lower among children in special education classes (P < 0.001), but not that of the Cambridge Crowding Cards, Lang Stereo test II, and CVTME. Non-verbal or "matching" approaches were found to be superior for testing visual functions in children with learning disabilities. Modifications of vision testing procedures are essential for children with learning disabilities. PMID:24008790
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and the location of each point in three-dimensional space can then be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics that relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
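The ranging step referred to here is the classic parallel-axis triangulation: once a point is matched in both images, depth follows from the disparity and the camera geometry. A worked example with invented camera values:

```python
f_px = 800.0        # focal length in pixels (assumed)
baseline_m = 0.12   # camera separation in metres (assumed)

def depth_from_disparity(x_left_px, x_right_px):
    disparity = x_left_px - x_right_px     # offset of the matched points
    return f_px * baseline_m / disparity   # Z = f * B / d

print(depth_from_disparity(420.0, 380.0))  # 2.4 m for a 40-pixel offset
```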
NASA Astrophysics Data System (ADS)
Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang
2018-03-01
A blade is one of the most important components of an aircraft engine. Because of its high manufacturing cost, methods for repairing damaged blades are indispensable. In order to obtain a surface model of the blades, this paper proposes a modeling method using speckle patterns and a virtual stereo vision system. First, blades are sprayed evenly to create random speckle patterns, and point clouds of the blade surfaces are calculated from these patterns with the virtual stereo vision system. Second, boundary points are obtained with step lengths varied according to curvature and are fitted with a cubic B-spline curve to obtain a blade surface envelope. Finally, the surface model of the blade is established from the envelope curves and the point clouds. Experimental results show that the resulting surface model of aircraft engine blades is fair and accurate.
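The envelope step can be sketched with an off-the-shelf cubic B-spline fit; scipy's splprep/splev stand in for whatever fitting routine the authors used, and the boundary points below are synthetic:

```python
import numpy as np
from scipy.interpolate import splprep, splev

theta = np.linspace(0, np.pi, 40)
boundary = np.c_[np.cos(theta), 0.3 * np.sin(theta)]       # synthetic blade-edge points

tck, _ = splprep(boundary.T, k=3, s=1e-4)                  # cubic B-spline fit
envelope = np.array(splev(np.linspace(0, 1, 200), tck)).T  # smooth envelope curve
```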
Classification of road sign type using mobile stereo vision
NASA Astrophysics Data System (ADS)
McLoughlin, Simon D.; Deegan, Catherine; Fitzgerald, Conor; Markham, Charles
2005-06-01
This paper presents a portable mobile stereo vision system designed for the assessment of road signage and delineation (lines and reflective pavement markers or "cat's eyes"). This novel system allows both geometric and photometric measurements to be made on objects in a scene, with Global Positioning System technology providing location data for any measurements made. Using the system, it has been shown that road signs can be classified by the nature of their reflectivity. This is achieved by examining the changes in reflected light intensity with changes in range (facilitated by stereo vision). Signs assessed include those made from retro-reflective materials, those made from diffuse reflective materials, and those made from diffuse reflective materials with local illumination. Field-testing results demonstrate the system's ability to classify objects in the scene based on their reflective properties. The paper includes a discussion of a physical model that supports the experimental data.
Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai
2016-04-01
We present a novel, high-precision microscopic vision modeling method that can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope (SLM). The method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model, and a residual compensation model. First, a method of image distortion correction is proposed. The image data it requires come from stereo images of a calibration sample; the geometric features of the image distortions can be predicted from the shape deformation of lines constructed by grid points in the stereo images. Linear and polynomial fitting methods are applied to correct the image distortions. Second, the shape deformation features of the disparity distribution are discussed, and a method of disparity distortion correction is proposed, with a polynomial fitting method applied to correct the disparity distortion. Third, a microscopic vision model is derived, consisting of two parts: an initial vision model and a residual compensation model. The initial vision model is derived from the direct mapping relationship between object and image points; the residual compensation model is derived from the residual analysis of the initial vision model. The results show that, with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction, and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. A comparison of our model with the traditional pinhole camera model shows that the two have similar reconstruction precision for X coordinates; however, the traditional pinhole camera model has lower precision for Y and Z coordinates than our model. The proposed method is very helpful for micro-gripping systems based on SLM microscopic vision.
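The polynomial-fitting idea behind both correction steps can be shown in one dimension: the residual distortion measured at calibration grid points is fitted with a polynomial and subtracted. The data and polynomial degree below are invented for illustration:

```python
import numpy as np

x = np.linspace(-1, 1, 21)                  # normalized image coordinate
measured = x + 0.05 * x**3 + 0.01 * x**5    # distorted grid positions (synthetic)

coeff = np.polyfit(x, measured - x, deg=5)  # fit the distortion residual
corrected = measured - np.polyval(coeff, x)
assert np.allclose(corrected, x, atol=1e-6) # distortion removed
```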
Modeling the convergence accommodation of stereo vision for binocular endoscopy.
Gao, Yuanqian; Li, Jinhua; Li, Jianmin; Wang, Shuxin
2018-02-01
The stereo laparoscope is an important tool for achieving depth perception in robot-assisted minimally invasive surgery (MIS). A dynamic convergence accommodation algorithm is proposed to improve the viewing experience and achieve accurate depth perception. Based on the principle of the human vision system, a positional kinematic model of the binocular view system is established. The imaging plane pair is rectified to ensure that the two rectified virtual optical axes intersect at the fixation target to provide immersive depth perception. Stereo disparity was simulated with the roll and pitch movements of the binocular system. The chessboard test and the endoscopic peg transfer task were performed, and the results demonstrated the improved disparity distribution and the robustness of the proposed convergence accommodation method with respect to the position of the fixation target. This method offers a new solution for effective depth perception with the stereo laparoscopes used in robot-assisted MIS.
Recovering stereo vision by squashing virtual bugs in a virtual reality environment.
Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M
2016-06-19
Stereopsis is the rich impression of three-dimensionality, based on binocular disparity—the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task—a 'bug squashing' game—in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training, most participants showed greater reliance on stereoscopic cues, reduced suppression, and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue 'Vision in our three-dimensional world'. PMID:27269607
Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system
NASA Astrophysics Data System (ADS)
Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping
2015-05-01
Irregularly shaped objects with different three-dimensional (3D) appearances are difficult to shape into a customized uniform pattern with current laser machining approaches. A laser galvanometric scanning (LGS) system is a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowledge of the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of such irregularly shaped objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which combines the advantages of a stereo vision solution and a conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. To achieve precise visually servoed laser fabrication, the two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.
Detecting personnel around UGVs using stereo vision
NASA Astrophysics Data System (ADS)
Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.
2008-04-01
Detecting people around unmanned ground vehicles (UGVs) to facilitate their safe operation is one of the highest-priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high-resolution imagery may enable higher performance than range data alone, because image appearance can complement the shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo-vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.
Design of interpolation functions for subpixel-accuracy stereo-vision systems.
Haller, Istvan; Nedevschi, Sergiu
2012-02-01
Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy; in consequence, it is more important to propose methodologies for interpolation-function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as closely as possible. An extensive evaluation set is used to validate the findings, with both real and synthetic test cases employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions.
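For reference, the fixed-shape baseline these methodologies improve on is the standard three-point parabola fit around the best integer disparity; the cost values below are illustrative:

```python
def subpixel_disparity(d, c_m1, c_0, c_p1):
    # c_m1, c_0, c_p1: matching costs at disparities d-1, d, d+1.
    denom = c_m1 - 2.0 * c_0 + c_p1
    if denom == 0:
        return float(d)
    return d + 0.5 * (c_m1 - c_p1) / denom   # vertex of the fitted parabola

print(subpixel_disparity(17, 10.0, 4.0, 6.0))  # 17.25
```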
Constraint-based stereo matching
NASA Technical Reports Server (NTRS)
Kuan, D. T.
1987-01-01
The major difficulty in stereo vision is the correspondence problem that requires matching features in two stereo images. Researchers describe a constraint-based stereo matching technique using local geometric constraints among edge segments to limit the search space and to resolve matching ambiguity. Edge segments are used as image features for stereo matching. Epipolar constraint and individual edge properties are used to determine possible initial matches between edge segments in a stereo image pair. Local edge geometric attributes such as continuity, junction structure, and edge neighborhood relations are used as constraints to guide the stereo matching process. The result is a locally consistent set of edge segment correspondences between stereo images. These locally consistent matches are used to generate higher-level hypotheses on extended edge segments and junctions to form more global contexts to achieve global consistency.
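The epipolar pruning step described above amounts to keeping only candidate pairs whose right-image point lies close to the epipolar line of the left-image point. A minimal sketch, assuming a known fundamental matrix F; the rectified-pair F and the pixel tolerance are illustrative:

```python
import numpy as np

def epipolar_distance(F, x_left, x_right):
    line = F @ x_left                        # epipolar line in the right image
    return abs(line @ x_right) / np.hypot(line[0], line[1])

F = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], float)  # rectified-pair example
xl = np.array([120.0, 80.0, 1.0])            # homogeneous pixel coordinates
xr = np.array([100.0, 80.5, 1.0])
keep = epipolar_distance(F, xl, xr) < 1.0    # 1 px tolerance (assumed)
```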
Topography from shading and stereo
NASA Technical Reports Server (NTRS)
Horn, Berthold K. P.
1994-01-01
Methods exploiting photometric information in images that have been developed in machine vision can be applied to planetary imagery. Integrating shape from shading, binocular stereo, and photometric stereo yields a robust system for recovering detailed surface shape and surface reflectance information. Such a system is useful in producing quantitative information from the vast volume of imagery being received, as well as in helping visualize the underlying surface.
A novel method of robot location using RFID and stereo vision
NASA Astrophysics Data System (ADS)
Chen, Diansheng; Zhang, Guanxin; Li, Zhen
2012-04-01
This paper proposes a new global localization method for mobile robots based on RFID (Radio Frequency Identification Devices) and stereo vision, which enables a robot to obtain global coordinates with good accuracy while quickly adapting to new and unfamiliar environments. The method uses RFID tags as artificial landmarks: the 3D coordinates of each tag in the global coordinate system are written in its IC memory, and the robot reads them through an RFID reader. Meanwhile, using stereo vision, the 3D coordinates of the tag in the robot coordinate system are measured. Combined with the robot's attitude, expressed as the coordinate-system transformation matrix from the pose-measuring system, the translation from the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location in the global coordinate system. The average error of our method was 0.11 m in an experiment conducted in a 7 m × 7 m lobby, a result much more accurate than other localization methods.
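The coordinate chain the method relies on is a single rigid-transform identity: the tag's global position equals the robot's attitude matrix times its robot-frame position, plus the robot's global position. A sketch with invented numbers:

```python
import numpy as np

p_tag_global = np.array([3.0, 2.0, 1.5])  # read from the tag's IC memory
p_tag_robot = np.array([1.2, 0.4, 1.5])   # measured by stereo vision
R = np.eye(3)                             # attitude from the pose-measuring system

# p_tag_global = R @ p_tag_robot + t, so t is the robot's global position:
p_robot_global = p_tag_global - R @ p_tag_robot
print(p_robot_global)                     # [1.8, 1.6, 0.0]
```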
Optoelectronic stereoscopic device for diagnostics, treatment, and developing of binocular vision
NASA Astrophysics Data System (ADS)
Pautova, Larisa; Elkhov, Victor A.; Ovechkis, Yuri N.
2003-08-01
Operation of the device is based on the alternating generation of pictures for the left and right eyes on the monitor screen. A controller sends pulses to the LC glasses so that the shutter for the left or right eye opens synchronously with the pictures. The device provides a switching frequency of more than 100 Hz, so flicker is absent. Thus images are demonstrated separately to the left eye and the right eye in turn, without the patient being aware of it, creating conditions of binocular perception close to natural ones without any additional separation of the visual fields. Coordinating the LC-cell transfer characteristic with the timing parameters of the monitor screen has made it possible to improve stereo image quality. A complicated problem of computer stereo images with LC glasses is the so-called 'ghosts': noise images that reach the blocked eye. We reduced their influence by adapting the stereo images to the phosphor and LC-cell characteristics. The device is intended for the diagnostics and treatment of strabismus, amblyopia, and other impairments of binocular and stereoscopic vision; for cultivating, training, and developing stereoscopic vision; for measurements of horizontal and vertical phoria, fusion reserves, and stereovision acuity, among others; and for fixing central scotoma borders, as well as suppression scotoma in strabismus.
Parametric dense stereovision implementation on a system-on chip (SoC).
Gardel, Alfredo; Montejo, Pablo; García, Jorge; Bravo, Ignacio; Lázaro, José L
2012-01-01
This paper proposes a novel hardware implementation of dense recovery of stereovision 3D measurements. Traditionally, 3D stereo systems have imposed a maximum number of stereo correspondences, introducing a large restriction on artificial vision algorithms. The proposed system-on-chip (SoC) provides great performance and efficiency, with a scalable architecture available for many different situations, addressing real-time processing of the stereo image flow. Using double-buffering techniques properly combined with pipelined processing, the use of reconfigurable hardware achieves a parametrisable SoC that gives the designer the opportunity to decide its right dimension and features. The proposed architecture does not need any external memory because the processing is done as the image flow arrives; the SoC provides 3D data directly, without storing whole stereo images. Our goal is to obtain high processing speed while maintaining the accuracy of the 3D data using minimum resources. Configurable parameters may be controlled by later or parallel stages of the vision algorithm executed on an embedded processor. With an FPGA clock of 100 MHz, image flows of up to 50 frames per second (fps) of dense stereo maps with more than 30,000 depth points can be obtained for 2 Mpix images, with minimal initial latency. The implementation of computer vision algorithms on reconfigurable hardware, especially low-level processing, opens up the prospect of their use in autonomous systems, where they can act as coprocessors reconstructing 3D images with high-density information in real time.
Consequences of Incorrect Focus Cues in Stereo Displays
Banks, Martin S.; Akeley, Kurt; Hoffman, David M.; Girshick, Ahna R.
2010-01-01
Conventional stereo displays produce images in which focus cues – blur and accommodation – are inconsistent with the simulated depth. We have developed new display techniques that allow the presentation of nearly correct focus. Using these techniques, we find that stereo vision is faster and more accurate when focus cues are mostly consistent with simulated depth; furthermore, viewers experience less fatigue when focus cues are correct or nearly correct. PMID:20523910
Evaluation of Deep Learning Based Stereo Matching Methods: from Ground to Aerial Images
NASA Astrophysics Data System (ADS)
Liu, J.; Ji, S.; Zhang, C.; Qin, Z.
2018-05-01
Dense stereo matching has been extensively studied in photogrammetry and computer vision. In this paper we evaluate the application of deep-learning-based stereo methods, which emerged in 2016 and spread rapidly, to aerial stereo pairs rather than the ground images commonly used in the computer vision community. Two popular methods are evaluated: one learns the matching cost with a convolutional neural network (MC-CNN); the other produces a disparity map in an end-to-end manner by utilizing both geometry and context (GC-Net). First, we evaluate the performance of the deep-learning-based methods on aerial stereo images by direct model reuse: models pre-trained on the KITTI 2012, KITTI 2015, and Driving datasets are applied directly to three aerial datasets. We also give results for direct training on the target aerial datasets. Second, the deep-learning-based methods are compared to a classic stereo matching method, Semi-Global Matching (SGM), and a photogrammetric software package, SURE, on the same aerial datasets. Third, a transfer learning strategy is introduced for aerial image matching, based on the assumption that a few target samples are available for model fine-tuning. The experiments showed that the conventional methods and the deep-learning-based methods performed similarly, and that the latter have greater potential to be explored.
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
NASA Astrophysics Data System (ADS)
Cheong, M. K.; Bahiki, M. R.; Azrad, S.
2016-10-01
The main goal of this study is to demonstrate an approach to collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high-definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was used to project a high-contrast tracking spot for depth calculation using common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and the control algorithm was designed to adjust the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the nonlinear model with the help of an OptiTrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithms based on the image sensors. In the results, the UAV was able to hover with fairly good accuracy in both static and dynamic short-range collision avoidance. Collision avoidance performance was better for obstacles with dull surfaces than for those with shiny surfaces. The minimum achievable collision avoidance distance was 0.4 m. The approach is suitable for short-range collision avoidance.
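The per-frame pipeline described (threshold the laser colour, take blob centroids in both images, triangulate) can be sketched with standard OpenCV calls; the colour thresholds and camera constants are assumptions, not values from the study:

```python
import cv2
import numpy as np

def spot_centroid(img_bgr, lo=(0, 0, 200), hi=(80, 80, 255)):
    # Bright-red laser spot assumed; returns the blob centroid in pixels.
    mask = cv2.inRange(img_bgr, lo, hi)
    m = cv2.moments(mask, binaryImage=True)   # assumes the spot is visible
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

f_px, baseline_m = 700.0, 0.09                # HD pair, assumed calibration

def spot_depth(img_left, img_right):
    (xl, _), (xr, _) = spot_centroid(img_left), spot_centroid(img_right)
    return f_px * baseline_m / (xl - xr)      # common triangulation
```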
Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.
Rumei Zhang; Hao Liu; Jianda Han
2017-07-01
Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while an FBG shape sensor can only reconstruct the local shape, with cumulative integration error. The proposed fusion is intended to compensate for these shortcomings and improve tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphological operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into the distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67±0.65 mm, 0.41±0.25 mm, and 0.72±0.43 mm for x, y, and z, respectively. The results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.
Visualization of the 3-D topography of the optic nerve head through a passive stereo vision model
NASA Astrophysics Data System (ADS)
Ramirez, Juan M.; Mitra, Sunanda; Morales, Jose
1999-01-01
This paper describes a system for surface recovery and visualization of the 3D topography of the optic nerve head, in support of early diagnosis and follow-up of glaucoma. In stereo vision, depth information is obtained by triangulation of corresponding points in a pair of stereo images. Here, the use of the cepstrum transformation as a disparity measurement technique between corresponding windows of different block sizes is described. This measurement process is embedded within a coarse-to-fine depth-from-stereo algorithm, providing an initial range map with the depth information encoded as gray levels. These sparse depth data are then processed through a cubic B-spline interpolation technique to obtain a smoother representation. The methodology is being refined especially for use with medical images in the clinical evaluation of eye diseases such as open-angle glaucoma, and is currently under testing for clinical evaluation and analysis of reproducibility and accuracy.
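The cepstral disparity idea can be demonstrated in one dimension: concatenating the left and right windows turns the disparity into an echo, and the power cepstrum peaks at the echo delay. A toy sketch under that framing; the window size and signal are synthetic, and real use operates on 2D image windows:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d_true = 64, 7
left = rng.random(N)
right = np.roll(left, -d_true)            # right window shifted by the disparity

s = np.concatenate([left, right])         # echo appears at lag N - d_true
cepstrum = np.abs(np.fft.ifft(np.log(np.abs(np.fft.fft(s)) ** 2 + 1e-12)))

lag = np.argmax(cepstrum[1 : s.size // 2]) + 1
print(N - lag)                            # recovers d_true = 7
```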
Characterization of Stereo Vision Performance for Roving at the Lunar Poles
NASA Technical Reports Server (NTRS)
Wong, Uland; Nefian, Ara; Edwards, Larry; Furlong, Michael; Bouyssounouse, Xavier; To, Vinh; Deans, Matthew; Cannon, Howard; Fong, Terry
2016-01-01
Surface rover operations in the polar regions of airless bodies, particularly the Moon, are of special interest to future NASA science missions such as Resource Prospector (RP). Polar optical conditions present challenges to conventional imaging techniques, with repercussions for driving, safeguarding, and science. High dynamic range, long cast shadows, opposition effects, and white-out conditions are all significant factors in appearance. RP is currently undertaking an effort to characterize stereo vision performance under polar conditions through physical laboratory experimentation with regolith simulants, obstacle distributions, and oblique lighting.
Owls see in stereo much like humans do.
van der Willigen, Robert F
2011-06-10
While 3D experiences through binocular disparity sensitivity have acquired special status in the understanding of human stereo vision, much remains to be learned about how binocularity is put to use in animals. The owl provides an exceptional model to study stereo vision as it displays one of the highest degrees of binocular specialization throughout the animal kingdom. In a series of six behavioral experiments, equivalent to hallmark human psychophysical studies, I compiled an extensive body of stereo performance data from two trained owls. Computer-generated, binocular random-dot patterns were used to ensure pure stereo performance measurements. In all cases, I found that owls perform much like humans do, viz.: (1) disparity alone can evoke figure-ground segmentation; (2) selective use of "relative" rather than "absolute" disparity; (3) hyperacute sensitivity; (4) disparity processing allows for the avoidance of monocular feature detection prior to object recognition; (5) large binocular disparities are not tolerated; (6) disparity guides the perceptual organization of 2D shape. The robustness and very nature of these binocular disparity-based perceptual phenomena bear out that owls, like humans, exploit the third dimension to facilitate early figure-ground segmentation of tangible objects.
Real-time depth processing for embedded platforms
NASA Astrophysics Data System (ADS)
Rahnama, Oscar; Makarov, Aleksej; Torr, Philip
2017-05-01
Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (i.e., LiDAR, infrared): they are power efficient, cheap, robust to lighting conditions, and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications, which are often constrained by power consumption, obtaining accurate results in real time is a challenge. We demonstrate a computationally and memory-efficient implementation of a stereo block-matching algorithm on an FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution while consuming less than 3 W of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster-scan readout of modern digital image sensors.
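The computational core is conceptually the plain sum-of-absolute-differences block match below; the FPGA version pipelines the same arithmetic in-stream, one scanline at a time. The window and search-range values are typical choices, not the paper's exact parameters:

```python
import numpy as np

def disparity_sad(left, right, window=9, max_disp=64):
    left, right = left.astype(np.int32), right.astype(np.int32)
    h, w = left.shape
    r = window // 2
    disp = np.zeros((h, w), np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))   # winner-take-all per pixel
    return disp
```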
NASA Astrophysics Data System (ADS)
Wang, Huan-huan; Wang, Jian; Liu, Feng; Cao, Hai-juan; Wang, Xiang-jun
2014-12-01
A test environment is established to obtain experimental data for verifying the positioning model derived previously from the pinhole imaging model and the theory of binocular stereo vision measurement. The model requires that the optical axes of the two cameras meet at one point, which is defined as the origin of the world coordinate system, thus simplifying and optimizing the positioning model. The experimental data are processed, and tables and charts compare the positions of objects measured with the positioning model against DGPS measurements, with an accuracy of 10 centimeters, taken as the reference. Error sources of the visual measurement model are analyzed, and the effects of errors in the camera and system parameters on the accuracy of the positioning model are probed, based on error transfer and synthesis rules. It is concluded that the measurement accuracy of surface surveillance based on binocular stereo vision measurement is better than that of surface movement radars, ADS-B (Automatic Dependent Surveillance-Broadcast), and MLAT (Multilateration).
Person and gesture tracking with smart stereo cameras
NASA Astrophysics Data System (ADS)
Gordon, Gaile; Chen, Xiangrong; Buck, Ron
2008-02-01
Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting, and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap, and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to provide the low power, small footprint, and low cost required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.
Computer Vision Research and its Applications to Automated Cartography
1985-09-01
[Garbled front-matter excerpt; recoverable items: "3-D Scene Geometry" by Thomas M. Strat and Martin A. Fischler; Appendix D, "A New Sense for Depth of Field" by Alex P. Pentland; sections on a baseline stereo system as a framework for integrating research in modeling 3-D scene geometry, and on new methods for stereo compilation.]
Stereo Orthogonal Axonometric Perspective for the Teaching of Descriptive Geometry
ERIC Educational Resources Information Center
Méxas, José Geraldo Franco; Guedes, Karla Bastos; Tavares, Ronaldo da Silva
2015-01-01
Purpose: The purpose of this paper is to present the development of a software for stereo visualization of geometric solids, applied to the teaching/learning of Descriptive Geometry. Design/methodology/approach: The paper presents the traditional method commonly used in computer graphic stereoscopic vision (implemented in C language) and the…
NASA Astrophysics Data System (ADS)
Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua
2014-11-01
Stereo vision is key to visual measurement, robot vision, and autonomous navigation. Before a stereo vision system is used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after calibration, but the positional relationship between the cameras can change because of vibration, knocks, and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes disabled. A technology providing both real-time examination and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data show that the method improves the robustness and accuracy of the fundamental matrix estimation. Finally, an experiment computing the relationship of a pair of stereo cameras demonstrates the accuracy of the algorithm.
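Steps (ii) and (iii) of the process above map directly onto standard OpenCV calls: the essential matrix follows from the fundamental matrix and the known intrinsics, and its decomposition yields the inter-camera rotation and (scale-free) translation. A sketch, with F, K, and the matched points as assumed inputs:

```python
import cv2
import numpy as np

def external_params(F, K, pts_left, pts_right):
    # pts_left / pts_right: Nx2 float arrays of matched points (N >= 5).
    E = K.T @ F @ K                      # essential from fundamental: E = K' F K
    # recoverPose decomposes E and uses the matches to pick the valid (R, t);
    # t is known only up to scale from image correspondences alone.
    _, R, t, _ = cv2.recoverPose(E, pts_left, pts_right, K)
    return R, t
```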
GPU-based real-time trinocular stereo vision
NASA Astrophysics Data System (ADS)
Yao, Yuanbin; Linton, R. J.; Padir, Taskin
2013-01-01
Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which can benefit applications like distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse the disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and NVIDIA's CUDA. The results are compared in accuracy and speed to verify the improvement.
Stereo imaging with spaceborne radars
NASA Technical Reports Server (NTRS)
Leberl, F.; Kobrick, M.
1983-01-01
Stereo viewing is a valuable tool in photointerpretation and is used for the quantitative reconstruction of the three-dimensional shape of a topographical surface. Stereo viewing refers to a visual perception of space obtained by presenting an overlapping image pair to an observer so that a three-dimensional model is formed in the brain. Some of the observer's function can be performed by machine correlation of the overlapping images, so-called automated stereo correlation. The direct perception of space with two eyes is often called natural binocular vision; techniques for generating three-dimensional models of the surface from two sets of monocular image measurements are the topic of stereology.
VLSI chips for vision-based vehicle guidance
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1994-02-01
Sensor-based vehicle guidance systems are gathering rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.
2015-08-21
[Garbled report excerpt] ... using the Open Computer Vision (OpenCV) libraries [6] for computer vision and the Qt library [7] for the user interface. The software has the ... depth. The software application calibrates the cameras using the plane-based calibration model from the OpenCV calib3d module and allows the ... References: [6] OpenCV. 2015. OpenCV Open Source Computer Vision. [Online]. Available at: opencv.org [Accessed: 09/01/2015]. [7] Qt. 2015. Qt Project home
The robot's eyes - Stereo vision system for automated scene analysis
NASA Technical Reports Server (NTRS)
Williams, D. S.
1977-01-01
Attention is given to the robot stereo vision system, which maintains the images produced by solid-state-detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge-injection technique), a video-rate analog-to-digital converter, the RAPID memory, various computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.
Performance Evaluation and Software Design for EVA Robotic Assistant Stereo Vision Heads
NASA Technical Reports Server (NTRS)
DiPaolo, Daniel
2003-01-01
The purpose of this project was to aid the EVA Robotic Assistant project by evaluating two stereo vision heads (the TracLabs Biclops pan-tilt-verge head and the Helpmate Zebra pan-tilt-verge head) and designing the necessary interfaces for them. The first half of the project consisted of designing the software interface so that the other modules of the EVA Robotic Assistant had proper access to all of the functionality offered by each of the stereo vision heads. This half took most of the project time, due to a lack of ready-made CORBA drivers for either of the heads. Once this was overcome, the evaluation stage of the project began. The second half of the project was to take these interfaces and evaluate each of the stereo vision heads in terms of usefulness to the project. In the key project areas of stability and reliability, the Zebra pan-tilt-verge head came out on top. However, the Biclops did have many advantages over the Zebra, such as lower power consumption, faster communications, and a simpler, cleaner API. Overall, the Biclops pan-tilt-verge head outperformed the Zebra pan-tilt-verge head.
Evaluation of the stereo optical OPTEC(R)5000 for aeromedical color vision screening.
DOT National Transportation Integrated Search
2013-08-01
Screening tests are valued for their ability to detect the presence (test sensitivity) and the absence (test specificity) of a disease or a specific condition such as color vision deficiencies (CVDs). From an aviation safety standpoint, it is importan...
Possibilities and limitations of current stereo-endoscopy.
Mueller-Richter, U D A; Limberger, A; Weber, P; Ruprecht, K W; Spitzer, W; Schilling, M
2004-06-01
Stereo-endoscopy has become a commonly used technology, yet in many comparative studies striking advantages of stereo-endoscopy over two-dimensional presentation could not be proven. The aim of this article is to show the potential of this technology and the fields in which it can be further improved. The physiological basis of three-dimensional vision and the limitations of current stereo-endoscopes are discussed, fields for further research are indicated, and new developments in spatial picture acquisition and spatial picture presentation are reviewed. The current limitations of stereo-endoscopy that prevent a better ranking in comparative studies against two-dimensional presentation are mainly rooted in insufficient picture acquisition: devices for three-dimensional picture presentation are at a more advanced developmental stage than devices for three-dimensional picture acquisition. Further research should therefore emphasize the development of new devices for three-dimensional picture acquisition.
Stereoscopy and the Human Visual System
Banks, Martin S.; Read, Jenny C. A.; Allison, Robert S.; Watt, Simon J.
2012-01-01
Stereoscopic displays have become important for many applications, including operation of remote devices, medical imaging, surgery, scientific visualization, and computer-assisted design. But the most significant and exciting development is the incorporation of stereo technology into entertainment: specifically, cinema, television, and video games. In these applications for stereo, three-dimensional (3D) imagery should create a faithful impression of the 3D structure of the scene being portrayed. In addition, the viewer should be comfortable and not leave the experience with eye fatigue or a headache. Finally, the presentation of the stereo images should not create temporal artifacts like flicker or motion judder. This paper reviews current research on stereo human vision and how it informs us about how best to create and present stereo 3D imagery. The paper is divided into four parts: (1) getting the geometry right, (2) depth cue interactions in stereo 3D media, (3) focusing and fixating on stereo images, and (4) how temporal presentation protocols affect flicker, motion artifacts, and depth distortion. PMID:23144596
Analysis of Performance of Stereoscopic-Vision Software
NASA Technical Reports Server (NTRS)
Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert
2007-01-01
A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error, but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to the corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
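To see what a 0.32-pixel disparity error means in range terms, differentiate Z = f*B/d to get dZ = Z^2 * dd / (f * B). A worked example; the focal length and baseline are assumptions, not values from the report:

```python
f_px = 1000.0        # focal length in pixels (assumed)
baseline_m = 0.30    # stereo baseline in metres (assumed)
sigma_d = 0.32       # disparity error in pixels (from the analysis above)

for Z in (2.0, 5.0, 10.0):                        # ranges in metres
    sigma_Z = Z**2 * sigma_d / (f_px * baseline_m)
    print(f"Z = {Z:4.1f} m -> down-range sigma = {sigma_Z:.3f} m")
```

The quadratic growth with range is why down-range error is treated separately from cross-range error.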
Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa
2016-08-08
We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors through the co-design of an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field: the on-chip beam-splitter horizontally divides rays according to incident angle, and the inner meta-micro-lens collects the divided rays into pixels with small optical loss. The cross-talk between adjacent light-field images is as low as 6% for a fabricated binocular image sensor and 7% for a quad-ocular image sensor. By selecting two images from the one-dimensional light-field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light-field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision as a way to reduce the 3D fatigue of viewers.
Navigation of military and space unmanned ground vehicles in unstructured terrains
NASA Technical Reports Server (NTRS)
Lescoe, Paul; Lavery, David; Bedard, Roger
1991-01-01
Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer-assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by the Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements, and the resulting path plan was transmitted to the vehicle, which executed it autonomously. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six-wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of SAN software onto the vehicle, and a successful feasibility demonstration that represents a step towards the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.
NASA Astrophysics Data System (ADS)
Lee, Hyunki; Kim, Min Young; Moon, Jeon Il
2017-12-01
Phase measuring profilometry and moiré methodology have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation known as the correspondence problem, or 2π-ambiguity problem. Although a sensing method combining well-known stereo vision and the phase measuring profilometry (PMP) technique has been developed to overcome this problem, it still requires definite improvement in sensing speed and measurement accuracy. We propose a dynamic-programming-based stereo PMP method to acquire more reliable depth information in a relatively short time. The proposed method efficiently fuses the phase and intensity information from the two stereo sensors simultaneously, based on a newly defined cost function for dynamic programming. In addition, the important parameters are analyzed with respect to the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters on measurement performance, and to verify the method's efficiency, accuracy, and sensing speed, a series of experimental tests was performed with various objects and sensor configurations.
Nanosatellite Maneuver Planning for Point Cloud Generation With a Rangefinder
2015-06-05
aided active vision systems [11], dense stereo [12], and TriDAR [13]. However, these systems are unsuitable for a nanosatellite system from power, size... command profiles as well as improving the fidelity of gap detection with better filtering methods for background objects. For example, attitude... application of a single beam laser rangefinder (LRF) to point cloud generation, shape detection, and shape reconstruction for a space-based space...
A mixed reality approach for stereo-tomographic quantification of lung nodules.
Chen, Mianyi; Kalra, Mannudeep K; Yun, Wenbing; Cong, Wenxiang; Yang, Qingsong; Nguyen, Terry; Wei, Biao; Wang, Ge
2016-05-25
To reduce the radiation dose and the equipment cost associated with lung CT screening, in this paper we propose a mixed-reality-based nodule measurement method with an active shutter stereo imaging system. Without involving hundreds of projection views and subsequent image reconstruction, we generated two projections of an iteratively placed ellipsoidal volume in the field of view and merged these synthetic projections with two original CT projections. We then demonstrated the feasibility of measuring the position and size of a nodule by observing, through active shutter 3D vision glasses, whether the projections of the ellipsoidal volume and the nodule overlap in a human observer's visual perception. The average errors of the measured nodule parameters are less than 1 mm in the simulated experiment with 8 viewers. Hence, the method could measure real nodules accurately in experiments with physically measured projections.
NASA Astrophysics Data System (ADS)
Hofmann, Ulrich; Siedersberger, Karl-Heinz
2003-09-01
Driving cross-country, detection and state estimation of negative obstacles like ditches and creeks is mandatory for safe operation. Very often, ditches can be detected both by different photometric properties (soil vs. vegetation) and by range (disparity) discontinuities. Therefore, algorithms should make use of both photometric and geometric properties to reliably detect obstacles. This has been achieved in UBM's EMS-Vision system (Expectation-based, Multifocal, Saccadic) for autonomous vehicles. The perception system uses Sarnoff's image processing hardware for real-time stereo vision. This sensor provides both gray value and disparity information for each pixel at high resolution and frame rate. In order to perform an autonomous jink, the boundaries of an obstacle have to be measured accurately to calculate a safe driving trajectory. Ditches in particular are often very extended, so due to the restricted field of view of the cameras, active gaze control is necessary to explore the boundaries of an obstacle. For successful measurement of image features, the system has to satisfy conditions defined by the perception expert. It has to deal with the time constraints of the active camera platform while performing saccades, and to keep the geometric conditions defined by the locomotion expert for performing a jink. Therefore, the experts have to cooperate. This cooperation is controlled by a central decision unit (CD), which has knowledge about the mission, the capabilities available in the system, and their limitations. The central decision unit reacts depending on the result of situation assessment by starting, parameterizing, or stopping actions (instances of capabilities). The approach has been tested with the 5-ton van VaMoRs. Experimental results are shown for driving in a typical off-road scenario.
On the use of orientation filters for 3D reconstruction in event-driven stereo vision
Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe
2014-01-01
The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, thereby increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
Wide baseline stereo matching based on double topological relationship consistency
NASA Astrophysics Data System (ADS)
Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang
2009-07-01
Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo vision matching. A novel scheme is presented, called double topological relationship consistency (DCTR). The double topological configuration combines the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only establishes a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, and it overcomes many problems of traditional methods thanks to its strong invariance to changes in scale, rotation, or illumination across large view changes and even occlusions. Experimental examples are shown in which the two cameras are located in very different orientations. The epipolar geometry is recovered using RANSAC, by far the most widely adopted method. With this approach, we obtain correspondences with high precision on wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on image pairs.
Dynamic Trajectory Extraction from Stereo Vision Using Fuzzy Clustering
NASA Astrophysics Data System (ADS)
Onishi, Masaki; Yoda, Ikushi
In recent years, many human tracking methods have been proposed for analyzing human dynamic trajectories. These are general technologies applicable to various fields, such as customer purchase analysis in a shopping environment and safety control at a (railroad) crossing. In this paper, we present a new approach for tracking human positions from stereo images. We use a two-step clustering framework with the k-means method and fuzzy clustering to detect human regions. In the initial clustering, the k-means method quickly forms intermediate clusters from object features extracted by stereo vision. In the final clustering, the fuzzy c-means method groups the intermediate clusters into human regions based on their attributes. By expressing ambiguity through fuzzy clustering, the proposed method clusters correctly even when many people are close to each other. The validity of our technique was evaluated in an experiment extracting the trajectories of doctors and nurses in a hospital emergency room.
Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M.
2010-01-01
This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance. PMID:22319323
Ubiquitous Stereo Vision for Controlling Safety on Platforms in Railroad Station
NASA Astrophysics Data System (ADS)
Yoda, Ikushi; Hosotani, Daisuke; Sakaue, Katushiko
Dozens of people are killed every year when they fall off train platforms, making this an urgent issue to be addressed by the railroads, especially in the major cities. This concern prompted the present work, now in progress, to develop a Ubiquitous Stereo Vision based system for safety management at the edge of rail station platforms. In this approach, a series of stereo cameras are installed in a row on the ceiling, pointed downward at the edge of the platform to monitor the disposition of people waiting for the train. The purpose of the system is to determine automatically and in real time whether anyone or anything is in the danger zone at the very edge of the platform, whether anyone has actually fallen off the platform, or whether there is any sign of these things happening. The system could be configured to automatically switch over to a surveillance monitor or automatically connect to an emergency brake system in the event of trouble.
NASA Astrophysics Data System (ADS)
Hosotani, Daisuke; Yoda, Ikushi; Hishiyama, Yoshiyuki; Sakaue, Katsuhiko
Many people are involved in accidents every year at railroad crossings, but there is no suitable sensor for detecting pedestrians. We are therefore developing a ubiquitous stereo vision based system for ensuring safety at railroad crossings. In this system, stereo cameras are installed at the corners and pointed toward the center of the railroad crossing to monitor the passage of people. The system determines automatically and in real time whether anyone or anything is inside the railroad crossing, and whether anyone remains in the crossing. The system can be configured to automatically switch over to a surveillance monitor or automatically connect to an emergency brake system in the event of trouble. We have developed an original stereo vision device and installed a remotely controlled experimental system, running the human detection algorithm, at a commercial railroad crossing. We have then stored and analyzed image and tracking data over two years toward standardizing the system requirement specification.
Stereo depth distortions in teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Vonsydow, Marika
1988-01-01
In teleoperation, a typical application of stereo vision is to view a work space located short distances (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors are measured on the order of 2 cm. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing. The results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration which gave high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions, but cause greater depth distortions. Thus with larger intercamera distances, operators will make greater depth errors (because of the greater distortions), but will be more certain that they are not errors (because of the higher resolution).
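The resolution side of this trade-off follows from the standard stereo ranging relation Z = f·b/d: a disparity quantization of Δd pixels maps to a depth uncertainty of roughly ΔZ = Z²·Δd/(f·b), so doubling the intercamera distance b halves the depth uncertainty at a given depth. A small sketch under the parallel-axis approximation (the paper itself analyzes converged cameras, so take this as indicative only):

```python
def depth_resolution(z_m, focal_px, baseline_m, delta_d_px=1.0):
    """Approximate depth uncertainty dZ = Z^2 * dd / (f * b) for a
    parallel-axis stereo rig (an approximation, not the paper's
    converged-camera geometry)."""
    return (z_m ** 2) * delta_d_px / (focal_px * baseline_m)

# Depth uncertainty at the 1.4 m working plane for three baselines,
# with a hypothetical 800 px focal length.
for b in (0.1, 0.2, 0.4):
    print(f"baseline {b} m -> dZ = {depth_resolution(1.4, 800.0, b):.4f} m")
```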
Analysis and design of stereoscopic display in stereo television endoscope system
NASA Astrophysics Data System (ADS)
Feng, Dawei
2008-12-01
Many 3D displays have been proposed for medical use. When designing and evaluating a new system, surgeons make three demands: the first priority is precision; second, displayed images should be easy to understand; and third, because surgery lasts for hours, the display should not cause fatigue. The stereo television endoscope studied in this paper images the celiac viscera on the photosensitive surfaces of the left and right CCDs, imitating the human binocular stereo vision effect by means of a dual optical path system. The left and right video signals are processed by frequency multiplication and displayed on the monitor, and the viewer can observe a stereo image with depth impression by using a polarized LCD screen and a pair of polarized glasses. Clinical experiments show that the stereo TV endoscope makes minimally invasive surgery safer and more reliable, shortens operation time, and improves operation accuracy.
Application of Stereo Vision to the Reconnection Scaling Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klarenbeek, Johnny; Sears, Jason A.; Gao, Kevin W.
The measurement and simulation of the three-dimensional structure of magnetic reconnection in astrophysical and lab plasmas is a challenging problem. At Los Alamos National Laboratory we use the Reconnection Scaling Experiment (RSX) to model 3D magnetohydrodynamic (MHD) relaxation of plasma filled tubes. These magnetic flux tubes are called flux ropes. In RSX, the 3D structure of the flux ropes is explored with insertable probes. Stereo triangulation can be used to compute the 3D position of a probe from point correspondences in images from two calibrated cameras. While common applications of stereo triangulation include 3D scene reconstruction and robotics navigation, we will investigate the novel application of stereo triangulation in plasma physics to aid reconstruction of 3D data for RSX plasmas. Several challenges will be explored and addressed, such as minimizing 3D reconstruction errors in stereo camera systems and dealing with point correspondence problems.
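Stereo triangulation of a probe position from point correspondences in two calibrated cameras, as described above, can be sketched with OpenCV; the projection matrices and pixel coordinates below are hypothetical placeholders, not RSX calibration data:

```python
import numpy as np
import cv2

# Hypothetical calibrated rig: P1, P2 are 3x4 projection matrices from
# camera calibration; pts1/pts2 are matched pixel coordinates of the
# probe tip in the left and right views.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # left camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])   # right, 0.2 m baseline

pts1 = np.array([[350.0], [260.0]])   # 2xN pixel coordinates, left image
pts2 = np.array([[310.0], [260.0]])   # 2xN pixel coordinates, right image

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous points
X = (X_h[:3] / X_h[3]).ravel()                    # Euclidean probe position
print("probe position (m):", X)
```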
Real-time tracking using stereo and motion: Visual perception for space robotics
NASA Technical Reports Server (NTRS)
Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann
1994-01-01
The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.
Acquisition of stereo panoramas for display in VR environments
NASA Astrophysics Data System (ADS)
Ainsworth, Richard A.; Sandin, Daniel J.; Schulze, Jurgen P.; Prudhomme, Andrew; DeFanti, Thomas A.; Srinivasan, Madhusudhanan
2011-03-01
Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.
Stereo and IMU-Assisted Visual Odometry for Small Robots
NASA Technical Reports Server (NTRS)
2012-01-01
This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320×240) resolution, or 8 fps at VGA (Video Graphics Array, 640×480) resolution, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
3D Observations techniques for the solar corona
NASA Astrophysics Data System (ADS)
Portier-Fozzani, F.; Papadopoulo, T.; Fermin, I.; Bijaoui, A.; Stereo/Secchi 3D Team; et al.
In this talk, we will present a review of the different 3D techniques for observations of the solar corona made by EUV imagers (such as SOHO/EIT and STEREO/SECCHI) and by coronagraphs (SOHO/LASCO and STEREO/SECCHI). Tomographic reconstructions need magnetic extrapolation to constrain the model (classical triangular mesh reconstruction, or the more evolved pixon method). For 3D reconstruction, the other approach is stereovision. Stereoscopic techniques must be built in a specific way to take into account the optically thin medium of the solar corona, which makes most classical stereo methods not directly applicable. To improve such methods, we need to consider how an image is described in computer vision: an image is not only a set of intensities; its description/representation in terms of sub-objects is needed for structure extraction and matching. We will describe optical flow methods to follow the structures, and decomposition into sub-areas depending on the solar cycle. After recalling results obtained with geometric loop reconstructions and their consequences for twist measurement and helicity evaluation, we will describe how we can mix pixel-level and conceptual reconstruction for stereovision. We could then include epipolar geometry and the Multiscale Vision Model (MVM) to enhance the reconstruction. These concepts are under development for STEREO/SECCHI.
2018-01-01
Advanced driver assistance systems (ADAS) have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set up two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor for advanced motorcycle safety applications. PMID:29351267
Stereo vision techniques for telescience
NASA Astrophysics Data System (ADS)
Hewett, S.
1990-02-01
The Botanic Experiment is one of the pilot experiments in the Telescience Test Bed program at the ESTEC research and technology center of the European Space Agency. The aim of the Telescience Test Bed is to develop the techniques required by an experimenter using a ground based work station for remote control, monitoring, and modification of an experiment operating on a space platform. The purpose of the Botanic Experiment is to examine the growth of seedlings under various illumination conditions with a video camera from a number of viewpoints throughout the duration of the experiment. This paper describes the Botanic Experiment and the points addressed in developing a stereo vision software package to extract quantitative information about the seedlings from the recorded video images.
2013-08-01
...color vision (NCV), specificity is very important. The FAA has a color vision standard for airmen and air traffic controllers because of the...aeromedical use, and the FAA found that instrument failed 50% of those with NCV. The manufacturer made some modifications and requested a re-evaluation. The...
Integrity Determination for Image Rendering Vision Navigation
2016-03-01
identifying an object within a scene, tracking a SIFT feature between frames, or matching images and/or features for stereo vision applications. This... object level, either in 2-D or 3-D, versus individual features. There is a breadth of information, largely from the machine vision community...matching or image rendering image correspondence approach is based upon using either 2-D or 3-D object models or templates to perform object detection or...
Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image
NASA Astrophysics Data System (ADS)
Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren
2012-01-01
The pose (position and attitude) and velocity of an in-flight projectile have a major influence on its performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses only one linear array image, collected by a stereo vision system that combines a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired using a model-based optimization algorithm. The algorithm achieves optimal estimation of the parameters by matching the stereo projection of the projectile with that of a same-size 3D model. The speed and the angle of attack (AOA) can also be determined subsequently. Experiments are made to test the proposed method.
NASA Astrophysics Data System (ADS)
Tyczka, Dale R.; Wright, Robert; Janiszewski, Brian; Chatten, Martha Jane; Bowen, Thomas A.; Skibba, Brian
2012-06-01
Nearly all explosive ordnance disposal robots in use today employ monoscopic standard-definition video cameras to relay live imagery from the robot to the operator. With this approach, operators must rely on shadows and other monoscopic depth cues in order to judge distances and object depths. Alternatively, they can contact an object with the robot's manipulator to determine its position, but that approach carries with it the risk of detonation from unintentionally disturbing the target or nearby objects. We recently completed a study in which high-definition (HD) and stereoscopic video cameras were used in addition to conventional standard-definition (SD) cameras in order to determine if higher resolutions and/or stereoscopic depth cues improve operators' overall performance of various unmanned ground vehicle (UGV) tasks. We also studied the effect that the different vision modes had on operator comfort. A total of six different head-aimed vision modes were used including normal-separation HD stereo, SD stereo, "micro" (reduced separation) SD stereo, HD mono, and SD mono (two types). In general, the study results support the expectation that higher resolution and stereoscopic vision aid UGV teleoperation, but the degree of improvement was found to depend on the specific task being performed; certain tasks derived notably more benefit from improved depth perception than others. This effort was sponsored by the Joint Ground Robotics Enterprise under Robotics Technology Consortium Agreement #69-200902 T01. Technical management was provided by the U.S. Air Force Research Laboratory's Robotics Research and Development Group at Tyndall AFB, Florida.
The research of binocular vision ranging system based on LabVIEW
NASA Astrophysics Data System (ADS)
Li, Shikuan; Yang, Xu
2017-10-01
Based on the principle of binocular parallax ranging, a binocular vision ranging system is designed and built. The stereo matching algorithm is realized with LabVIEW software, and camera calibration and distance measurement are completed. The error analysis shows that the system is fast and effective and can be used in corresponding industrial settings.
3D display for enhanced tele-operation and other applications
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Pezzaniti, J. Larry; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Bodenhamer, Andrew; Pettijohn, Bradley; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott
2010-04-01
In this paper, we report on the use of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The upgrade kit comprises a replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
NASA Technical Reports Server (NTRS)
Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.
1981-01-01
The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.
A stereo vision-based obstacle detection system in vehicles
NASA Astrophysics Data System (ADS)
Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun
2008-02-01
Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in a passenger car requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, epipolar constraints, and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle, and a vehicle cutting into the lane; the position parameters of the obstacles and leading vehicles can then be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
Stereoacuity of preschool children with and without vision disorders.
Ciner, Elise B; Ying, Gui-Shuang; Kulp, Marjean Taylor; Maguire, Maureen G; Quinn, Graham E; Orel-Bixler, Deborah; Cyert, Lynn A; Moore, Bruce; Huang, Jiayan
2014-03-01
To evaluate associations between stereoacuity and presence, type, and severity of vision disorders in Head Start preschool children and determine testability and levels of stereoacuity by age in children without vision disorders. Stereoacuity of children aged 3 to 5 years (n = 2898) participating in the Vision in Preschoolers (VIP) Study was evaluated using the Stereo Smile II test during a comprehensive vision examination. This test uses a two-alternative forced-choice paradigm with four stereoacuity levels (480 to 60 seconds of arc). Children were classified by the presence (n = 871) or absence (n = 2027) of VIP Study-targeted vision disorders (amblyopia, strabismus, significant refractive error, or unexplained reduced visual acuity), including type and severity. Median stereoacuity between groups and among severity levels of vision disorders was compared using Wilcoxon rank sum and Kruskal-Wallis tests. Testability and stereoacuity levels were determined for children without VIP Study-targeted disorders overall and by age. Children with VIP Study-targeted vision disorders had significantly worse median stereoacuity than that of children without vision disorders (120 vs. 60 seconds of arc, p < 0.001). Children with the most severe vision disorders had worse stereoacuity than that of children with milder disorders (median 480 vs. 120 seconds of arc, p < 0.001). Among children without vision disorders, testability was 99.6% overall, increasing with age to 100% for 5-year-olds (p = 0.002). Most of the children without vision disorders (88%) had stereoacuity at the two best disparities (60 or 120 seconds of arc); the percentage increasing with age (82% for 3-, 89% for 4-, and 92% for 5-year-olds; p < 0.001). The presence of any VIP Study-targeted vision disorder was associated with significantly worse stereoacuity in preschool children. Severe vision disorders were more likely associated with poorer stereopsis than milder or no vision disorders. Testability was excellent at all ages. These results support the validity of the Stereo Smile II for assessing random-dot stereoacuity in preschool children.
Hong, Deokhwa; Lee, Hyunki; Kim, Min Young; Cho, Hyungsuck; Moon, Jeon Il
2009-07-20
Automatic optical inspection (AOI) for printed circuit board (PCB) assembly plays a very important role in modern electronics manufacturing industries. Well-developed inspection machines in each assembly process are required to ensure the manufacturing quality of electronics products. Generally, however, almost all AOI machines are based on 2D image-analysis technology. In this paper, a 3D-measurement-based AOI system consisting of a phase shifting profilometer and a stereo vision system is proposed for assembled electronic components on a PCB after component mounting and the reflow process. In this system, information from the two visual systems is fused to extend the shape measurement range limited by the 2π phase ambiguity of the phase shifting profilometer, and thus to maintain the profilometer's fine measurement resolution and high accuracy over the measurement range extended by the stereo vision. The main purpose is to overcome the low inspection reliability of 2D-based inspection machines by using 3D information about the components. The 3D shape measurement results on PCB-mounted electronic components are shown and compared with results from contact and noncontact 3D measuring machines. Based on a series of experiments, the usefulness of the proposed sensor system and its fusion technique are discussed and analyzed in detail.
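For context, a common way to obtain the wrapped phase in phase shifting profilometry is the four-step formula φ = atan2(I3 − I1, I0 − I2); the 2π ambiguity that the stereo vision resolves here is exactly the unknown integer number of fringe periods left after this step. A generic numpy sketch, not the paper's implementation:

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase shifting: fringe images with phase shifts of
    0, pi/2, pi, 3*pi/2 give the phase wrapped into (-pi, pi]."""
    i0, i1, i2, i3 = (np.asarray(i, dtype=np.float64) for i in (i0, i1, i2, i3))
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic sloped fringe field: recovered phase equals phi_true modulo 2*pi.
phi_true = np.tile(np.linspace(0, 4 * np.pi, 640), (480, 1))
imgs = [128 + 100 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_wrapped = wrapped_phase(*imgs)
```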
A comparison of semiglobal and local dense matching algorithms for surface reconstruction
NASA Astrophysics Data System (ADS)
Dall'Asta, E.; Roncella, R.
2014-06-01
Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper focuses on the comparison of stereo matching algorithms (local and global) that are very popular in both photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM) algorithm, which performs pixel-wise matching and relies on the application of consistency constraints during matching cost aggregation, is discussed. The results of tests performed on real and simulated stereo image datasets are presented, evaluating in particular the accuracy of the obtained digital surface models. Several algorithms and different implementations are considered in the comparison, using freeware codes such as MICMAC and OpenCV, commercial software (e.g., Agisoft PhotoScan), and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparison also considers the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
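Of the implementations compared above, the OpenCV one is the simplest to reproduce. A minimal Semi-Global Matching run might look like the following; the parameter values and file names are illustrative choices, not the test configuration used in the paper:

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # search range; must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,         # smoothness penalty for +/-1 disparity changes
    P2=32 * 5 * 5,        # larger penalty for bigger disparity jumps
    uniquenessRatio=10,
)
# compute() returns fixed-point disparities scaled by 16.
disparity = sgbm.compute(left, right).astype("float32") / 16.0
```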
Study on portable optical 3D coordinate measuring system
NASA Astrophysics Data System (ADS)
Ren, Tongqun; Zhu, Jigui; Guo, Yinbiao
2009-05-01
A portable optical 3D coordinate measuring system based on digital close range photogrammetry (CRP) technology and binocular stereo vision theory is researched. Three highly stable infrared LEDs are mounted on a hand-held target to provide measurement features and establish the target coordinate system. Ray-intersection-based in-field directional calibration of the convergent binocular measurement system, composed of two cameras, is performed with a reference ruler. The hand-held target, controlled via Bluetooth wireless communication, is moved freely to carry out contact measurement. The position of the ceramic contact ball is pre-calibrated accurately. The coordinates of the target feature points are obtained with the binocular stereo vision model from the stereo image pairs taken by the cameras. Combining radius compensation for the contact ball with residual error correction, an object point can be resolved by transfer of axes using the target coordinate system as an intermediary. This system is suitable for on-field large-scale measurement because of its excellent portability, high precision, wide measuring volume, great adaptability, and satisfactory automation. Tests show that the measuring precision approaches ±0.1 mm/m.
3D vision upgrade kit for the TALON robot system
NASA Astrophysics Data System (ADS)
Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott
2010-02-01
In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo-cameras and a flat panel display, using only standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks quicker but, for six of seven scenarios, made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.
Obstacle Detection using Binocular Stereo Vision in Trajectory Planning for Quadcopter Navigation
NASA Astrophysics Data System (ADS)
Bugayong, Albert; Ramos, Manuel, Jr.
2018-02-01
Quadcopters are one of the most versatile unmanned aerial vehicles due to their vertical take-off and landing as well as hovering capabilities. This research uses the Sum of Absolute Differences (SAD) block matching algorithm for stereo vision. A complementary filter was used in sensor fusion to combine quadcopter orientation data obtained from the accelerometer and the gyroscope. PID control was implemented for the motor control, and the VFH+ algorithm was implemented for trajectory planning. Results show that the quadcopter was able to consistently actuate itself in the roll, yaw, and z axes during obstacle avoidance, but was found to be inconsistent in the pitch axis during forward and backward maneuvers due to the significant noise present in the pitch angle outputs compared to the roll and yaw axes.
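For reference, the SAD block matching step named above can be written as a brute-force scanline search; this unoptimized numpy sketch shows the idea only (real-time use on a quadcopter would need an optimized or hardware implementation):

```python
import numpy as np

def sad_disparity(left, right, block=7, max_disp=48):
    """Brute-force SAD block matching on rectified grayscale images.

    For each block in the left image, slide along the same scanline in
    the right image and keep the horizontal shift with the smallest sum
    of absolute differences. O(H*W*max_disp); clarity over speed.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    left = left.astype(np.int32)
    right = right.astype(np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)   # shift with minimum SAD cost
    return disp
```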
A comparison of static near stereo acuity in youth baseball/softball players and non-ball players.
Boden, Lauren M; Rosengren, Kenneth J; Martin, Daniel F; Boden, Scott D
2009-03-01
Although many aspects of vision have been investigated in professional baseball players, few studies have been performed in developing athletes. The issue of whether youth baseball players have superior stereopsis to nonplayers has not been addressed specifically. The purpose of this study was to determine if youth baseball/softball players have better stereo acuity than non-ball players. Informed consent was obtained from 51 baseball/softball players and 52 non-ball players (ages 10 to 18 years). Subjects completed a questionnaire, and their static near stereo acuity was measured using the Randot Stereotest (Stereo Optical Company, Chicago, Illinois). Stereo acuity was measured as the seconds of arc between the last pair of images correctly distinguished by the subject. The mean stereo acuity score was 25.5 ± 1.7 seconds of arc in the baseball/softball players and 56.2 ± 8.4 seconds of arc in the non-ball players. This difference was statistically significant (P < 0.00001). In addition, a perfect stereo acuity score of 20 seconds of arc was seen in 61% of the ball players and only 23% of the non-ball players (P = 0.0001). Youth baseball/softball players had significantly better static stereo acuity than non-ball players, comparable to professional ball players.
2007-01-01
Purpose Preschool vision screenings often include refractive error or visual acuity (VA) testing to detect amblyopia, as well as alignment testing to detect strabismus. The purpose of this study was to determine the effect of combining screening for eye alignment with screening for refractive error or reduced VA on sensitivity for detection of strabismus, with specificity set at 90% and 94%. Methods Over 3 years, 4040 preschool children were screened in the Vision in Preschoolers (VIP) Study, with different screening tests administered each year. Examinations were performed to identify children with strabismus. The best screening tests for detecting children with any targeted condition were noncycloplegic retinoscopy (NCR), Retinomax autorefractor (Right Manufacturing, Virginia Beach, VA), SureSight Vision Screener (Welch-Allyn, Inc., Skaneateles, NY), and Lea Symbols (Precision Vision, LaSalle, IL and Good-Lite Co., Elgin, IL) and HOTV optotypes VA tests. Analyses were conducted with these tests of refractive error or VA paired with the best tests for detecting strabismus (unilateral cover testing, Random Dot “E” [RDE] and Stereo Smile Test II [Stereo Optical, Inc., Chicago, IL]; and MTI PhotoScreener [PhotoScreener, Inc., Palm Beach, FL]). The change in sensitivity that resulted from combining a test of eye alignment with a test of refractive error or VA was determined with specificity set at 90% and 94%. Results Among the 4040 children, 157 were identified as having strabismus. For screening tests conducted by eye care professionals, the addition of a unilateral cover test to a test of refraction generally resulted in a statistically significant increase (range, 15%–25%) in detection of strabismus. For screening tests administered by trained lay screeners, the addition of Stereo Smile II to SureSight resulted in a statistically significant increase (21%) in sensitivity for detection of strabismus. Conclusions The most efficient and low-cost ways to achieve a statistically significant increase in sensitivity for detection of strabismus were by combining the unilateral cover test with the autorefractor (Retinomax) administered by eye care professionals and by combining Stereo Smile II with SureSight administered by trained lay screeners. The decision of whether to include a test of alignment should be based on the screening program’s goals (e.g., targeted visual conditions) and resources. PMID:17591881
Orthographic Stereo Correlator on the Terrain Model for Apollo Metric Images
NASA Technical Reports Server (NTRS)
Kim, Taemin; Husmann, Kyle; Moratto, Zachary; Nefian, Ara V.
2011-01-01
A stereo correlation method in the object domain is proposed to generate accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce high-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. Given the camera parameters of an image pair from bundle adjustment in ASP, a correlation window is defined on the terrain with the predefined surface normal of a post, rather than in the image domain. The squared error of the back-projected images on the local terrain is minimized with respect to the post elevation. This one-dimensional optimization is solved efficiently and improves the accuracy of the elevation estimate.
Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline
NASA Technical Reports Server (NTRS)
Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor
2010-01-01
A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit to improve both the accuracy and robustness of the estimate. A stepwise regression method is applied to estimate the relaxed weight of each observation.
Satellite markers: a simple method for ground truth car pose on stereo video
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco
2018-04-01
Predicting the future location of other cars is a must for advanced safety systems, and remote estimation of a car's pose, particularly its heading angle, is key to predicting its future location. Stereo vision systems make it possible to obtain the 3D information of a scene. Ground truth in this specific context is referential information about the depth, shape, and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task usually associated with combining different kinds of sensors. The novelty of this paper is a method to generate ground truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system in motion, since the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for the 3D ground truth generation. In our case study, we focus on accurate estimation of the heading angle of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, as well as the instantaneous spatial orientation of each camera at frame level.
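Recovering an object's pose from a handful of detected markers with a calibrated camera is a perspective-n-point problem, which is plausibly the role a camera calibration tool plays in such a pipeline. A hedged OpenCV sketch, where the marker layout, pixel detections, and intrinsics are made-up placeholders rather than the paper's values:

```python
import numpy as np
import cv2

object_pts = np.array([[0.0, 0.0, 0.0],    # marker positions on the car,
                       [1.2, 0.0, 0.0],    # expressed in the car's own
                       [1.2, 0.8, 0.0],    # frame (metres); hypothetical
                       [0.0, 0.8, 0.0]])   # coplanar layout
image_pts = np.array([[412.0, 305.0],      # their detected pixel locations
                      [640.0, 311.0],
                      [633.0, 468.0],
                      [405.0, 460.0]])
K = np.array([[900.0, 0.0, 512.0],         # assumed camera intrinsics
              [0.0, 900.0, 384.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)                              # rotation matrix
heading = np.degrees(np.arctan2(R[1, 0], R[0, 0]))      # yaw-style heading angle
print("car heading (deg):", heading)
```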
Ranging through Gabor logons-a consistent, hierarchical approach.
Chang, C; Chatterjee, S
1993-01-01
In this work, the correspondence problem in stereo vision is handled by matching two sets of dense feature vectors. Inspired by biological evidence, these feature vectors are generated by a correlation between a bank of Gabor sensors and the intensity image. The sensors consist of two-dimensional Gabor filters at various scales (spatial frequencies) and orientations, which bear close resemblance to the receptive field profiles of simple V1 cells in visual cortex. A hierarchical, stochastic relaxation method is then used to obtain the dense stereo disparities. Unlike traditional hierarchical methods for stereo, feature based hierarchical processing yields consistent disparities. To avoid false matchings due to static occlusion, a dual matching, based on the imaging geometry, is used.
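A bank of 2D Gabor sensors at several scales and orientations, of the kind described above, can be generated with OpenCV; the kernel size and parameters below are illustrative choices, not the authors' filter bank:

```python
import numpy as np
import cv2

def gabor_features(gray, scales=(4.0, 8.0), n_orient=4):
    """Filter an image with Gabor kernels at several wavelengths and
    orientations; each response map is one feature channel, loosely
    mimicking V1-like receptive field profiles."""
    feats = []
    for lambd in scales:                       # wavelength ~ spatial scale
        for k in range(n_orient):
            theta = k * np.pi / n_orient       # orientation of the sensor
            kern = cv2.getGaborKernel((31, 31), sigma=0.5 * lambd,
                                      theta=theta, lambd=lambd, gamma=0.8)
            feats.append(cv2.filter2D(gray, cv2.CV_32F, kern))
    return np.stack(feats, axis=-1)            # H x W x (scales * orientations)
```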
Dynamic programming and graph algorithms in computer vision.
Felzenszwalb, Pedro F; Zabih, Ramin
2011-04-01
Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.
Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities, and epipolar images are assumed as its input. The epipolar geometry of linear array scanners is not a straight line, as it is for frame cameras, and traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model, or ground control points. In this paper we propose a new epipolar resampling method that works without this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering the stereo pair. The original images are also divided into small tiles. By omitting the need for extra information, the speed of the matching algorithm is increased and the demand for large temporary memory is decreased. Our experiments on a GeoEye-1 stereo pair captured over the city of Qom, Iran, demonstrate that the epipolar images are generated with sub-pixel accuracy.
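The feature-based, RPC-free spirit of this method can be approximated with OpenCV's uncalibrated rectification path; the sketch below shows that generic pipeline rather than the authors' algorithm, and assumes an OpenCV build with SIFT available plus placeholder file names:

```python
import numpy as np
import cv2

left = cv2.imread("left.tif", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.tif", cv2.IMREAD_GRAYSCALE)

# Match features between the stereo pair.
sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(left, None)
k2, d2 = sift.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)

p1 = np.float32([k1[m.queryIdx].pt for m in matches])
p2 = np.float32([k2[m.trainIdx].pt for m in matches])

# Fundamental matrix from the matches, then rectifying homographies.
F, inliers = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC)
h, w = left.shape
ok, H1, H2 = cv2.stereoRectifyUncalibrated(p1[inliers.ravel() == 1],
                                           p2[inliers.ravel() == 1], F, (w, h))
left_rect = cv2.warpPerspective(left, H1, (w, h))
right_rect = cv2.warpPerspective(right, H2, (w, h))
```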
Railway clearance intrusion detection method with binocular stereo vision
NASA Astrophysics Data System (ADS)
Zhou, Xingfang; Guo, Baoqing; Wei, Wei
2018-03-01
In railway construction and operation, objects intruding into the railway clearance gravely threaten the safety of railway operation, so real-time intrusion detection is of great importance. To address the shortcomings of single-image methods, which are insensitive to depth and subject to shadow interference, an intrusion detection method based on binocular stereo vision is proposed that reconstructs the 3D scene to locate objects and judge clearance intrusion. The binocular cameras are calibrated with Zhang Zhengyou's method. To speed up 3D reconstruction, a suspicious region is first determined by background differencing on a single camera's image sequence; image rectification, stereo matching, and 3D reconstruction are executed only when a suspicious region exists. A transformation matrix from the Camera Coordinate System (CCS) to the Track Coordinate System (TCS), computed using the gauge constant, transfers the 3D point cloud into the TCS, where the point cloud is used to calculate object position and intrusion. Experiments in a railway scene show a position precision better than 10 mm. The method is effective for clearance intrusion detection and satisfies the requirements of railway applications.
When Dijkstra Meets Vanishing Point: A Stereo Vision Approach for Road Detection.
Zhang, Yigong; Su, Yingna; Yang, Jian; Ponce, Jean; Kong, Hui
2018-05-01
In this paper, we propose a vanishing-point constrained Dijkstra road model for road detection in a stereo-vision paradigm. First, the stereo camera is used to generate the u- and v-disparity maps of the road image, from which the horizon can be extracted. With the horizon and ground region constraints, we can robustly locate the vanishing point of the road region. Second, a weighted graph is constructed using all pixels of the image, and the detected vanishing point is treated as the source node of the graph. By computing a vanishing-point constrained Dijkstra minimum-cost map, where both disparity and the gradient of the gray image are used to calculate the cost between two neighboring pixels, the problem of detecting road borders in the image is transformed into that of finding two shortest paths that originate from the vanishing point and end at two pixels in the last row of the image. The proposed approach has been implemented and tested on 2600 grayscale images of different road scenes in the KITTI data set. The experimental results demonstrate that this training-free approach can detect the horizon, vanishing point, and road regions very accurately and robustly, achieving promising performance.
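The core computation is a single-source minimum-cost map grown from the vanishing point over the pixel grid. A minimal Dijkstra sketch, assuming the per-pixel cost has already been computed from disparity and gray-level gradients as the abstract describes:

```python
import heapq
import numpy as np

def dijkstra_cost_map(cost, source):
    """Single-source minimum-cost map over a 4-connected pixel grid.

    `cost[y, x]` is the per-pixel traversal cost and `source` is the
    detected vanishing point (y, x); road borders would then be the two
    cheapest paths from the source to pixels in the last image row.
    """
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                      # stale heap entry
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return dist
```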
NASA Astrophysics Data System (ADS)
Marshall, Jonathan A.
1992-12-01
A simple self-organizing neural network model, called an EXIN network, that learns to process sensory information in a context-sensitive manner, is described. EXIN networks develop efficient representation structures for higher-level visual tasks such as segmentation, grouping, transparency, depth perception, and size perception. Exposure to a perceptual environment during a developmental period serves to configure the network to perform appropriate organization of sensory data. A new anti-Hebbian inhibitory learning rule permits the superposition of multiple simultaneous neural activations (multiple winners), while maintaining contextual consistency constraints, instead of forcing winner-take-all pattern classifications. The activations can represent multiple patterns simultaneously and can represent uncertainty. The network performs parallel parsing, credit attribution, and simultaneous constraint satisfaction. EXIN networks can learn to represent multiple oriented edges even where they intersect, and can learn to represent multiple transparently overlaid surfaces defined by stereo or motion cues. In the case of stereo transparency, the inhibitory learning implements a uniqueness constraint yet permits coactivation of cells representing multiple disparities at the same image location. Thus two or more disparities can be active simultaneously without interference. This behavior is analogous to that of Prazdny's stereo vision algorithm, with the bonus that each binocular point is assigned a unique disparity. In a large implementation, such a neural network would also be able to represent effectively the disparities of a cloud of points at random depths, like human observers, and unlike Prazdny's method.
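A toy anti-Hebbian inhibitory update, far simpler than the EXIN rule but in the same spirit (co-active units grow mutual inhibition, so decorrelated units can remain co-active instead of a single winner); every constant here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = np.zeros((n, n))                      # lateral inhibitory weights
eta = 0.05                                # assumed learning rate

for _ in range(1000):
    x = rng.random(n)                     # feedforward input activations
    y = np.maximum(x - W @ x, 0.0)        # activity after lateral inhibition
    W += eta * np.outer(y, y)             # anti-Hebbian: co-activity -> inhibition
    np.fill_diagonal(W, 0.0)              # no self-inhibition
    W = np.clip(W, 0.0, 1.0)              # keep weights bounded (toy choice)
```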
Bloch, Edward; Uddin, Nabil; Gannon, Laura; Rantell, Khadija; Jain, Saurabh
2015-01-01
Background Stereopsis is believed to be advantageous for surgical tasks that require precise hand-eye coordination. We investigated the effects of short-term and long-term absence of stereopsis on motor task performance in three-dimensional (3D) and two-dimensional (2D) viewing conditions. Methods 30 participants with normal stereopsis and 15 participants with absent stereopsis performed a simulated surgical task both in free space under direct vision (3D) and via a monitor (2D), with both eyes open and one eye covered in each condition. Results The stereo-normal group scored higher, on average, than the stereo-absent group with both eyes open under direct vision (p<0.001). Both groups performed comparably in monocular and binocular monitor viewing conditions (p=0.579). Conclusions High-grade stereopsis confers an advantage when performing a fine motor task under direct vision. However, stereopsis does not appear advantageous to task performance under 2D viewing conditions, such as in video-assisted surgery. PMID:25185439
Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications
Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo
2016-01-01
Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments. PMID:27649178
Development of Non-contact Respiratory Monitoring System for Newborn Using a FG Vision Sensor
NASA Astrophysics Data System (ADS)
Kurami, Yoshiyuki; Itoh, Yushi; Natori, Michiya; Ohzeki, Kazuo; Aoki, Yoshimitsu
In recent years, improvements in neonatal care have been strongly hoped for, as the birth rate of low-birth-weight babies increases. Respiration of low-birth-weight babies is especially unstable because their central nervous system and respiratory function are immature, so these infants often develop respiratory diseases. In a NICU (Neonatal Intensive Care Unit), neonatal respiration is monitored at all times using a cardio-respiratory monitor and a pulse oximeter. These contact-type sensors can measure respiratory rate and SpO2 (Saturation of Peripheral Oxygen). However, because a contact-type sensor might damage the newborn's skin, monitoring neonatal respiration with them is a real burden. Therefore, we developed a respiratory monitoring system for newborns using a FG (Fiber Grating) vision sensor. The FG vision sensor is an active stereo vision sensor that makes non-contact 3D measurement possible. A respiratory waveform is calculated by detecting the vertical motion of the thoracic and abdominal region during respiration. We carried out a clinical experiment in the NICU and confirmed that the accuracy of the obtained respiratory waveform was high. Non-contact respiratory monitoring of newborns using a FG vision sensor enables a minimally invasive procedure.
Offshore remote sensing of the ocean by stereo vision systems
NASA Astrophysics Data System (ADS)
Gallego, Guillermo; Shih, Ping-Chang; Benetazzo, Alvise; Yezzi, Anthony; Fedele, Francesco
2014-05-01
In recent years, remote sensing imaging systems for the measurement of oceanic sea states have attracted renewed attention. Imaging technology is economical and non-invasive, and it enables a better understanding of the space-time dynamics of ocean waves over an area rather than at the selected point locations of previous monitoring methods (buoys, wave gauges, etc.). We present recent progress in space-time measurement of ocean waves using stereo vision systems on offshore platforms, focusing on sea states with wavelengths in the range of 0.01 m to 1 m. Both traditional disparity-based systems and modern elevation-based ones are presented in a variational optimization framework: the main idea is to pose the stereoscopic reconstruction problem of the surface of the ocean in a variational setting and design an energy functional whose minimizer is the desired temporal sequence of wave heights. The functional combines photometric observations as well as spatial and temporal smoothness priors. Disparity methods estimate the disparity between images as an intermediate step toward retrieving the depth of the waves with respect to the cameras, whereas elevation methods estimate the ocean surface displacements directly in 3-D space. Both techniques are used to measure ocean waves from real data collected at offshore platforms in the Black Sea (Crimean Peninsula, Ukraine) and the Northern Adriatic Sea (Venice coast, Italy). Then, the statistical and spectral properties of the resulting observed waves are analyzed. We show the advantages and disadvantages of the presented stereo vision systems and discuss future lines of research to improve their performance on critical issues such as the robustness of the camera calibration in spite of undesired variations of the camera parameters, or the processing time that it takes to retrieve ocean wave measurements from the stereo videos, which are very large datasets that need to be processed efficiently to be of practical usage. Multiresolution and short-time approaches would improve the efficiency and scalability of the techniques so that wave displacements are obtained in feasible times.
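As a sketch of the variational idea (the specific terms and weights here are illustrative assumptions, not the paper's exact formulation), the wave-height field Z(x,t) can be sought as the minimizer of an energy that balances a photometric data term against spatial and temporal smoothness priors:

```latex
E(Z) = \int_{\Omega} \sum_{i} \big( I_i(\pi_i(x, Z)) - \bar{I}(x) \big)^2 \, dx
     \;+\; \alpha \int_{\Omega} \lVert \nabla Z \rVert^2 \, dx
     \;+\; \beta \int_{\Omega} \Big( \frac{\partial Z}{\partial t} \Big)^2 \, dx
```

where I_i is the image from camera i, \pi_i projects a surface point onto that camera's image plane, \bar{I} is the estimated surface radiance, and \alpha, \beta weight the smoothness priors.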
Object Tracking Vision System for Mapping the UCN τ Apparatus Volume
NASA Astrophysics Data System (ADS)
Lumb, Rowan; UCNtau Collaboration
2016-09-01
The UCN τ collaboration has an immediate goal to measure the lifetime of the free neutron to within 0.1%, i.e. about 1 s. The UCN τ apparatus is a magneto-gravitational "bottle" system. This system holds low energy, or ultracold, neutrons in the apparatus with the constraint of gravity, and keeps these low energy neutrons from interacting with the bottle via a strong 1 T surface magnetic field created by a bowl-shaped array of permanent magnets. The apparatus is wrapped with energized coils to supply a magnetic field throughout the "bottle" volume to prevent depolarization of the neutrons. An object-tracking stereo-vision system will be presented that precisely tracks a Hall probe and allows a mapping of the magnetic field throughout the volume of the UCN τ bottle. The stereo-vision system utilizes two cameras and open-source OpenCV software to track an object's 3-D position in space in real time. The desired resolution is +/-1 mm along each axis. The vision system is being used as part of an even larger system to map the magnetic field of the UCN τ apparatus and expose any possible systematic effects due to field cancellation or low field points, which could allow neutrons to depolarize and possibly escape from the apparatus undetected.
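As a rough illustration of this kind of two-camera OpenCV tracking (the projection matrices and pixel coordinates below are made-up stand-ins, not the collaboration's calibration), a tracked point can be triangulated to a 3-D position as follows:

```python
import cv2
import numpy as np

# Illustrative intrinsics and a 100 mm baseline; real values would come
# from a stereo calibration of the two cameras.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

# Pixel coordinates of the tracked probe in each view (shape 2xN).
pts_left = np.array([[310.0], [242.0]])
pts_right = np.array([[295.0], [242.0]])

# Triangulate to homogeneous coordinates and dehomogenize (units: mm).
X_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
X = (X_h[:3] / X_h[3]).ravel()
print("probe position [mm]:", X)
```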
Handheld pose tracking using vision-inertial sensors with occlusion handling
NASA Astrophysics Data System (ADS)
Li, Juan; Slembrouck, Maarten; Deboeverie, Francis; Bernardos, Ana M.; Besada, Juan A.; Veelaert, Peter; Aghajan, Hamid; Casar, José R.; Philips, Wilfried
2016-07-01
Tracking of a handheld device's three-dimensional (3-D) position and orientation is fundamental to various application domains, including augmented reality (AR), virtual reality, and interaction in smart spaces. Existing systems still offer limited performance in terms of accuracy, robustness, computational cost, and ease of deployment. We present a low-cost, accurate, and robust system for handheld pose tracking using fused vision and inertial data. The integration of measurements from embedded accelerometers reduces the number of unknown parameters in the six-degree-of-freedom pose calculation. The proposed system requires two light-emitting diode (LED) markers to be attached to the device, which are tracked by external cameras through an algorithm robust against illumination changes. Three data fusion methods have been proposed, including the triangulation-based stereo-vision system, the constraint-based stereo-vision system with occlusion handling, and the triangulation-based multivision system. Real-time demonstrations of the proposed system applied to AR and 3-D gaming are also included. The accuracy assessment of the proposed system is carried out by comparison with data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system has achieved high accuracy of a few centimeters in position estimation and a few degrees in orientation estimation.
Dynamic Programming and Graph Algorithms in Computer Vision
Felzenszwalb, Pedro F.; Zabih, Ramin
2013-01-01
Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting, since by carefully exploiting problem structure they often provide non-trivial guarantees concerning solution quality. In this paper we briefly review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo; the mid-level problem of interactive object segmentation; and the high-level problem of model-based recognition. PMID:20660950
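To make the low-level stereo example concrete, here is a minimal scanline dynamic-programming sketch in Python; it is a generic textbook-style illustration of the technique rather than any specific formulation the paper reviews.

```python
import numpy as np

def scanline_dp_stereo(left_row, right_row, max_disp=16, smooth=4.0):
    """Minimal 1-D dynamic-programming stereo on one rectified scanline.

    Picks a disparity per pixel minimizing absolute intensity difference
    plus a linear penalty on disparity jumps between neighboring pixels,
    then backtracks the optimal path.
    """
    n = len(left_row)
    BIG = 1e9
    cost = np.full((n, max_disp), BIG)
    for x in range(n):
        for d in range(max_disp):
            if x - d >= 0:
                cost[x, d] = abs(float(left_row[x]) - float(right_row[x - d]))

    acc = cost.copy()                        # accumulated cost
    back = np.zeros((n, max_disp), int)      # backpointers
    disps = np.arange(max_disp)
    for x in range(1, n):
        for d in range(max_disp):
            prev = acc[x - 1] + smooth * np.abs(disps - d)
            back[x, d] = int(np.argmin(prev))
            acc[x, d] = cost[x, d] + prev[back[x, d]]

    # Backtrack the minimizing disparity path.
    disp = np.zeros(n, int)
    disp[-1] = int(np.argmin(acc[-1]))
    for x in range(n - 2, -1, -1):
        disp[x] = back[x + 1, disp[x + 1]]
    return disp

# Toy example: the right scanline is the left one shifted by 3 pixels.
left = np.array([0, 0, 10, 20, 30, 0, 0, 0], float)
right = np.roll(left, -3)
print(scanline_dp_stereo(left, right, max_disp=6))
```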
Bubble behavior characteristics based on virtual binocular stereo vision
NASA Astrophysics Data System (ADS)
Xue, Ting; Xu, Ling-shuang; Zhang, Shang-zhen
2018-01-01
The three-dimensional (3D) behavior characteristics of bubbles rising in gas-liquid two-phase flow are of great importance to the study of bubbly flow mechanisms and to guiding engineering practice. Based on the dual-perspective imaging of virtual binocular stereo vision, the 3D behavior characteristics of bubbles in gas-liquid two-phase flow are studied in detail, which effectively increases the projection information of bubbles and yields more accurate behavior features. In this paper, the variations of bubble equivalent diameter, volume, velocity and trajectory in the rising process are estimated, and the factors affecting bubble behavior characteristics are analyzed. It is shown that the method is real-time and valid, that the equivalent diameter of a bubble rising in stagnant water changes periodically, and that the crests and troughs in the equivalent-diameter curve appear alternately. The bubble behavior characteristics as well as the spiral amplitude are affected by the orifice diameter and the gas volume flow.
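For reference, the equivalent diameter of a bubble is conventionally defined as the diameter of a sphere of the same volume (a standard convention assumed here, since the abstract does not state its definition):

```latex
d_{eq} = \left( \frac{6V}{\pi} \right)^{1/3}
```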
Disparity channels in early vision
Roe, AW; Parker, AJ; Born, RT; DeAngelis, GC
2008-01-01
The last decade has seen a dramatic increase in our knowledge of the neural basis of stereopsis. New cortical areas have been found to represent binocular disparities, new representations of disparity information (e.g., relative disparity signals) have been uncovered, the first topographic maps of disparity have been measured, and the first causal links between neural activity and depth perception have been established. Equally exciting is the finding that training and experience affect how signals are channeled through different brain areas, a flexibility that may be crucial for learning, plasticity, and recovery of function. The collective efforts of several laboratories have established stereo vision as one of the most productive model systems for elucidating the neural basis of perception. Much remains to be learned about how the disparity signals that are initially encoded in primary visual cortex are routed to and processed by extrastriate areas to mediate the diverse capacities of 3D vision that enhance our daily experience of the world. PMID:17978018
NASA Astrophysics Data System (ADS)
Du, Jia-Wei; Wang, Xuan-Yin; Zhu, Shi-Qiang
2017-10-01
Based on the process by which the spatial depth cue is obtained by a single eye, a monocular stereo vision method for measuring the depth information of spatial objects is proposed in this paper, and a humanoid monocular stereo measuring system with two degrees of freedom is demonstrated. The proposed system can effectively obtain the three-dimensional (3-D) structure of spatial objects at different distances without changing the position of the system, and it has the advantages of being compact, smart, and flexible. The bionic optical imaging system we proposed in a previous paper, named ZJU SY-I, was employed; its vision characteristic mimics the eye's resolution decay from the center to the periphery of the visual field. We simplified the eye's rotation in the eye socket and the coordinated rotation of other organs of the body into two orthogonal rotations, and employed a rotating platform with two rotational degrees of freedom to drive ZJU SY-I. The structure of the proposed system is described in detail. The depth of a single feature point on the spatial object is derived, as well as its spatial coordinates. With the focal length adjustment of ZJU SY-I and the rotation control of the rotating platform, the spatial coordinates of all feature points on the spatial object can be obtained, and then the 3-D structure of the spatial object can be reconstructed. 3-D structure measurement experiments on two spatial objects with different distances and sizes were conducted. Some main factors affecting the measurement accuracy of the proposed system are analyzed and discussed.
Global Methods for Image Motion Analysis
1992-10-01
a variant of the same error function as in Adiv [2]. Another related approach was presented by Maybank [46,45]. Nearly all researchers in motion...with an application to stereo vision. In Proc. 7th Intern. Joint Conference on AI, pages 674-679, Vancouver, 1981. [45] S. J. Maybank. Algorithm for...analysing optical flow based on the least-squares method. Image and Vision Computing, 4:38-42, 1986. [46] S. J. Maybank. A Theoretical Study of Optical...
Binocular Vision-Based Position and Pose of Hand Detection and Tracking in Space
NASA Astrophysics Data System (ADS)
Jun, Chen; Wenjun, Hou; Qing, Sheng
After studying image segmentation, the CamShift target tracking algorithm and the stereo vision model of space, an improved algorithm based on frame differencing and a new space point positioning model are proposed, and a binocular visual motion tracking system was constructed to verify the improved algorithm and the new model. The problems of detecting and tracking the spatial location and pose of the hand have thus been solved.
Implementation of a stereofluoroscopic system
NASA Technical Reports Server (NTRS)
Rivers, D. B.
1976-01-01
Clinical applications of a 3-D video imaging technique developed by NASA for observation and control of remote manipulators are discussed. Incorporation of this technique in a stereo fluoroscopic system provides reduced radiation dosage and greater vision and mobility of the user.
Current state of the art of vision based SLAM
NASA Astrophysics Data System (ADS)
Muhammad, Naveed; Fofi, David; Ainouz, Samia
2009-02-01
The ability of a robot to localise itself and simultaneously build a map of its environment (Simultaneous Localisation and Mapping or SLAM) is a fundamental characteristic required for autonomous operation of the robot. Vision sensors are very attractive for application in SLAM because of their rich sensory output and cost effectiveness. Different issues are involved in the problem of vision based SLAM, and many different approaches exist to solve them. This paper gives a classification of state-of-the-art vision based SLAM techniques in terms of (i) imaging systems used for performing SLAM, which include single cameras, stereo pairs, multiple camera rigs and catadioptric sensors, (ii) features extracted from the environment in order to perform SLAM, which include point features and line/edge features, (iii) initialisation of landmarks, which can either be delayed or undelayed, (iv) SLAM techniques used, which include Extended Kalman Filtering, Particle Filtering, biologically inspired techniques like RatSLAM, and other techniques like Local Bundle Adjustment, and (v) use of wheel odometry information. The paper also presents the implementation and analysis of stereo pair based EKF SLAM on synthetic data. Results prove the technique to work successfully in the presence of considerable amounts of sensor noise. We believe that the state of the art presented in this paper can serve as a basis for future research in the area of vision based SLAM. It will permit further research in the area to be carried out in an efficient and application-specific way.
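For readers unfamiliar with the filtering machinery, the generic EKF predict/update skeleton that stereo-pair EKF SLAM instantiates looks like the following sketch (plain NumPy; the motion model f, measurement model h and their Jacobians F, H are application-supplied and assumed here):

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate the state x through motion model f with Jacobian F."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Fuse a landmark observation z via measurement model h, Jacobian H."""
    y = z - h(x)                       # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In stereo-pair SLAM the state x stacks the robot pose and landmark positions, and h projects a landmark into the left and right images.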
Calibration of stereo rigs based on the backward projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin
2016-08-01
High-accuracy 3D measurement based on binocular vision system is heavily dependent on the accurate calibration of two rigidly-fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee the minimal 2D pixel errors, but not the minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera by planar constraints provided by the planar pattern target. Then combined with pre-defined spatial points, intrinsic and extrinsic parameters of the stereo-rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study for the method in the presence of image noise and lens distortions is implemented. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.
Precise visual navigation using multi-stereo vision and landmark matching
NASA Astrophysics Data System (ADS)
Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh
2007-04-01
Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques which greatly reduce the long term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase the pose estimation accuracy as well as reduce failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation. Using the matched landmarks, a pose correction technique is used to eliminate the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1~5% localization error) over long-distance navigation both indoors and outdoors. Real world experiments on a human-worn system show that the location can be estimated within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
Stereo-vision-based terrain mapping for off-road autonomous navigation
NASA Astrophysics Data System (ADS)
Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.
2009-05-01
Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as nogo regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission
NASA Technical Reports Server (NTRS)
Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.
2004-01-01
In January 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a downlooking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.
A multimodal 3D framework for fire characteristics estimation
NASA Astrophysics Data System (ADS)
Toulouse, T.; Rossi, L.; Akhloufi, M. A.; Pieri, A.; Maldague, X.
2018-02-01
In the last decade we have witnessed an increasing interest in using computer vision and image processing in forest fire research. Image processing techniques have been successfully used in different fire analysis areas such as early detection, monitoring, modeling and fire front characteristics estimation. While the majority of the work deals with the use of 2D visible spectrum images, recent work has introduced the use of 3D vision in this field. This work proposes a new multimodal vision framework permitting the extraction of the three-dimensional geometrical characteristics of fires captured by multiple 3D vision systems. The 3D system is a multispectral stereo system operating in both the visible and near-infrared (NIR) spectral bands. The framework supports the use of multiple stereo pairs positioned so as to capture complementary views of the fire front during its propagation. Multimodal registration is conducted using the captured views in order to build a complete 3D model of the fire front. The registration process is achieved using multisensory fusion based on visual data (2D and NIR images), GPS positions and IMU inertial data. Experiments were conducted outdoors in order to show the performance of the proposed framework. The obtained results are promising and show the potential of using the proposed framework in operational scenarios for wildland fire research and as a decision management system in fire fighting.
Investigation of 1:1,000 Scale Map Generation by Stereo Plotting Using UAV Images
NASA Astrophysics Data System (ADS)
Rhee, S.; Kim, T.
2017-08-01
Large scale maps and image mosaics are representative geospatial data that can be extracted from UAV images. Map drawing using UAV images can be performed either by creating orthoimages and digitizing them, or by stereo plotting. While maps generated by digitization may serve the need for geospatial data, many institutions and organizations require map drawing using stereoscopic vision on stereo plotting systems. However, there are several aspects to be checked for UAV images to be utilized for stereo plotting. The first aspect is the accuracy of exterior orientation parameters (EOPs) generated through automated bundle adjustment processes. It is well known that the GPS and IMU sensors mounted on a UAV are not very accurate. It is necessary to adjust the initial EOPs accurately using tie points. For this purpose, we have developed a photogrammetric incremental bundle adjustment procedure. The second aspect is unstable shooting conditions compared to aerial photographing. Unstable image acquisition may bring uneven stereo coverage, which will eventually result in accuracy loss. Oblique stereo pairs will create eye fatigue. The third aspect is the small coverage of UAV images. This raises an efficiency issue for stereo plotting of UAV images. More importantly, it makes contour generation from UAV images very difficult. This paper discusses effects related to these three aspects. In this study, we tried to generate a 1:1,000 scale map from the dataset using EOPs generated by software developed in-house. We evaluated the Y-disparity of the tie points extracted automatically through the photogrammetric incremental bundle adjustment process, and could confirm that stereoscopic viewing is possible. Stereoscopic plotting work was carried out by a professional photogrammetrist. In order to analyse the accuracy of map drawing using stereoscopic vision, we compared the horizontal and vertical position differences between adjacent models after drawing a specific model. The results of the analysis showed that the errors were within the specification of a 1:1,000 map. Although the Y-parallax can be eliminated, it is still necessary to improve the accuracy of absolute ground position error in order to apply this technique to actual work. There are a few models in which the difference in height between adjacent models is about 40 cm. We analysed the stability of UAV images by checking angle differences between adjacent images. We also analysed the average area covered by one stereo model and discussed the possible difficulty associated with this narrow coverage. In future work we will consider how to reduce position errors and improve map drawing performance from UAVs.
The zone of comfort: Predicting visual discomfort with stereo displays
Shibata, Takashi; Kim, Joohwan; Hoffman, David M.; Banks, Martin S.
2012-01-01
Recent increased usage of stereo displays has been accompanied by public concern about potential adverse effects associated with prolonged viewing of stereo imagery. There are numerous potential sources of adverse effects, but we focused on how vergence–accommodation conflicts in stereo displays affect visual discomfort and fatigue. In one experiment, we examined the effect of viewing distance on discomfort and fatigue. We found that conflicts of a given dioptric value were slightly less comfortable at far than at near distance. In a second experiment, we examined the effect of the sign of the vergence–accommodation conflict on discomfort and fatigue. We found that negative conflicts (stereo content behind the screen) are less comfortable at far distances and that positive conflicts (content in front of screen) are less comfortable at near distances. In a third experiment, we measured phoria and the zone of clear single binocular vision, which are clinical measurements commonly associated with correcting refractive error. Those measurements predicted susceptibility to discomfort in the first two experiments. We discuss the relevance of these findings for a wide variety of situations including the viewing of mobile devices, desktop displays, television, and cinema. PMID:21778252
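As a worked example of the quantity under study (standard optics, not data from the paper): the vergence-accommodation conflict is the difference, in diopters (reciprocal meters), between the screen distance driving accommodation and the simulated content distance driving vergence.

```python
def va_conflict_diopters(screen_dist_m, content_dist_m):
    """Vergence-accommodation conflict in diopters (reciprocal meters).

    Accommodation is driven by the physical screen; vergence by the
    simulated depth of the stereo content.
    """
    return 1.0 / screen_dist_m - 1.0 / content_dist_m

# Content simulated 0.5 m in front of a screen viewed at 2 m:
# 1/2.0 - 1/1.5 gives about -0.17 D; the magnitude (0.17 diopters)
# is what the comfort-zone analysis is concerned with.
print(va_conflict_diopters(2.0, 1.5))
```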
HOPIS: hybrid omnidirectional and perspective imaging system for mobile robots.
Lin, Huei-Yung; Wang, Min-Liang
2014-09-04
In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach.
Auto-converging stereo cameras for 3D robotic tele-operation
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Aycock, Todd; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The autoconvergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
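For intuition about what the convergence adjustment computes, simple geometry gives the toe-in angle at which the two optical axes cross at a target distance (an illustrative calculation only; the kit's actual algorithm infers the adjustment from scene content, as described above):

```python
import math

def convergence_angle_deg(baseline_m, distance_m):
    """Angle each camera must toe in so the optical axes cross at the target."""
    return math.degrees(math.atan((baseline_m / 2.0) / distance_m))

# Example: 6 cm baseline, object 1.5 m away -> about 1.15 degrees per camera.
print(convergence_angle_deg(0.06, 1.5))
```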
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Several techniques to perform static and dynamic load balancing for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when they are produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same or have similar computational characteristics. These techniques are evaluated by applying them to a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant, and that the overhead of using them is minimal.
Development of a teaching system for an industrial robot using stereo vision
NASA Astrophysics Data System (ADS)
Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki
1997-12-01
The teaching-and-playback method is the main teaching technique for industrial robots. However, this technique takes considerable time and effort. In this study, a new teaching algorithm using stereo vision, based on human demonstrations in front of two cameras, is proposed. In the proposed teaching algorithm, a robot is controlled repetitively according to angles determined by fuzzy set theory until it reaches an instructed teaching point, which is relayed through the cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibrations are needed. This is because fuzzy set theory, which is able to express the control commands to the robot qualitatively, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized with this teaching algorithm. Simulations and experiments have been performed on the proposed teaching system, and the test data have confirmed the usefulness of our design.
Relating Standardized Visual Perception Measures to Simulator Visual System Performance
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Sweet, Barbara T.
2013-01-01
Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
NASA Astrophysics Data System (ADS)
Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad
2009-02-01
In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.
Stereo chromatic contrast sensitivity model to blue-yellow gratings.
Yang, Jiachen; Lin, Yancong; Liu, Yun
2016-03-07
As a fundamental metric of the human visual system (HVS), the contrast sensitivity function (CSF) is typically measured with sinusoidal gratings at detection thresholds for the psychophysically defined cardinal channels: luminance, red-green, and blue-yellow. Chromatic CSF, which is a quick and valid index of human visual performance and various retinal diseases in two-dimensional (2D) space, cannot be directly applied to the measurement of human stereo visual performance. Moreover, no existing perception model considers the influence of the chromatic CSF of inclined planes on depth perception in three-dimensional (3D) space. The main aim of this research is to extend traditional chromatic contrast sensitivity characteristics to 3D space and build a model applicable in 3D space, for example, for strengthening the stereo quality of 3D images. This research also attempts to build a vision model or method to check human visual characteristics of stereo blindness. In this paper, a CRT screen was rotated clockwise and anti-clockwise to form inclined planes. Four inclined planes were selected to investigate human chromatic vision in 3D space, and the contrast threshold of each inclined plane was measured with 18 observers. Stimuli were isoluminant blue-yellow sinusoidal gratings. Horizontal spatial frequencies ranged from 0.05 to 5 c/d. Contrast sensitivity was calculated as the inverse function of the pooled cone contrast threshold. According to the relationship between the spatial frequency of an inclined plane and the horizontal spatial frequency, the chromatic contrast sensitivity characteristics in 3D space were modeled based on the experimental data. The results show that the proposed model can predict human chromatic contrast sensitivity characteristics in 3D space well.
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that computes the uncertainty region of the point location by intersecting the two fields of view of a pixel, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors to five parameters by a rotation transformation. Then we use the fast algorithm of the midpoint method to deduce the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we trace the error propagation of the primitive input errors through the stereo system and throughout the whole analysis process, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
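A generic first-order propagation of the kind this analysis performs can be sketched as follows (numerical Jacobian for brevity; the paper's closed-form midpoint-method derivation is not reproduced):

```python
import numpy as np

def propagate_covariance(f, params, cov_params, eps=1e-6):
    """First-order propagation of input uncertainty to a triangulated point.

    f maps the error parameters (e.g. the five summarized parameters) to
    the 3D point; the Jacobian J is taken numerically around the nominal
    parameters, giving cov(X) = J cov(params) J^T.
    """
    p0 = np.asarray(params, dtype=float)
    x0 = np.asarray(f(p0))
    J = np.zeros((x0.size, p0.size))
    for i in range(p0.size):
        dp = np.zeros_like(p0)
        dp[i] = eps
        J[:, i] = (np.asarray(f(p0 + dp)) - x0) / eps
    return J @ np.asarray(cov_params) @ J.T
```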
Pavement Distress Evaluation Using 3D Depth Information from Stereo Vision
DOT National Transportation Integrated Search
2012-07-01
The focus of the current project funded by MIOH-UTC for the period 9/1/2010-8/31/2011 is to : enhance our earlier effort in providing a more robust image processing based pavement distress : detection and classification system. During the last few de...
WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves
NASA Astrophysics Data System (ADS)
Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise
2017-10-01
Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master for a practitioner, so that the implementation of a sea-wave 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an open-source stereo processing pipeline for sea-wave 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques, both on the disparity map and on the produced point cloud, to remove the vast majority of erroneous points that naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step-by-step and demonstrated on real datasets acquired at sea.
Chiang, Mao-Hsiung; Lin, Hao-Ting
2011-01-01
This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axis pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts of the 3D parallel mechanism robot. In the mechanical system, the 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H∞ tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the positions of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors between the actual position and the calculated 3D position of the end-effector exist. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCD cameras, serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to verify the feasibility of the proposed parallel mechanism robot driven by three vertical pneumatic servo actuators, a full-scale test rig of the proposed parallel mechanism pneumatic robot is set up. Thus, simulations and experiments for different complex 3D motion profiles of the robot end-effector can be successfully achieved. The desired, the actual and the calculated 3D position of the end-effector can be compared in the complex 3D motion control.
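As background for the kinematic analysis mentioned above, the homogeneous transform of one Denavit-Hartenberg link, chained link by link to map joint coordinates to the end-effector pose, is (standard robotics convention, not code from the paper):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one Denavit-Hartenberg link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Forward kinematics: chain the link transforms of a hypothetical 2-link chain.
links = [(0.1, 0.3, 0.0, np.pi / 2), (0.2, 0.0, 0.25, 0.0)]
T = np.eye(4)
for theta, d, a, alpha in links:
    T = T @ dh_transform(theta, d, a, alpha)
print(T[:3, 3])  # end-effector position in the base frame
```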
Bayes filter modification for drivability map estimation with observations from stereo vision
NASA Astrophysics Data System (ADS)
Panchenko, Aleksei; Prun, Viktor; Turchenkov, Dmitri
2017-02-01
Reconstruction of a drivability map for a moving vehicle is a well-known research topic in applied robotics. Here, creating such a map for an autonomous truck on a generally planar surface containing separate obstacles is considered. The source of measurements for the truck is a calibrated pair of cameras. The stereo system detects and reconstructs several types of objects, such as road borders, other vehicles, pedestrians, and general tall or highly saturated objects (e.g. road cones). For creating a robust mapping module we use a modification of Bayes filtering which introduces some novel techniques for the occupancy map update step. Specifically, our modified version remains applicable in the presence of false positive measurement errors, stereo shading and obstacle occlusion. We implemented the technique and achieved real-time computation at 15 FPS on an industrial shake-proof PC. Our real world experiments show the positive effect of the filtering step.
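The textbook Bayes-filter occupancy update that such a module builds on can be written in log-odds form (a baseline sketch; the paper's modifications for false positives, stereo shading and occlusion are not reproduced here):

```python
import numpy as np

def logodds_update(grid, cells, p_occ, l_min=-4.0, l_max=4.0):
    """Standard log-odds occupancy update for cells touched by a measurement.

    p_occ is the inverse sensor model's occupancy probability for those
    cells; values near 0.5 (uninformative) barely move the estimate.
    """
    grid[cells] += np.log(p_occ / (1.0 - p_occ))
    np.clip(grid, l_min, l_max, out=grid)  # bound log-odds to stay responsive
    return grid

# Example: a 10x10 map, one cell hit by a confident obstacle detection.
g = np.zeros((10, 10))
logodds_update(g, (5, 5), 0.8)
```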
SAD-Based Stereo Matching Using FPGAs
NASA Astrophysics Data System (ADS)
Ambrosch, Kristian; Humenberger, Martin; Kubinger, Wilfried; Steininger, Andreas
In this chapter we present a field-programmable gate array (FPGA) based stereo matching architecture. This architecture uses the sum of absolute differences (SAD) algorithm and is targeted at automotive and robotics applications. The disparity maps are calculated using 450×375 input images and a disparity range of up to 150 pixels. We discuss two different implementation approaches for the SAD and analyze their resource usage. Furthermore, block sizes ranging from 3×3 up to 11×11 and their impact on the consumed logic elements as well as on the disparity map quality are discussed. The stereo matching architecture enables a frame rate of up to 600 fps by calculating the data in a highly parallel and pipelined fashion. This way, a software solution optimized by using Intel's Open Source Computer Vision Library running on an Intel Pentium 4 with 3 GHz clock frequency is outperformed by a factor of 400.
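A reference (deliberately unoptimized) software version of SAD block matching, the computation the FPGA architecture parallelizes and pipelines, might look like this sketch:

```python
import numpy as np

def sad_disparity(left, right, block=5, max_disp=64):
    """Brute-force SAD block matching on rectified grayscale images.

    For each pixel, slides a block over the disparity range and keeps
    the disparity with the smallest sum of absolute differences.
    """
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
            costs = [
                np.abs(patch - right[y - r:y + r + 1,
                                     x - d - r:x - d + r + 1].astype(np.int32)).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

The FPGA design obtains its speed-up precisely because every one of these independent per-pixel, per-disparity SAD sums can be computed in parallel hardware.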
Topview stereo: combining vehicle-mounted wide-angle cameras to a distance sensor array
NASA Astrophysics Data System (ADS)
Houben, Sebastian
2015-03-01
The variety of vehicle-mounted sensors needed to fulfill a growing number of driver assistance tasks has become a substantial factor in automobile manufacturing cost. We present a stereo distance method exploiting the overlapping field of view of a multi-camera fisheye surround view system, such as those used for near-range vehicle surveillance tasks, e.g. in parking maneuvers. Hence, we aim at creating a new input signal from sensors that are already installed. Particular properties of wide-angle cameras (e.g. strongly varying resolution) demand an adaptation of the image processing pipeline to several problems that do not arise in classical stereo vision performed with cameras carefully designed for this purpose. We introduce the algorithms for rectification, correspondence analysis, and regularization of the disparity image, discuss the causes of the described caveats and how to avoid them, and present first results on a prototype topview setup.
Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps
Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi
2015-01-01
Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in rich texture regions and object boundaries where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict each pixel's scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation because it treats information obtained from the depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the 3.27% average error rate of previous state-of-the-art methods, our method provides an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other "fused" algorithms in terms of precision. PMID:26308003
Axial-Stereo 3-D Optical Metrology for Inner Profile of Pipes Using a Scanning Laser Endoscope
NASA Astrophysics Data System (ADS)
Gong, Yuanzheng; Johnston, Richard S.; Melville, C. David; Seibel, Eric J.
2015-07-01
With the rapid progress in the development of optoelectronic components and computational power, 3-D optical metrology has become more and more popular in manufacturing and quality control due to its flexibility and high speed. However, most optical metrology methods are limited to external surfaces. This article proposes a new approach to measure tiny internal 3-D surfaces with a scanning fiber endoscope and an axial-stereo vision algorithm. A dense, accurate point cloud of internally machined threads was generated and compared with the corresponding X-ray 3-D data as ground truth, and the quantification was analyzed with the Iterative Closest Point algorithm.
Micro air vehicle autonomous obstacle avoidance from stereo-vision
NASA Astrophysics Data System (ADS)
Brockers, Roland; Kuwata, Yoshiaki; Weiss, Stephan; Matthies, Lawrence
2014-06-01
We introduce a new approach for on-board autonomous obstacle avoidance for micro air vehicles flying outdoors in close proximity to structure. Our approach uses inverse-range, polar-perspective stereo-disparity maps for obstacle detection and representation, and deploys a closed-loop RRT planner that considers flight dynamics for trajectory generation. While motion planning is executed in 3D space, we reduce collision checking to a fast z-buffer-like operation in disparity space, which allows for significant speed-up compared to full 3d methods. Evaluations in simulation illustrate the robustness of our approach, whereas real world flights under tree canopy demonstrate the potential of the approach.
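The z-buffer-like collision test can be sketched as follows (a simplified illustration with assumed conventions: each trajectory sample carries the image coordinates and expected disparity of the vehicle's position, and larger disparity means closer to the camera):

```python
import numpy as np

def trajectory_collides(disparity_map, traj_uvd, margin=1.0):
    """Collision check in disparity space, in the spirit of the approach above.

    Each trajectory sample (u, v, d) is the projected image location and
    expected disparity of the vehicle along the candidate path. If the
    observed disparity exceeds the expected one by more than `margin`,
    an obstacle lies closer than that trajectory point.
    """
    for u, v, d in traj_uvd:
        if disparity_map[int(v), int(u)] > d + margin:
            return True
    return False

# Example: a flat 8x8 disparity map with one close obstacle at (3, 4).
dmap = np.full((8, 8), 2.0)
dmap[3, 4] = 20.0
print(trajectory_collides(dmap, [(2, 2, 5.0), (4, 3, 5.0)]))  # True
```

Because each sample reduces to a single comparison against the disparity map, the check is far cheaper than testing candidate trajectories against a full 3-D reconstruction.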
CAD-model-based vision for space applications
NASA Technical Reports Server (NTRS)
Shapiro, Linda G.
1988-01-01
A pose acquisition system operating in space must be able to perform well in a variety of different applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system being designed and implemented must convert CAD models to vision models, predict visible features from a given viewpoint from the vision models, construct view classes representing views of the objects, and use the view class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.
STEREO Education and Public Outreach Efforts
NASA Technical Reports Server (NTRS)
Kucera, Therese
2007-01-01
STEREO has had a big year this year with its launch and the start of data collection. STEREO has mostly focused on informal educational venues, most notably with STEREO 3D images made available to museums through the NASA Museum Alliance. Other activities have involved making STEREO imagery available through the AMNH network and Viewspace, continued partnership with the Christa McAuliffe Planetarium, data sonification projects, preservice teacher training, and learning activity development.
Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras
NASA Astrophysics Data System (ADS)
El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid
2015-03-01
In this paper, we present a new method for multi-view 3D reconstruction based on a binocular stereo vision system, consisting of two unattached cameras, used to initialize the reconstruction process. Afterwards, the second camera of the stereo system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental: at each insertion of a new image, the camera projection matrix is estimated from the 3D information already computed, and new 3D points are recovered by triangulating matched interest points between the inserted image and the previous one. A local bundle adjustment refines the new projection matrix and the new 3D points. Once all projection matrices are estimated and the matches between consecutive images are detected, a sparse Euclidean 3D reconstruction is obtained. To increase the number of matches and produce a denser reconstruction, the match propagation algorithm, well suited to this kind of camera motion, is applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
Deadly Fires Engulfing Madeira (Anaglyph)
Atmospheric Science Data Center
2016-12-30
... so that north is to the left in order to enable stereo vision (the red lens must be placed over your left eye). The island of Madeira ... in 3D shows that the main body of clouds is indeed very low, while the smoke plume is much higher at the source, dropping lower as it ...
Stereo vision tracking of multiple objects in complex indoor environments.
Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro
2010-01-01
This paper presents a novel system capable of tracking multiple targets in crowded, complex and dynamic indoor environments, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information for each object in the robot's environment; it then distinguishes building elements (ceiling, walls, columns and so on) from the remaining items in the robot's surroundings. All objects around the robot, both dynamic and static, are considered obstacles, except the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used to obtain a multimodal representation of the speed and position of the detected obstacles. Performance of the final system has been tested against state-of-the-art proposals; test results validate the authors' proposal. The designed algorithms and procedures provide a solution for applications where similar multimodal data structures are found.
NASA Astrophysics Data System (ADS)
Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.
2018-04-01
Vision-based navigation has become an attractive solution for autonomous planetary exploration. This paper presents our work designing and building an autonomous, vision-based, GPS-denied unmanned vehicle and developing an ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software package for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine, and a tracked chassis. The ARFM-based VO software contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out, using outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has potential for future planetary exploration.
Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks
NASA Astrophysics Data System (ADS)
Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min
2015-10-01
Vehicle positioning has been the subject of extensive research regarding driving safety and assistance as well as autonomous navigation. The most common positioning technique in automotive use is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings, and it is error prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the positions of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information, such as speed, lane changes, and the driver's condition, through optical wireless links with neighboring vehicles. Thus, the position of a target vehicle too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation in CVP. The proposed technique can estimate a target vehicle's position from only two image points, using stereo vision with the target's rear LEDs as the image points. Simulation results show that our neural-network-based method achieves better accuracy than the computer-vision method.
Accuracy and robustness evaluation in stereo matching
NASA Astrophysics Data System (ADS)
Nguyen, Duc M.; Hanca, Jan; Lu, Shao-Ping; Schelkens, Peter; Munteanu, Adrian
2016-09-01
Stereo matching has received a lot of attention from the computer vision community thanks to its wide range of applications. Despite the large variety of algorithms proposed so far, it is not trivial to select suitable ones for the construction of practical systems. One of the main problems is that many algorithms lack sufficient robustness when employed under various operational conditions, because most methods in the literature are tested and tuned to perform well on one specific dataset. To alleviate this problem, we present an extensive evaluation of the accuracy and robustness of state-of-the-art stereo matching algorithms, using three datasets (Middlebury, KITTI, and MPEG FTV) that represent different operational conditions. Based on the analysis, improvements over existing algorithms are proposed. The experimental results show that our improved versions of cross-based and cost-volume-filtering algorithms outperform the original versions by large margins on the Middlebury and KITTI datasets, and the latter ranks among the best local stereo matching approaches on the KITTI benchmark. Under evaluations using settings specific to depth-image-based-rendering applications, our improved belief propagation algorithm is less complex than MPEG's FTV depth estimation reference software (DERS) while yielding similar depth estimation performance. Finally, several conclusions on stereo matching algorithms are presented.
Hybrid-Based Dense Stereo Matching
NASA Astrophysics Data System (ADS)
Chuang, T. Y.; Ting, H. W.; Jaw, J. J.
2016-06-01
Stereo matching that generates accurate and dense disparity maps is an indispensable technique for 3D exploitation of imagery in computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes remain problematic and await better treatment. In this paper, a hybrid method based on semi-global matching (SGM) is presented to tackle these challenges in dense stereo matching. To ease the sensitivity of SGM cost aggregation to its penalty parameters, a formal way of providing proper penalty estimates is proposed: a shape-adaptive cross-based matching with an edge constraint generates an initial disparity map for penalty estimation. Image edges, indicating potential locations of occlusions and disparity discontinuities, are extracted with the edge drawing algorithm to ensure that the local support regions do not cover significant disparity changes. Besides, an additional penalty parameter P_e is imposed on the energy function of the SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting values derived from both the SGM cost aggregation and U-SURF matching, providing more reliable estimates at disparity discontinuities. Evaluations on the Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potency of the hybrid dense stereo matching method.
NASA Astrophysics Data System (ADS)
Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.
2006-10-01
In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry because changes of gray-scale or texture are not obvious in close-range stereo images. Their main shortcoming is that the geometric information of matching points is not fully used, which leads to wrong matches in regions with poor texture. To make full use of both geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper, tailored to the characteristics of digital close-range photogrammetry. Compared with traditional methods, the new algorithm improves image matching in three ways. First, shape factors, fuzzy mathematics and gray-scale projection are introduced into the design of a composite matching measure. Second, the topological connection relations of matching points in a Delaunay triangulated network, together with the epipolar line, are used to decide the matching order and narrow the search scope for the conjugate point of each matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least-squares image matching to achieve subpixel matching under the epipolar-line constraint. The new algorithm was applied to actual stereo images of a building taken with a digital close-range photogrammetric system. The experimental results show that the algorithm achieves higher matching speed and accuracy than a pyramid image matching algorithm based on gray-scale correlation.
NASA Astrophysics Data System (ADS)
Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.
2006-03-01
Early detection of structural damage to the optic nerve head (ONH) is critical in the diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability, and even the Heidelberg Retina Tomograph (HRT) has not been found sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours, which computes accumulated disparities in the disc and cup regions from stereo fundus image pairs, has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming a known camera geometry. High correlation between computer-generated and manually segmented cup-to-disc ratios has already been demonstrated in a longitudinal study involving 159 stereo fundus image pairs. However, the clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from the corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates the subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective, quantitative method for detecting ONH structural damage for early detection of glaucoma.
Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.
2014-01-01
This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956
NASA Astrophysics Data System (ADS)
Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir
2014-06-01
This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next-generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely controlled three-articulated-robotic-arm system. While haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities, including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control and constrained motion, to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. The project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator ever approaching the device.
Real-time image processing of TOF range images using a reconfigurable processor system
NASA Astrophysics Data System (ADS)
Hussmann, S.; Knoll, F.; Edeler, T.
2011-07-01
In recent years, Time-of-Flight (TOF) sensors have had a significant impact on research fields in machine vision. Compared to stereo vision systems and laser range scanners, they combine the advantages of active sensors, providing accurate distance measurements, with those of camera-based systems, recording a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. In this paper a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
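For illustration, a minimal sketch of the 4-phase-shift range computation, assuming samples A0..A3 taken at 0°, 90°, 180° and 270° (sign conventions vary between sensors); the arctangent here is the step the paper moves into hardware:

```python
import numpy as np

def tof_range(A0, A1, A2, A3, f_mod=20e6, c=3e8):
    """Range from the four phase-shifted correlation samples of a
    continuous-wave TOF sensor."""
    phase = np.arctan2(A3 - A1, A0 - A2)    # wrapped to [-pi, pi]
    phase = np.mod(phase, 2 * np.pi)        # map into [0, 2*pi)
    return c * phase / (4 * np.pi * f_mod)  # unambiguous range in meters
```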
WASS: an open-source stereo processing pipeline for sea waves 3D reconstruction
NASA Astrophysics Data System (ADS)
Bergamasco, Filippo; Benetazzo, Alvise; Torsello, Andrea; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro
2017-04-01
Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community. In fact, recent advances in both computer vision algorithms and CPU processing power now allow the study of spatio-temporal wave fields with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master in practice, so that implementing a 3D reconstruction pipeline is generally considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation makes the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the steps from stereo images to a 3D point cloud is a valuable addition for future research in this area. We present WASS, a completely open-source stereo processing pipeline for sea-wave 3D reconstruction, available at http://www.dais.unive.it/wass/. Our tool completely automates the recovery of dense point clouds from stereo images by providing three main functionalities. First, WASS can automatically recover the extrinsic parameters of the stereo rig (up to scale), so that no delicate calibration has to be performed in the field. Second, WASS implements a fast dense stereo reconstruction procedure, so that an accurate 3D point cloud can be computed from each stereo pair; we rely on the well-consolidated OpenCV library for both image stereo rectification and disparity map recovery. Lastly, a set of 2D and 3D filtering techniques, applied both to the disparity map and to the produced point cloud, removes the vast majority of erroneous points that naturally arise when analyzing the optically complex nature of the water surface (examples are sun glares, large white-capped areas, fog and water aerosol, etc.). Developed to be as fast as possible, WASS can process roughly four 5-MPixel stereo frames per minute (on a consumer i7 CPU) to produce a sequence of outlier-free point clouds with more than 3 million points each. Finally, it comes with an easy-to-use interface and is designed to scale across multiple parallel CPUs.
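Since WASS relies on OpenCV for rectification and disparity recovery, that stage might look roughly like the sketch below (file names and matcher parameters are illustrative, not WASS defaults, and the pair is assumed already rectified):

```python
import cv2

left = cv2.imread("rectified_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("rectified_right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; P1/P2 are the usual smoothness penalties.
matcher = cv2.StereoSGBM_create(
    minDisparity=0, numDisparities=128, blockSize=7,
    P1=8 * 7 * 7, P2=32 * 7 * 7,
    uniquenessRatio=10, speckleWindowSize=100, speckleRange=2)

disp = matcher.compute(left, right).astype("float32") / 16.0  # OpenCV stores disparity x16
```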
A trunk ranging system based on binocular stereo vision
NASA Astrophysics Data System (ADS)
Zhao, Xixuan; Kan, Jiangming
2017-07-01
Trunk ranging is an essential function for autonomous forestry robots. Traditional trunk ranging systems based on personal computers are not convenient in practical applications. This paper examines the implementation of a trunk ranging system based on binocular stereo vision theory on TI's DaVinci DM37x platform. The system is smaller and more reliable than one implemented on a personal computer. It calculates three-dimensional information from the images acquired by the binocular cameras, producing the targeting and ranging results. The experimental results show that the measurement error is small and that the system design is feasible for autonomous forestry robots.
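The ranging step ultimately reduces to the classic binocular relation Z = f·B/d; a worked example with illustrative numbers:

```python
def trunk_range(disparity_px, focal_px, baseline_m):
    """Depth from disparity for a rectified binocular rig: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# e.g. a 90 px disparity with an 800 px focal length and a 0.12 m baseline
print(trunk_range(90, 800.0, 0.12))  # -> ~1.07 m to the trunk
```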
Real-time stereo generation for surgical vision during minimal invasive robotic surgery
NASA Astrophysics Data System (ADS)
Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod
2016-03-01
This paper proposes a framework for 3D surgical vision in minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection, interlacing the two images, gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at full HD resolution.
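One plausible reading of the interlacing step is a simple row-interleave of the two processed views for a line-polarized display; a sketch under that assumption:

```python
import numpy as np

def interlace(left, right):
    """Row-interleave two equally sized views: even rows from the left
    view, odd rows from the right, as a line-polarized screen expects."""
    out = left.copy()
    out[1::2] = right[1::2]
    return out
```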
NASA Astrophysics Data System (ADS)
Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.
2012-02-01
Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton automatically. The algorithm is based on analysis of the human shape: we decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. The algorithm has been applied in a context where the user stands in front of a stereo camera pair. The process is completed once the user assumes a predefined initial posture, from which the main joints are identified and the human model constructed. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
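A minimal sketch of the curvature computation on a B-spline parameterization of a closed contour, assuming SciPy's spline routines (the paper's decomposition rules themselves are not reproduced here):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def contour_curvature(x, y, smooth=5.0):
    """Signed curvature along a periodic B-spline fit of a body contour;
    curvature extrema are natural candidates for limb/torso cut points."""
    tck, u = splprep([x, y], s=smooth, per=True)
    dx, dy = splev(u, tck, der=1)
    ddx, ddy = splev(u, tck, der=2)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
```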
Enhanced operator perception through 3D vision and haptic feedback
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren
2012-06-01
Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems, comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems, comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies at Fort Leonard Wood, Missouri, 3D vision and haptics have been shown to provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.
Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review.
Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F
2016-03-05
In the factory of the future, most operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, and to complete the information provided by other sensors to improve their positioning accuracy. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in industry and now for robot guidance. Choosing which type of vision system to use depends strongly on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes the accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can use it as background information for their future work.
Design issues for stereo vision systems used on tele-operated robotic platforms
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, Jim; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-02-01
The use of tele-operated Unmanned Ground Vehicles (UGVs) for military purposes has grown significantly in recent years, with operations in both Iraq and Afghanistan. In both theaters the safety of the Soldier or technician performing the mission is improved by the large standoff distances the UGV affords, but the full performance capability of the robotic system is not utilized, because the standard two-dimensional video system provides insufficient depth perception: the operator slows the mission to ensure the safety of the UGV, given the uncertainty of the scene as perceived in 2D. To address this, Polaris Sensor Technologies has developed, in a series of efforts funded by the Leonard Wood Institute at Ft. Leonard Wood, MO, a prototype Stereo Vision Upgrade (SVU) Kit for the Foster-Miller TALON IV robot, which provides the operator with improved depth perception and situational awareness, allowing shorter mission times and higher success rates. Because the SVU Kit replaces multiple 2D cameras with stereo camera systems, and because the needs of the camera systems vary with each phase of a mission, a number of tradeoffs and design choices must be made in developing such a system for robotic tele-operation. Additionally, human factors design criteria drive optical parameters of the camera systems, which must be matched to the display system being used. The problem space for such an upgrade kit is defined, and the choices made in the development of this particular SVU Kit are discussed.
Molina-Torres, María-José; Crespo, María-del-Mar Seguí; Francés, Ana Tauste; Lacarra, Blanca Lumbreras; Ronda-Pérez, Elena
2016-01-01
Objective: To compare the diagnostic accuracy of two vision screeners against a visual examination performed by an optometrist (gold standard), and to evaluate the concordance between the two screeners and between each screener and the gold standard. Methods: This was a cross-sectional study that included computer workers who attended a routine yearly health examination. The study included administrative office workers (n=91) aged 50.2±7.9 years (mean±standard deviation), 69.2% of whom were women and 68.1% of whom used video display terminals (VDT) for >4 h/day. The routine visual examination included monocular and binocular distance visual acuity (VA), distance and near lateral phoria (LP), stereo acuity (SA), and color vision. Tests were repeated with the Optec 6500 (by Stereo Optical) and Visiotest (by Essilor) screeners. Sensitivity, specificity, positive predictive values (PPV), negative predictive values (NPV), and false positive and negative rates were calculated. The kappa coefficient (κ) was used to measure the concordance between the screeners and with the gold standard. Results: The sensitivity and specificity for monocular VA were over 80% for both vision screeners; PPV was below 25%. Sensitivity and specificity were lower for SA (55%-70%), PPV was 50%, and NPV was 75% for both screeners. For distance LP, sensitivity and PPV were <10% in both cases. The screeners differed in their values for near LP: the Optec 6500 had higher sensitivity (43.5%), PPV (37.0%), and NPV (79.7%), whereas the Visiotest had higher specificity (83.8%). For color vision, the Visiotest showed low sensitivity, low PPV, and high specificity. The Visiotest obtained false positive rates that were lower than or similar to those of the Optec 6500, and both screeners obtained false negative rates below 50%. Both screeners showed poor concordance (κ<0.40). Conclusions: A high NPV would qualify both screeners as acceptable alternatives for visual health surveillance when used as a screening tool; patients with positive test results should be referred to a specialist. PMID:27488039
Demonstration of a viable quantitative theory for interplanetary type II radio bursts
NASA Astrophysics Data System (ADS)
Schmidt, J. M.; Cairns, Iver H.
2016-03-01
Between 29 November and 1 December 2013, the two widely separated spacecraft STEREO A and B observed a long-lasting, intermittent type II radio burst over the extended frequency range ≈ 4 MHz to 30 kHz, including an intensification when the shock wave of the associated coronal mass ejection (CME) reached STEREO A. We demonstrate for the first time our ability to quantitatively and accurately simulate the fundamental (F) and harmonic (H) emission of type II bursts from the higher corona (near 11 solar radii) to 1 AU. Our modeling combines data-driven three-dimensional magnetohydrodynamic simulations of the CME and plasma background, carried out with the BATS-R-US code, with an analytic quantitative kinetic model for both F and H radio emission, including the electron reflection at the shock, the growth of Langmuir waves and radio waves, and the radiation's propagation to an arbitrary observer. The intensities and frequencies of the observed radio emissions vary hugely, by factors of ≈ 10^6 and ≈ 10^3, respectively; the theoretical predictions are impressively accurate, being typically in error by less than a factor of 10 and 20%, respectively, for both STEREO A and B. We also obtain accurate predictions for the timing and characteristics of the shock and local radio onsets at STEREO A, the lack of such onsets at STEREO B, and the z-component of the magnetic field at STEREO A ahead of the shock and in the sheath. These multiple agreements provide very strong support for the theory, for the efficacy of the BATS-R-US code, and for the vision of using type II bursts and associated data-theory iterations to predict whether a CME will impact Earth's magnetosphere and drive space weather events.
NASA Astrophysics Data System (ADS)
López, Leticia; Gastélum, Alfonso; Chan, Yuk Hin; Delmas, Patrice; Escorcia, Lilia; Márquez, Jorge
2014-11-01
Our goal is to obtain three-dimensional measurements of craniofacial morphology in a healthy population, using standard landmarks established by a physical-anthropology specialist and picked from computer reconstructions of each subject's face. To do this, we designed a multi-stereo vision system that will be used to create a database of human face surfaces from a healthy population, for eventual applications in medicine, forensic science and anthropology. The acquisition process obtains depth-map information from three points of view, each depth map coming from a calibrated pair of cameras. The depth maps are used to build a complete frontal triangular-surface representation of the subject's face. The triangular surface is used to locate the landmarks, and the measurements are analyzed with a MATLAB script. The classification of the subjects was done with the aid of a specialist anthropologist, who defined specific subject indices according to the lengths, areas, ratios, etc., of the different structures and the relationships among facial features. We studied a healthy population, and the indices from this population will be used to obtain representative averages that later help with the study and classification of possible pathologies.
A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.
Vivacqua, Rafael; Vassallo, Raquel; Martins, Felipe
2017-10-16
Autonomous driving on public roads requires precise localization, within a range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in urban environments, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been used successfully for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive, and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost sensor architecture and data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach combines a short-range visual lane-marking detector with a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map, which also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
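A minimal sketch of the dead-reckoning component, assuming a planar unicycle model driven by wheel speed and yaw rate (illustrative, not the authors' implementation):

```python
import numpy as np

def dead_reckon(pose, v, yaw_rate, dt):
    """Propagate a planar pose (x, y, heading) one time step; chaining
    this between lane-marking detections gives the short-horizon motion
    estimate that the map localization then corrects."""
    x, y, th = pose
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + yaw_rate * dt])
```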
Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery
NASA Astrophysics Data System (ADS)
Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng
2012-10-01
In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application combines MonoSLAM (Single Camera Simultaneous Localization and Mapping) with computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy are enhanced by the use of fiduciary markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter, while the fiduciary markers provide a reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the AR marker is obtained by a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for the implementation. The system provides a live view of the patient overlaid with solid models of tumors or anatomical structures, as well as the part of the tool hidden inside the skull.
Automatic Hazard Detection for Landers
NASA Technical Reports Server (NTRS)
Huertas, Andres; Cheng, Yang; Matthies, Larry H.
2008-01-01
Unmanned planetary landers to date have landed 'blind', that is, without the benefit of onboard landing hazard detection and avoidance systems. This constrains landing site selection to very benign terrain, which in turn constrains the scientific agenda of missions. State-of-the-art Entry, Descent, and Landing (EDL) technology can land a spacecraft on Mars somewhere within a 20-100 km landing ellipse. Landing ellipses are very likely to contain hazards such as craters, discontinuities, steep slopes, and large rocks that can cause mission-fatal damage. We briefly review sensor options for landing hazard detection and identify a perception approach based on stereo vision and shadow analysis that addresses the broadest set of missions. Our approach fuses stereo vision and monocular shadow-based rock detection to maximize spacecraft safety. We summarize performance models for slope estimation and rock detection within this approach and validate those models experimentally. Instantiating our model of rock detection reliability for Mars predicts that this approach can reduce the probability of a failed landing by at least a factor of 4 in any given terrain. We also describe a rock detector/mapper applied to large high-resolution images from the Mars Reconnaissance Orbiter (MRO) for landing site characterization and selection for Mars missions.
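A minimal sketch of the slope-estimation idea, fitting a least-squares plane to a stereo-derived terrain patch (illustrative only, not the flight software):

```python
import numpy as np

def patch_slope_deg(points):
    """Fit z = a*x + b*y + c to an (N, 3) patch of terrain points and
    return the slope angle of the fitted plane in degrees."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, _), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return np.degrees(np.arctan(np.hypot(a, b)))
```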
A high resolution and high speed 3D imaging system and its application on ATR
NASA Astrophysics Data System (ADS)
Lu, Thomas T.; Chao, Tien-Hsin
2006-04-01
The paper presents an advanced 3D imaging system based on a combination of stereo vision and light projection methods. A single digital camera takes only one shot of the object to reconstruct its 3D model. Stereo vision is achieved by employing a prism-and-mirror setup that splits the views and combines them side by side in the camera. The advantages of this setup are its simple system architecture, easy synchronization, fast 3D imaging speed and high accuracy. The 3D imaging algorithms and potential applications are discussed. For ATR applications, it is critically important to extract maximum information about the potential targets and to separate the targets from the background and clutter noise. The added dimension of a 3D model provides the additional features of surface profile and range information of the target. It is capable of removing false shadows cast by camouflage and revealing the 3D profile of the object. It also provides arbitrary viewing angles and distances for training the filter bank for invariant ATR. The system architecture can be scaled to accommodate large objects and to perform area 3D modeling onboard a UAV.
Use of 3D vision for fine robot motion
NASA Technical Reports Server (NTRS)
Lokshin, Anatole; Litwin, Todd
1989-01-01
Integrating 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation the control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, a pair of 2-D video images must be mapped directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to a common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties: first, the vision system has to be calibrated over the total work space; second, the absolute frame, which is usually quite arbitrary, has to be the same, to a high degree of precision, for both the robot and the vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and the problems encountered.
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a moving object. We built a linear-array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Compared with a conventional binocular system, the linear-array CCD binocular system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion, with accurate results. This paper introduces the composition and principles of the linear-array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction stages. The system consists of two linear-array cameras placed in a special arrangement and a horizontally moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and 3-D reconstructed. The linear-array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects.
Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?
Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes
2016-01-01
Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, while additional rotation provided robust depth information based on the direction of the displacements; thus, the bees' flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D object recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006
3D structure and kinematics characteristics of EUV wave front
NASA Astrophysics Data System (ADS)
Podladchikova, T.; Veronig, A.; Dissauer, K.
2017-12-01
We present 3D reconstructions of EUV wave fronts using multi-point observations from the STEREO-A and STEREO-B spacecraft. EUV waves are large-scale disturbances in the solar corona that are initiated by coronal mass ejections and are thought to be large-amplitude fast-mode MHD waves or shocks. The aim of our study is to investigate the dynamic evolution of the 3D structure and kinematics of EUV wave fronts. We study the events of December 7, 2007 and February 13, 2009 using data from the STEREO/EUVI-A and EUVI-B instruments in the 195 Å filter. The proposed approach is based on a complementary combination of the epipolar geometry of stereo vision and perturbation profiles. We propose two different solutions to the problem of matching the wave crest in images from the two spacecraft. One solution is suitable for the early and maximum stages of event development, when STEREO-A and STEREO-B see different facets of the wave and the wave crest is clearly outlined. The second is also applicable at the later stage of event development, when the wave front becomes diffuse and only faintly visible; it automatically identifies segments of the diffuse front in pairs of STEREO-A and STEREO-B images, solving the problem of object identification and matching. We find that the EUV wave observed on December 7, 2007 starts at a height of 30-50 Mm, sharply increases to a height of 100-120 Mm about 10 min later, and decreases to 10-20 Mm in the decay phase. Including the 3D evolution of the EUV wave front allowed us to correct the wave kinematics for projection and changing-height effects. The velocity of the wave crest (V=215-266 km/s) is larger than that of the trailing part of the wave pulse (V=103-163 km/s). For the February 9, 2009 event, the upward movement of the wave crest shows an increase from 20 to 100 Mm over a period of 30 min, with the velocity of the wave crest reaching 208-211 km/s.
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which is local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near-completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
Researches on hazard avoidance cameras calibration of Lunar Rover
NASA Astrophysics Data System (ADS)
Li, Chunyan; Wang, Li; Lu, Xin; Chen, Jihua; Fan, Shenghong
2017-11-01
China's Lunar Lander and Rover will be launched in 2013 to accomplish the mission goals of lunar soft landing and patrol exploration. The Lunar Rover has a forward-facing stereo camera pair (Hazcams) for hazard avoidance, and Hazcam calibration is essential for stereo vision. The Hazcam optics are f-theta fish-eye lenses with a 120°×120° horizontal/vertical field of view (FOV) and a 170° diagonal FOV. They introduce significant distortion, and the acquired images are quite warped, so conventional camera calibration algorithms no longer work well. This paper investigates a photogrammetric calibration method for the geometric model of this type of fish-eye optics. In this method, the Hazcam model is represented by collinearity equations with interior orientation and exterior orientation parameters [1] [2]. For high-precision applications, the calibration model is formulated with the radially symmetric distortion and the decentering distortion, as well as parameters modeling affinity and shear, based on the fisheye deformation model [3] [4]. The proposed method has been applied to the stereo camera calibration system for the Lunar Rover.
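A minimal sketch of the ideal f-theta projection with a simple polynomial radial term (coefficients hypothetical; the paper's full model also includes decentering distortion, affinity and shear):

```python
import numpy as np

def ftheta_project(X, Y, Z, f, cx, cy, k=(0.0, 0.0)):
    """Project a camera-frame point with an f-theta fish-eye lens:
    image radius r = f * theta, theta being the angle off the optical
    axis, plus an odd-order radial distortion polynomial."""
    theta = np.arctan2(np.hypot(X, Y), Z)
    r = f * theta * (1 + k[0] * theta**2 + k[1] * theta**4)
    phi = np.arctan2(Y, X)
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```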
First stereo video dataset with ground truth for remote car pose estimation using satellite markers
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Pierini, Marco
2018-04-01
Leading causes of PTW (Powered Two-Wheeler) crashes and near misses in urban areas are failures or delays in predicting the changing trajectories of other vehicles. Regrettably, misperception by both car drivers and motorcycle riders results in fatal or serious consequences for riders. Intelligent vehicles could provide early warning of possible collisions, helping to avoid crashes. There is evidence that stereo cameras can be used to estimate the heading angle of other vehicles, which is key to anticipating their imminent location, but there is limited heading ground-truth data available in the public domain. Consequently, we employed a marker-based technique to create ground truth for car pose and built a dataset for computer vision benchmarking purposes. This dataset of a moving vehicle, collected from a statically mounted stereo camera, is a simplification of a complex and dynamic reality that serves as a test bed for car pose estimation algorithms. The dataset contains the accurate pose of the moving obstacle, and realistic imagery including texture-less and non-Lambertian surfaces (e.g. reflectance and transparency).
A parallel stereo reconstruction algorithm with applications in entomology (APSRA)
NASA Astrophysics Data System (ADS)
Bhasin, Rajesh; Jang, Won Jun; Hart, John C.
2012-03-01
We propose a fast parallel algorithm for the reconstruction of three-dimensional point clouds of insects from binocular stereo image pairs, using a hierarchical approach for disparity estimation. Entomologists study various features of insects to classify them, build their distribution maps, and discover genetic links between specimens, among various other essential tasks. This information is important to the pesticide and pharmaceutical industries, among others. Given the large collections entomologists analyze, it becomes difficult to physically handle the entire collection and share the data with researchers across the world. With the method presented in our work, entomologists can create an image database for their collections and use the 3D models for studying the shape and structure of the insects, thus making the collections easier to maintain and share. Initial feedback shows that the reconstructed 3D models preserve the shape and size of the specimen. We further optimize our results to incorporate multiview stereo, which produces better overall structure of the insects. Our main contribution is applying stereoscopic vision techniques to entomology to solve the problems faced by entomologists.
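The hierarchical disparity idea can be illustrated with a minimal coarse-to-fine sketch: disparity is first estimated on a downsampled image pair, then upsampled and refined within a narrow band at each finer level, which is what keeps the search cheap and parallelizable. Window size, number of levels and band radius are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def refine(L, R, init_d, radius=1, win=2):
    """Search +/-radius disparities around an initial estimate (SAD cost)."""
    h, w = L.shape
    out = init_d.copy()
    for y in range(win, h - win):
        for x in range(win, w - win):
            best = np.inf
            lo = max(0, init_d[y, x] - radius)
            hi = min(x - win, init_d[y, x] + radius)
            for d in range(lo, hi + 1):
                c = np.abs(L[y-win:y+win+1, x-win:x+win+1].astype(float)
                           - R[y-win:y+win+1, x-d-win:x-d+win+1]).sum()
                if c < best:
                    best, out[y, x] = c, d
    return out

def hierarchical_disparity(L, R, levels=3, max_d=64):
    """Coarse-to-fine disparity: a full search happens only at the coarsest level."""
    if levels == 1:
        return refine(L, R, np.zeros(L.shape, int), radius=max_d)
    coarse = hierarchical_disparity(L[::2, ::2], R[::2, ::2], levels - 1, max_d // 2)
    init = 2 * np.kron(coarse, np.ones((2, 2), int))[:L.shape[0], :L.shape[1]]
    return refine(L, R, init)   # only a narrow band searched at full resolution
```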
NASA Astrophysics Data System (ADS)
Chen, Liang-Chia; Lin, Grier C. I.
1997-12-01
A vision-driven automatic digitization process for free-form surface reconstruction in the reverse engineering of physical models has been developed, using a coordinate measurement machine (CMM) equipped with a touch-triggered probe and a CCD camera. The process integrates 3D stereo detection, data filtering, Delaunay triangulation, and adaptive surface digitization into a single surface reconstruction process. Using this approach, surface reconstruction can be implemented automatically and accurately. Least-squares B-spline surface models with controlled digitization accuracy can be generated for further application in product design and manufacturing processes. One industrial application indicates that this approach is feasible, and that the processing time required in the reverse engineering process can be reduced by more than 85%.
The non-parametric Parzen's window in stereo vision matching.
Pajares, G; de la Cruz, J
2002-01-01
This paper presents an approach to the local stereo vision matching problem using edge segments as features with four attributes. From these attributes we compute a matching probability between pairs of features of the stereo images. A correspondence is declared true when this probability is maximum. We introduce a non-parametric strategy based on Parzen's window (1962) to estimate a probability density function (PDF), which is used to obtain the matching probability. This is the main finding of the paper. A comparative analysis of other recent matching methods is included to show that this finding can be justified theoretically. A generalization of the proposed method is made in order to give guidelines about its use with the similarity constraint and also in different environments where other features and attributes are more suitable.
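As a rough illustration of the approach, the sketch below estimates a Parzen-window PDF over attribute-difference vectors and converts it into a matching probability via Bayes' rule. The four-dimensional difference vectors, the Gaussian kernel and the window width h are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def parzen_pdf(samples, x, h=0.25):
    """Parzen-window estimate of p(x) from training vectors (Gaussian kernel)."""
    samples = np.asarray(samples, float)   # shape (n, 4): one row per feature pair
    x = np.asarray(x, float)
    d = samples.shape[1]
    u = (x - samples) / h
    k = np.exp(-0.5 * (u * u).sum(axis=1)) / ((2 * np.pi) ** (d / 2) * h ** d)
    return k.mean()

def match_probability(x, true_diffs, false_diffs, prior=0.5):
    """Probability that an attribute-difference vector x is a true match."""
    pt = parzen_pdf(true_diffs, x) * prior
    pf = parzen_pdf(false_diffs, x) * (1 - prior)
    return pt / (pt + pf + 1e-12)
```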
Dynamic Shape Capture of Free-Swimming Aquatic Life using Multi-view Stereo
NASA Astrophysics Data System (ADS)
Daily, David
2017-11-01
The reconstruction and tracking of swimming fish has in the past been restricted to flumes, small volumes, or sparse point tracking in large tanks. The purpose of this research is to use an array of cameras to automatically track 50-100 points on the surface of a fish using the multi-view stereo computer vision technique. The method is non-invasive, thus allowing the fish to swim freely in a large volume and to perform more advanced maneuvers such as rolling, darting, stopping, and reversing, which have not been studied. The techniques for obtaining and processing the 3D kinematics and maneuvers of tuna, sharks, stingrays, and other species will be presented and compared. This work is a collaboration between the National Aquarium and the Naval Undersea Warfare Center.
A phase-based stereo vision system-on-a-chip.
Díaz, Javier; Ros, Eduardo; Sabatini, Silvio P; Solari, Fabio; Mota, Sonia
2007-02-01
A simple and fast technique for depth estimation based on phase measurement has been adopted for the implementation of a real-time stereo system with sub-pixel resolution on an FPGA device. The technique avoids the attendant problem of phase wrapping. The designed system takes full advantage of the inherent processing parallelism and segmentation capabilities of FPGA devices to achieve a computation speed of 65 megapixels/s, which can be arranged with a customized frame-grabber module to process 211 frames/s at a size of 640×480 pixels. The processing speed achieved is higher than conventional camera frame rates, thus allowing the system to extract multiple estimations and be used as a platform to evaluate integration schemes of a population of neurons without increasing hardware resource demands.
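A minimal software sketch of phase-based disparity on one scanline, assuming a single Gabor scale: disparity is recovered from the wrapped phase difference of the left and right filter responses, which is what gives sub-pixel resolution without explicit phase unwrapping. The filter frequency and width are illustrative; the FPGA pipeline itself is not modeled.

```python
import numpy as np

def gabor_phase(row, omega=0.4, sigma=6.0):
    """Complex Gabor response along one scanline; its phase encodes local position."""
    n = int(3 * sigma)
    t = np.arange(-n, n + 1)
    g = np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * omega * t)
    return np.convolve(row.astype(float), g, mode='same')

def phase_disparity(left_row, right_row, omega=0.4):
    """Sub-pixel disparity estimate d = dphi / omega per pixel."""
    rl, rr = gabor_phase(left_row, omega), gabor_phase(right_row, omega)
    dphi = np.angle(rl * np.conj(rr))   # phase difference, wrapped to (-pi, pi]
    return dphi / omega                 # valid while |d| < pi/omega (~7.9 px here)
```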
Innovative Inspection Techniques
1993-01-01
beam and holding the binoculars at the same time. Night-vision glasses with magnification were mentioned but no inspectors we met had direct...angles for an actual lightbulb, the mean spherical candlepower is used as a measure of light output. The MSCP is measured using an integrating...needs special glasses to separate the alternating images, one image for the right and one for the left eye. StereoGraphics Corporation has developed a
University NanoSat Program: AggieSat3
2009-06-01
commercially available product for stereo machine vision developed by Point Grey Research. The current binocular BumbleBee2® system incorporates two...and Fellow of the American Society of Mechanical Engineers (ASME) in 1997. She was awarded the 2007 J. Leland "Lee" Atwood Award from the ASEE...AggieSat2 satellite programs. Additional experience gained in the area of drawing standards, machining capabilities, solid modeling, safety
Vision-based vehicle detection and tracking algorithm design
NASA Astrophysics Data System (ADS)
Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi
2009-12-01
Vision-based detection of vehicles in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. Practical vehicle detection in a passenger car requires accurate and robust sensing performance. A multi-vehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filtering, feature detection, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes a tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained from the detection information. The proposed vehicle detection system was implemented on a passenger car, and its performance was verified experimentally.
Change in vision, visual disability, and health after cataract surgery.
Helbostad, Jorunn L; Oedegaard, Maria; Lamb, Sarah E; Delbaere, Kim; Lord, Stephen R; Sletvold, Olav
2013-04-01
Cataract surgery improves vision and visual functioning; the effect on general health is not established. We investigated whether vision, visual functioning, and general health follow the same trajectory of change in the year after cataract surgery, and whether changes in vision explain changes in visual disability and general health. One hundred forty-eight persons, with a mean (SD) age of 78.9 (5.0) years (70% bilateral surgery), were assessed before and 6 weeks and 12 months after surgery. Visual disability and general health were assessed by the CatQuest-9SF and the Short Form-36. Corrected binocular visual acuity, visual field, stereo acuity, and contrast vision improved (P < 0.001) from before to 6 weeks after surgery, with further improvements in visual acuity evident up to 12 months (P = 0.034). Cataract surgery had an effect on visual disability 1 year later (P < 0.001). Physical and mental health improved after surgery (P < 0.01) but had returned to presurgery levels after 12 months. Vision changes did not explain visual disability and general health 6 weeks after surgery. Vision improved and visual disability decreased in the year after surgery, whereas changes in general health and visual functioning were short-term effects. The lack of association between changes in vision and self-reported disability and general health suggests that the degree of vision change and self-reported health do not have a linear relationship.
Estimation of 3D reconstruction errors in a stereo-vision system
NASA Astrophysics Data System (ADS)
Belhaoua, A.; Kohler, S.; Hirsch, E.
2009-06-01
The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
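The final propagation step can be illustrated with a first-order sketch: for a rectified stereo pair with Z = fB/d, a disparity uncertainty maps into a depth uncertainty that grows quadratically with depth. The focal length, baseline and disparity noise below are illustrative assumptions, not values from the paper.

```python
def depth_with_sigma(d, f=1200.0, B=0.12, sigma_d=0.3):
    """Depth Z = f*B/d (m) and its 1-sigma error |dZ/dd| * sigma_d (first order)."""
    Z = f * B / d
    sigma_Z = f * B / d**2 * sigma_d
    return Z, sigma_Z

for d in (80.0, 40.0, 10.0):   # smaller disparity = farther away = noisier depth
    print(d, depth_with_sigma(d))
```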
Curveslam: Utilizing Higher Level Structure In Stereo Vision-Based Navigation
2012-01-01
consider their application to SLAM. The work of [31] [32] develops a spline-based SLAM framework, but this is only for application to LIDAR-based SLAM...Existing approaches to visual Simultaneous Localization and Mapping (SLAM) typically utilize points as visual feature primitives to represent landmarks...regions of interest. Further, previous SLAM techniques that propose the use of higher level structures often place constraints on the environment, such as
Operational Based Vision Assessment Research: Depth Perception
2014-11-01
...tests are tests of stereopsis, such as the AFVT and AO Vectograph. Others evaluate depth perception with stereo as a contributor to performance, such...
Enabling Autonomous Navigation for Affordable Scooters.
Liu, Kaikai; Mulky, Rajathswaroop
2018-06-05
Despite the technical success of existing assistive technologies, for example electric wheelchairs and scooters, they are still far from effective enough in helping those in need navigate to their destinations in a hassle-free manner. In this paper, we propose to improve the safety and autonomy of navigation by designing a cutting-edge autonomous scooter, thus allowing people with mobility challenges to ambulate independently and safely in possibly unfamiliar surroundings. We focus on indoor navigation scenarios for the autonomous scooter where the current location, maps, and nearby obstacles are unknown. To achieve semi-LiDAR functionality, we leverage gyro-based pose data to compensate for the laser motion in real time and create synthetic mappings of simple environments with regular shapes and deep hallways. Laser range finders are suitable for long ranges with limited resolution; stereo vision, on the other hand, provides 3D structural data of nearby complex objects. To achieve simultaneously fine-grained resolution and long-range coverage in the mapping of cluttered and complex environments, we dynamically fuse the measurements from the stereo vision camera system, the synthetic laser scanner, and the LiDAR. We propose solutions to self-correct errors in data fusion and create a hybrid map to assist the scooter in achieving collision-free navigation in an indoor environment.
Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).
Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong
2016-02-06
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision applications under different circumstances.
Interactive stereo games to improve vision in children with amblyopia using dichoptic stimulation
NASA Astrophysics Data System (ADS)
Herbison, Nicola; Ash, Isabel M.; MacKeith, Daisy; Vivian, Anthony; Purdy, Jonathan H.; Fakis, Apostolos; Cobb, Sue V.; Hepburn, Trish; Eastgate, Richard M.; Gregson, Richard M.; Foss, Alexander J. E.
2015-03-01
Amblyopia is a common condition affecting 2% of all children; traditional treatment consists of either wearing a patch or penalisation. We have developed a treatment using stereo technology, not to provide a 3D image but to allow dichoptic stimulation. This involves presenting an image with the same background to both eyes, but with features of interest removed from the image presented to the normal eye, with the aim of preferentially stimulating visual development in the amblyopic, or lazy, eye. Our system, called I-BiT, can use either a game or a video (DVD) source as input. Pilot studies show that this treatment is effective with short treatment times, and it has proceeded to a randomised controlled clinical trial. The early indications are that the treatment has a high degree of acceptability and correspondingly good compliance.
A long baseline global stereo matching based upon short baseline estimation
NASA Astrophysics Data System (ADS)
Li, Jing; Zhao, Hong; Li, Zigang; Gu, Feifei; Zhao, Zixin; Ma, Yueyang; Fang, Meiqi
2018-05-01
In global stereo vision, balancing the matching efficiency and computing accuracy seems to be impossible because they contradict each other. In the case of a long baseline, this contradiction becomes more prominent. In order to solve this difficult problem, this paper proposes a novel idea to improve both the efficiency and accuracy in global stereo matching for a long baseline. In this way, the reference images located between the long baseline image pairs are firstly chosen to form the new image pairs with short baselines. The relationship between the disparities of pixels in the image pairs with different baselines is revealed by considering the quantized error so that the disparity search range under the long baseline can be reduced by guidance of the short baseline to gain matching efficiency. Then, the novel idea is integrated into the graph cuts (GCs) to form a multi-step GC algorithm based on the short baseline estimation, by which the disparity map under the long baseline can be calculated iteratively on the basis of the previous matching. Furthermore, the image information from the pixels that are non-occluded under the short baseline but are occluded for the long baseline can be employed to improve the matching accuracy. Although the time complexity of the proposed method depends on the locations of the chosen reference images, it is usually much lower for a long baseline stereo matching than when using the traditional GC algorithm. Finally, the validity of the proposed method is examined by experiments based on benchmark datasets. The results show that the proposed method is superior to the traditional GC method in terms of efficiency and accuracy, and thus it is suitable for long baseline stereo matching.
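The core of the search-range reduction can be sketched in a few lines: because disparity scales linearly with baseline, a short-baseline disparity quantized to ±0.5 px confines the long-baseline disparity to a narrow band, which is then all the GC optimization has to label. The numbers below are illustrative, not from the paper.

```python
def long_baseline_band(d_short, B_short, B_long, quant=0.5):
    """Long-baseline disparity band implied by a quantized short-baseline value.

    Disparity is proportional to baseline, so d_long ~= d_short * B_long/B_short,
    with the +/-quant quantization error scaled by the same ratio.
    """
    ratio = B_long / B_short
    lo = (d_short - quant) * ratio
    hi = (d_short + quant) * ratio
    return lo, hi   # search only [lo, hi] instead of the full [0, d_max]

print(long_baseline_band(d_short=12, B_short=1.0, B_long=4.0))  # (46.0, 50.0)
```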
Video-CRM: understanding customer behaviors in stores
NASA Astrophysics Data System (ADS)
Haritaoglu, Ismail; Flickner, Myron; Beymer, David
2013-03-01
This paper describes two real-time computer vision systems created 10 years ago that detect and track people in stores to obtain insights into customer behavior while shopping. The first system uses a single color camera to identify shopping groups in the checkout line. Shopping groups are identified by analyzing inter-body distances coupled with the cashier's activities to detect checkout transaction start and end times. The second system uses multiple overhead narrow-baseline stereo cameras to detect and track people and their body posture and parts to understand customer interactions with products, such as "customer picking a product from a shelf". In pilot studies both systems demonstrated real-time performance and sufficient accuracy to enable a more detailed understanding of customer behavior and to extract actionable real-time retail analytics.
Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments.
Ramon Soria, Pablo; Arrue, Begoña C; Ollero, Anibal
2017-01-07
The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.
The Information Available to a Moving Observer on Shape with Unknown, Isotropic BRDFs.
Chandraker, Manmohan
2016-07-01
Psychophysical studies show motion cues inform about shape even with unknown reflectance. Recent works in computer vision have considered shape recovery for an object of unknown BRDF using light source or object motions. This paper proposes a theory that addresses the remaining problem of determining shape from the (small or differential) motion of the camera, for unknown isotropic BRDFs. Our theory derives a differential stereo relation that relates camera motion to surface depth, which generalizes traditional Lambertian assumptions. Under orthographic projection, we show differential stereo may not determine shape for general BRDFs, but suffices to yield an invariant for several restricted (still unknown) BRDFs exhibited by common materials. For the perspective case, we show that differential stereo yields the surface depth for unknown isotropic BRDF and unknown directional lighting, while additional constraints are obtained with restrictions on the BRDF or lighting. The limits imposed by our theory are intrinsic to the shape recovery problem and independent of choice of reconstruction method. We also illustrate trends shared by theories on shape from differential motion of light source, object or camera, to relate the hardness of surface reconstruction to the complexity of imaging setup.
Robotic Lunar Rover Technologies and SEI Supporting Technologies at Sandia National Laboratories
NASA Technical Reports Server (NTRS)
Klarer, Paul R.
1992-01-01
Existing robotic rover technologies at Sandia National Laboratories (SNL) can be applied toward the realization of a robotic lunar rover mission in the near term. Recent activities at the SNL-RVR have demonstrated the utility of existing rover technologies for performing remote field geology tasks similar to those envisioned on a robotic lunar rover mission. Specific technologies demonstrated include low-data-rate teleoperation, multivehicle control, remote site and sample inspection, standard bandwidth stereo vision, and autonomous path following based on both internal dead reckoning and an external position location update system. These activities serve to support the use of robotic rovers for an early return to the lunar surface by demonstrating capabilities that are attainable with off-the-shelf technology and existing control techniques. The breadth of technical activities at SNL provides many supporting technology areas for robotic rover development. These range from core competency areas and microsensor fabrication facilities, to actual space qualification of flight components that are designed and fabricated in-house.
Review On Applications Of Neural Network To Computer Vision
NASA Astrophysics Data System (ADS)
Li, Wei; Nasrabadi, Nasser M.
1989-03-01
Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models, and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control, and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking, and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use features of images, such as edges and profiles, as the input data form. Other systems use raw data as input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of the methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of human vision models and neural network models are analyzed.
NASA Technical Reports Server (NTRS)
Allen, Carlton; Jakes, Petr; Jaumann, Ralf; Marshall, John; Moses, Stewart; Ryder, Graham; Saunders, Stephen; Singer, Robert
1996-01-01
The field geology/process group examined the basic operations of a terrestrial field geologist and the manner in which these operations could be transferred to a planetary lander. Four basic requirements for robotic field geology were determined: geologic content, surface vision, mobility, and manipulation. Geologic content requires a combination of orbital and descent imaging. Surface vision requirements include range, resolution, stereo, and multispectral imaging. The minimum mobility for useful field geology depends on the scale of orbital imagery. Manipulation requirements include exposing unweathered surfaces, screening samples, and bringing samples into contact with analytical instruments. To support these requirements, several advanced capabilities for future development are recommended, including near-infrared reflectance spectroscopy, hyperspectral imaging, multispectral microscopy, artificial intelligence in support of imaging, X-ray diffraction, X-ray fluorescence, and rock chipping.
Integration of prior knowledge into dense image matching for video surveillance
NASA Astrophysics Data System (ADS)
Menze, M.; Heipke, C.
2014-08-01
Three-dimensional information from dense image matching is a valuable input for a broad range of vision applications. While reliable approaches exist for dedicated stereo setups, they do not easily generalize to more challenging camera configurations. In the context of video surveillance, the typically large spatial extent of the region of interest and repetitive structures in the scene render the application of dense image matching a challenging task. In this paper we present an approach that derives strong prior knowledge from a planar approximation of the scene. This information is integrated into a graph-cut based image matching framework that treats the assignment of optimal disparity values as a labelling task. Introducing the planar prior heavily reduces ambiguities together with the search space and increases computational efficiency. The results provide a proof of concept of the proposed approach. It allows the reconstruction of dense point clouds in more general surveillance camera setups with wider stereo baselines.
Three-dimensional surface imaging system for assessing human obesity
NASA Astrophysics Data System (ADS)
Xu, Bugao; Yu, Wurong; Yao, Ming; Pepper, M. Reese; Freeland-Graves, Jeanne H.
2009-10-01
The increasing prevalence of obesity suggests a need to develop a convenient, reliable, and economical tool for assessment of this condition. Three-dimensional (3-D) body surface imaging has emerged as an exciting technology for the estimation of body composition. We present a new 3-D body imaging system, which is designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology is used to satisfy the requirement for a simple hardware setup and fast image acquisition. The portability of the system is created via a two-stand configuration, and the accuracy of body volume measurements is improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3-D body imaging. Body measurement functions dedicated to body composition assessment also are developed. The overall performance of the system is evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.
A 3D surface imaging system for assessing human obesity
NASA Astrophysics Data System (ADS)
Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.
2009-08-01
The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.
The design and implementation of postprocessing for depth map on real-time extraction system.
Tang, Zhiwei; Li, Bin; Li, Huosheng; Xu, Zheng
2014-01-01
Depth estimation has become a key technology for stereo vision communications. A real-time depth map can be extracted with hardware, but hardware such as an FPGA cannot implement algorithms as complicated as software can, because of restrictions in the hardware structure. Consequently, some incorrect stereo matches will inevitably occur during hardware depth estimation. To solve this problem, a postprocessing function is designed in this paper. After a matching-cost uniqueness test, both left-right and right-left consistency checks are implemented; the cavities in the depth maps are then filled with valid depth values on the basis of the right-left consistency check. The experimental results show that depth-map extraction and the postprocessing function can be implemented in real time in the same system; moreover, the quality of the resulting depth maps is satisfactory.
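A minimal software sketch of the consistency check and hole filling described above, assuming integer disparity maps: pixels whose left and right disparities disagree are invalidated, and the resulting holes are filled along each scanline from the right. The FPGA data path is not reproduced.

```python
import numpy as np

def lr_check_and_fill(disp_L, disp_R, tol=1):
    """Left-right consistency check followed by scanline hole filling."""
    h, w = disp_L.shape
    out = disp_L.astype(float)                  # float copy so holes can be NaN
    for y in range(h):
        for x in range(w):
            xr = x - disp_L[y, x]               # corresponding right-image column
            if xr < 0 or abs(disp_R[y, xr] - disp_L[y, x]) > tol:
                out[y, x] = np.nan              # occluded or mismatched pixel
    for y in range(h):                          # fill holes from the right
        for x in range(w - 2, -1, -1):
            if np.isnan(out[y, x]):
                out[y, x] = out[y, x + 1]       # rightmost holes may remain NaN
    return out
```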
Color-encoded distance for interactive focus positioning in laser microsurgery
NASA Astrophysics Data System (ADS)
Schoob, Andreas; Kundrat, Dennis; Lekon, Stefan; Kahrs, Lüder A.; Ortmaier, Tobias
2016-08-01
This paper presents a real-time method for interactive focus positioning in laser microsurgery. Registration of stereo vision and a surgical laser is performed in order to combine surgical scene and laser workspace information. In particular, stereo image data is processed to reconstruct the observed tissue surface in three dimensions and to compute and highlight its intersection with the laser focal range. Regarding the surgical live view, three augmented reality concepts are presented that provide visual feedback during manual focus positioning. A user study is performed and results are discussed with respect to accuracy and task completion time. Especially when using a color-encoded distance superimposed on the live view, target positioning with sub-millimeter accuracy can be achieved in a few seconds. Finally, transfer to an intraoperative scenario with endoscopic human in vivo and cadaver images is discussed, demonstrating the applicability of the image overlay in laser microsurgery.
NASA Astrophysics Data System (ADS)
Qi, Li; Wang, Shun; Zhang, Yixin; Sun, Yingying; Zhang, Xuping
2015-11-01
The quality inspection process is usually carried out after the first processing of the raw materials, such as cutting and milling, because the parts of the material to be used cannot be identified until they have been trimmed. If the quality of the material is assessed before the laser process, the energy and effort wasted on defective materials can be saved. We propose a new production scheme that achieves quantitative quality inspection prior to primitive laser cutting by means of three-dimensional (3-D) vision measurement. First, the 3-D model of the object is reconstructed by the stereo cameras, from which the spatial cutting path is derived. Second, collaborating with another rear camera, the 3-D cutting path is reprojected onto both the frontal and rear views of the object, thus generating the regions of interest (ROIs) for surface defect analysis. Accurate visually guided laser processing and reprojection-based ROI segmentation are enabled by a global-optimization-based trinocular calibration method. A prototype system was built and tested on the processing of raw duck feathers for high-quality badminton shuttle manufacture. Incorporating a two-dimensional wavelet-decomposition-based defect analysis algorithm, both the geometrical and appearance features of the raw feathers are quantified before they are cut into small patches, resulting in fully automatic feather cutting and sorting.
High resolution hybrid optical and acoustic sea floor maps (Invited)
NASA Astrophysics Data System (ADS)
Roman, C.; Inglis, G.
2013-12-01
This abstract presents a method for creating hybrid optical and acoustic sea floor reconstructions at centimeter-scale grid resolutions with robotic vehicles. Multibeam sonar and stereo vision are two common sensing modalities with complementary strengths that are well suited for data fusion. We have recently developed an automated two-stage pipeline to create such maps. The steps can be broken down as navigation refinement and map construction. During navigation refinement, a graph-based optimization algorithm is used to align 3D point clouds created with both the multibeam sonar and stereo cameras. The process combats the typical growth in navigation error that has a detrimental effect on map fidelity and typically introduces artifacts at small grid sizes. During this process we are able to automatically register local point clouds created by each sensor to themselves and to each other where they overlap in a survey pattern. The process also estimates the sensor offsets, such as heading, pitch and roll, that describe how each sensor is mounted to the vehicle. The end result of the navigation step is a refined vehicle trajectory that ensures the point clouds from each sensor are consistently aligned, along with the individual sensor offsets. In the mapping step, grid cells in the map are selectively populated by choosing data points from each sensor in an automated manner. The selection process is designed to pick points that preserve the best characteristics of each sensor and honor specific map-quality criteria to reduce outliers and ghosting. In general, the algorithm selects dense 3D stereo points in areas of high texture and point density. In areas where the stereo vision is poor, such as in a scene with low contrast or texture, multibeam sonar points are inserted into the map. This process is automated and results in a hybrid map populated with data from both sensors. Additional cross-modality checks are made to reject outliers in a robust manner. The final hybrid map retains the strengths of both sensors and shows improvement over the single-modality maps and a naively assembled multi-modal map in which all the data points are included and averaged. Results will be presented from marine geological and archaeological applications using a 1350 kHz BlueView multibeam sonar and 1.3 megapixel digital still cameras.
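The per-cell selection logic of the mapping step can be sketched as follows: cells with enough stereo returns (a proxy for good texture) take the stereo depth, and the rest fall back to sonar. Cell size, the threshold and the use of a median are illustrative assumptions; the navigation refinement and cross-modality outlier checks are omitted.

```python
import numpy as np
from collections import defaultdict

def hybrid_grid(stereo_pts, sonar_pts, cell=0.05, min_stereo=20):
    """Fuse (N, 3) stereo and sonar seafloor points into one gridded map."""
    def bucket(points):
        b = defaultdict(list)
        for x, y, z in points:
            b[(int(x // cell), int(y // cell))].append(z)
        return b
    s, m = bucket(stereo_pts), bucket(sonar_pts)
    grid = {}
    for key in set(s) | set(m):
        if len(s.get(key, ())) >= min_stereo:   # textured area: trust dense stereo
            grid[key] = np.median(s[key])
        elif key in m:                          # low texture: fall back to sonar
            grid[key] = np.median(m[key])
    return grid                                 # {(i, j): seafloor depth}
```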
Viewing The Entire Sun With STEREO And SDO
NASA Astrophysics Data System (ADS)
Thompson, William T.; Gurman, J. B.; Kucera, T. A.; Howard, R. A.; Vourlidas, A.; Wuelser, J.; Pesnell, D.
2011-05-01
On 6 February 2011, the two Solar Terrestrial Relations Observatory (STEREO) spacecraft were at 180 degrees separation. This allowed the first-ever simultaneous view of the entire Sun. Combining the STEREO data with corresponding images from the Solar Dynamics Observatory (SDO) allows this full-Sun view to continue for the next eight years. We show how the data from the three viewpoints are combined into a single heliographic map. Processing of the STEREO beacon telemetry allows these full-Sun views to be created in near-real-time, allowing tracking of solar activity even on the far side of the Sun. This is a valuable space-weather tool, not only for anticipating activity before it rotates onto the Earth-view, but also for deep space missions in other parts of the solar system. Scientific use of the data includes the ability to continuously track the entire lifecycle of active regions, filaments, coronal holes, and other solar features. There is also a significant public outreach component to this activity. The STEREO Science Center produces products from the three viewpoints used in iPhone/iPad and Android applications, as well as time sequences for spherical projection systems used in museums, such as Science-on-a-Sphere and Magic Planet.
NASA Astrophysics Data System (ADS)
Barnes, Robert; Gupta, Sanjeev; Gunn, Matt; Paar, Gerhard; Balme, Matt; Huber, Ben; Bauer, Arnold; Furya, Komyo; Caballo-Perucha, Maria del Pilar; Traxler, Chris; Hesina, Gerd; Ortner, Thomas; Banham, Steven; Harris, Jennifer; Muller, Jan-Peter; Tao, Yu
2017-04-01
A key focus of planetary rover missions is to use panoramic camera systems to image outcrops along rover traverses, in order to characterise their geology in the search for ancient life. These data can be processed to create 3D point clouds of rock outcrops for quantitative analysis. The Mars Utah Rover Field Investigation (MURFI 2016) is a Mars rover field-analogue mission run by the UK Space Agency (UKSA) in collaboration with the Canadian Space Agency (CSA). It took place between 22nd October and 13th November 2016 and consisted of a science team based in Harwell, UK, and a field team including an instrumented rover platform at the field site near Hanksville (Utah, USA). The Aberystwyth University PanCam Emulator 3 (AUPE3) camera system was used to collect stereo panoramas of the terrain the rover encountered during the field trials. Stereo imagery processed in PRoViP is rendered as Ordered Point Clouds (OPCs) in PRo3D, enabling the user to zoom, rotate and translate the 3D outcrop model. Interpretations can be digitised directly onto the 3D surface, and simple measurements can be taken of the dimensions of the outcrop and sedimentary features, including grain size. Dip and strike of bedding planes, stratigraphic and sedimentological boundaries, and fractures are calculated within PRo3D from mapped bedding contacts and fracture traces. Rover-derived imagery was merged with UAV and orbital datasets to build semi-regional, multi-resolution 3D models of the area of operations for immersive analysis and contextual understanding. In simulation, AUPE3 was mounted onto the rover mast, collecting 16 stereo panoramas over 9 'sols'. Five out-of-simulation datasets were collected in the Hanksville-Burpee Quarry. Stereo panoramas were processed using an automated pipeline and data transfer through an FTP server. PRo3D has been used for visualisation and analysis of this stereo data. Features of interest in the area could be annotated, and their distances to the rover position measured, to aid prioritisation of science targeting. Where grains or rocks are present and visible, their dimensions can be measured. Interpretation of the sedimentological features of the outcrops has also been carried out. OPCs created from stereo imagery collected in the Hanksville-Burpee Quarry showed a general coarsening-up succession, with a red, well-layered mudstone overlain by stacked, irregularly thick layers of medium-coarse to pebbly sandstone. Cross beds/laminations and lenses of finer sandstone were common. These features provide valuable information on the depositional environment. Development of PRo3D in preparation for application to the ExoMars 2020 and NASA Mars 2020 missions will be centred on validation of the data and measurements. Collection of in-situ field data by a human geologist allows direct comparison of viewer-derived measurements with those taken in the field. The research leading to these results has received funding from the UK Space Agency Aurora programme and the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 PRoViDE, ESA PRODEX Contracts 4000105568 "ExoMars PanCam 3D Vision" and 4000116566 "Mars 2020 Mastcam-Z 3D Vision".
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, the choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.
Human machine interface by using stereo-based depth extraction
NASA Astrophysics Data System (ADS)
Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan
2014-03-01
The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibility of a convincing 3D experience at home, through three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight (ToF) camera, can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.
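A simplified stand-in for the edge-weighted upscaling described above (not the authors' error-energy formulation, and without the temporal term): the low-resolution ToF depth is upsampled, then iteratively diffused everywhere except across strong video edges, so that depth discontinuities line up with texture edges.

```python
import numpy as np

def edge_aware_upscale(depth_lo, video, iters=50, beta=10.0, rate=0.2):
    """Upsample depth to the video resolution, smoothing within edges only."""
    s = video.shape[0] // depth_lo.shape[0]
    D = np.kron(depth_lo.astype(float), np.ones((s, s)))  # nearest-neighbour upsample
    gy, gx = np.gradient(video.astype(float))
    w = np.exp(-beta * np.hypot(gx, gy))          # weight ~ 0 on strong video edges
    for _ in range(iters):
        for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            Dn = np.roll(D, shift, axis=(0, 1))   # border wrap-around ignored here
            wn = np.roll(w, shift, axis=(0, 1))
            D += rate * wn * (Dn - D)             # diffuse only where no edge
    return D
```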
PRoViScout: a planetary scouting rover demonstrator
NASA Astrophysics Data System (ADS)
Paar, Gerhard; Woods, Mark; Gimkiewicz, Christiane; Labrosse, Frédéric; Medina, Alberto; Tyler, Laurence; Barnes, David P.; Fritz, Gerald; Kapellos, Konstantinos
2012-01-01
Mobile systems exploring planetary surfaces in the future will require more autonomy than today's. The EU FP7-SPACE project PRoViScout (2010-2012) establishes the building blocks of such autonomous exploration systems in terms of robotic vision, through a decision-based combination of navigation and scientific target selection, and integrates them into a framework ready for, and exposed to, field demonstration. The PRoViScout on-board system consists of mission management components, such as an Executive, a Mars Mission On-Board Planner and Scheduler, a Science Assessment Module, and Navigation & Vision Processing modules. The platform hardware consists of the rover with its sensors and pointing devices. We report on the major building blocks and their functions and interfaces, with emphasis on the computer vision parts, such as image acquisition (using a novel zoomed 3D Time-of-Flight & RGB camera), mapping from 3D-ToF data, panoramic image and stereo reconstruction, hazard and slope maps, visual odometry, and the recognition of potentially scientifically interesting targets.
Virtual-stereo fringe reflection technique for specular free-form surface testing
NASA Astrophysics Data System (ADS)
Ma, Suodong; Li, Bo
2016-11-01
Owing to their excellent ability to improve the performance of optical systems, free-form optics have attracted extensive interest in many fields, e.g. the optical design of astronomical telescopes, laser beam expanders, and spectral imagers. However, compared with traditional simple optics, testing such optics is usually more complex and difficult, which has long been a major barrier to their manufacture and application. Fortunately, owing to the rapid development of electronic devices and computer vision technology, the fringe reflection technique (FRT), with the advantages of a simple system structure, high measurement accuracy and large dynamic range, is becoming a powerful tool for specular free-form surface testing. In order to obtain absolute surface shape distributions of test objects, two or more cameras are often required in the conventional FRT, which makes the system structure more complex and the measurement cost much higher; high-precision synchronization between the cameras is also a troublesome issue. To overcome these drawbacks, a virtual-stereo FRT for specular free-form surface testing is put forward in this paper. It is able to achieve absolute profiles with only a single biprism and one camera, while avoiding the problems of stereo FRT based on binocular or multi-ocular cameras. Preliminary experimental results demonstrate the feasibility of the proposed technique.
FPGA Implementation of Stereo Disparity with High Throughput for Mobility Applications
NASA Technical Reports Server (NTRS)
Villalpando, Carlos Y.; Morfopolous, Arin; Matthies, Larry; Goldberg, Steven
2011-01-01
High-speed stereo vision can allow unmanned robotic systems to navigate safely in unstructured terrain, but the computational cost can exceed the capacity of typical embedded CPUs. In this paper, we describe an end-to-end stereo computation co-processing system optimized for fast throughput that has been implemented on a single Virtex 4 LX160 FPGA. This system is capable of operating on images from a 1024 × 768 3CCD (true RGB) camera pair at 15 Hz. Data enters the FPGA directly from the cameras via Camera Link and is rectified, pre-filtered and converted into a disparity image entirely within the FPGA, incurring no CPU load. Once complete, a rectified image and the final disparity image are read out over the PCI bus, for a bandwidth cost of 68 MB/s. Within the FPGA there are four distinct modules: Camera Link capture, bilinear rectification, bilateral-subtraction pre-filtering, and Sum of Absolute Differences (SAD) disparity. Each module is described in brief along with the data flow and control logic for the system. The system was successfully fielded on Carnegie Mellon University's National Robotics Engineering Center (NREC) Crusher vehicle during extensive field trials in 2007 and 2008 and is being implemented for other surface mobility systems at JPL.
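A software sketch of two of the stages named above, with a box filter standing in for the bilateral filter: the pre-filter subtracts a local mean so that the SAD correlation is insensitive to brightness offsets between the two cameras. The window sizes are illustrative; the Camera Link capture, rectification and FPGA data path are not modeled.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def prefilter(img, win=9):
    """Background-subtraction pre-filter: image minus its local mean."""
    f = img.astype(np.float32)
    return f - uniform_filter(f, size=win)

def sad_cost(L, R, y, x, d, win=7):
    """SAD matching cost at pixel (y, x) and disparity d on prefiltered images."""
    r = win // 2
    a = L[y - r:y + r + 1, x - r:x + r + 1]
    b = R[y - r:y + r + 1, x - d - r:x - d + r + 1]
    return float(np.abs(a - b).sum())
```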
People Detection by a Mobile Robot Using Stereo Vision in Dynamic Indoor Environments
NASA Astrophysics Data System (ADS)
Méndez-Polanco, José Alberto; Muñoz-Meléndez, Angélica; Morales, Eduardo F.
People detection and tracking is a key issue for social robot design and effective human robot interaction. This paper addresses the problem of detecting people with a mobile robot using a stereo camera. People detection using mobile robots is a difficult task because in real world scenarios it is common to find: unpredictable motion of people, dynamic environments, and different degrees of human body occlusion. Additionally, we cannot expect people to cooperate with the robot to perform its task. In our people detection method, first, an object segmentation method that uses the distance information provided by a stereo camera is used to separate people from the background. The segmentation method proposed in this work takes into account human body proportions to segment people and provides a first estimation of people location. After segmentation, an adaptive contour people model based on people distance to the robot is used to calculate a probability of detecting people. Finally, people are detected merging the probabilities of the contour people model and by evaluating evidence over time by applying a Bayesian scheme. We present experiments on detection of standing and sitting people, as well as people in frontal and side view with a mobile robot in real world scenarios.
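The temporal evidence step can be illustrated with a minimal Bayesian update, assuming the per-frame contour-model outputs are independent probability estimates (the probability values below are made up for illustration):

```python
def fuse(belief, p_frame):
    """Fold one frame's detection probability into the running belief."""
    num = p_frame * belief
    return num / (num + (1 - p_frame) * (1 - belief))

belief = 0.5                           # uninformed prior: person present or not
for p in (0.6, 0.7, 0.55, 0.8):        # per-frame contour-model probabilities
    belief = fuse(belief, p)
    print(round(belief, 3))            # evidence accumulates across frames
```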
On-line bolt-loosening detection method of key components of running trains using binocular vision
NASA Astrophysics Data System (ADS)
Xie, Yanxia; Sun, Junhua
2017-11-01
Bolt loosening, a hidden fault, affects the running quality of trains and can even cause serious safety accidents. However, fault detection approaches based on two-dimensional images cannot detect bolt loosening because they lack depth information. We therefore propose a novel online bolt-loosening detection method using binocular vision. First, a target detection model based on a convolutional neural network (CNN) is used to locate the target regions. Then, stereo matching and three-dimensional reconstruction are performed to detect bolt-loosening faults. The experimental results show that the method can characterize the looseness of multiple bolts simultaneously. The measurement repeatability and precision are less than 0.03 mm and 0.09 mm, respectively, and the relative error is within 1.09%.
Parallax scanning methods for stereoscopic three-dimensional imaging
NASA Astrophysics Data System (ADS)
Mayhew, Christopher A.; Mayhew, Craig M.
2012-03-01
Under certain circumstances, conventional stereoscopic imagery is subject to misinterpretation. Stereo perception created from two static, horizontally separated views can create a "cut-out" 2D appearance for objects at various planes of depth: the subject volume looks three-dimensional, but the objects themselves appear flat. This is especially true if the images are captured using small disparities. One potential explanation for this effect is that, although three-dimensional perception comes primarily from binocular vision, a human's gaze (the direction and orientation of a person's eyes with respect to their environment) and head motion also contribute additional sub-process information. The absence of this information may be the reason that certain stereoscopic imagery appears "odd" and unrealistic. Another contributing factor may be the absence of vertical disparity information in a traditional stereoscopic display. Recently, Parallax Scanning technologies have been introduced, which (1) provide a scanning methodology, (2) incorporate vertical disparity, and (3) produce stereo images with substantially smaller disparities than the human interocular distance [1]. To test whether these three features would improve the realism and reduce the cardboard cut-out effect of stereo images, we have applied Parallax Scanning (PS) technologies to commercial stereoscopic digital cinema productions and have tested the results with a panel of stereo experts. These informal experiments show that the addition of PS information into the left and right image capture improves the overall perception of three-dimensionality for most viewers. Parallax scanning significantly increases the set of tools available for 3D storytelling while at the same time presenting imagery that is easy and pleasant to view.
MARVEL: A System for Recognizing World Locations with Stereo Vision
1990-05-01
...in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-85-K-0124 and under Army...priori knowledge of the locations of the obstacles in the environment as well as the start and goal locations. In this thesis, however, I am concerned with
Curve and Polygon Evolution Techniques for Image Processing
2002-01-01
iterative image registration technique with an application to stereo vision. IJCAI, pages 674–679, 1981. [93] R. Malladi, J.A. Sethian, and B.C...Notation: a digital image to be processed is a 2-dimensional (2-D) function denoted by I, I : Ω → ℝ, where Ω ⊂ ℝ² is the domain of the function. Processing a...function I₀(x, y), which depends on two spatial variables x ∈ ℝ and y ∈ ℝ, via a partial differential equation (PDE) takes the form I_t = A(I, I_x, ...
Game design in virtual reality systems for stroke rehabilitation.
Goude, Daniel; Björk, Staffan; Rydmark, Martin
2007-01-01
We propose a model for the structured design of games for post-stroke rehabilitation. The model is based on experiences with game development for a haptic and stereo vision immersive workbench intended for daily use in stroke patients' homes. A central component of this rehabilitation system is a library of games that are simultaneously entertaining for the patient and beneficial for rehabilitation [1], and where each game is designed for specific training tasks through the use of the model.
Vision Based Localization in Urban Environments
NASA Technical Reports Server (NTRS)
McHenry, Michael; Cheng, Yang; Matthies, Larry
2005-01-01
As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system consist of: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location and 3) the video streams produced by each camera. At each step during the traverse the system: captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we have derived the a priori map manually using non-orthorectified overhead imagery, although this process could be automated. The software system consists of two primary components. The first is the vision system, which uses binocular stereo ranging together with a set of heuristics to identify features likely to be part of building exteriors and to compute an estimate of the robot's motion since the previous step. The resulting visual features and the associated range measurements are then fed to the second primary software component, a particle-filter based localization system. This system uses the map and the most recent results from the vision system to update the estimate of the robot's location. This report summarizes the design of both the hardware and software and includes the results of applying the system to the global localization of a robot over an approximately half-kilometer traverse across JPL's Pasadena campus.
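The particle-filter component described above lends itself to a compact illustration. Below is a minimal predict/update/resample cycle for 2D localization, assuming a Gaussian likelihood for the stereo range measurements; the state layout, noise levels, and the map-lookup function are illustrative assumptions, not the JPL implementation:

    import numpy as np

    def particle_filter_step(particles, weights, motion, motion_noise,
                             z, expected_range):
        """One localization cycle. particles: (N, 3) [x, y, heading] hypotheses;
        motion: odometry since the last step; z: measured range to a building
        feature; expected_range: map-based range prediction for one particle."""
        N = len(particles)
        # Predict: apply odometry to every hypothesis, with additive noise.
        particles = particles + motion + np.random.randn(N, 3) * motion_noise
        # Update: reweight each hypothesis by the likelihood of the observed range.
        sigma = 0.5  # assumed stereo ranging noise (m)
        predicted = np.apply_along_axis(expected_range, 1, particles)
        weights = weights * np.exp(-0.5 * ((z - predicted) / sigma) ** 2)
        weights = weights / weights.sum()
        # Resample when the effective sample size collapses; several weight
        # clusters can coexist, which is how the filter represents multiple
        # possible robot locations at once.
        if 1.0 / np.sum(weights ** 2) < N / 2:
            idx = np.random.choice(N, size=N, p=weights)
            particles, weights = particles[idx], np.full(N, 1.0 / N)
        return particles, weights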
a Comparison Between Active and Passive Techniques for Underwater 3d Applications
NASA Astrophysics Data System (ADS)
Bianco, G.; Gallo, A.; Bruno, F.; Muzzupappa, M.
2011-09-01
In the field of 3D scanning, there is an increasing need for more accurate technologies to acquire 3D models of close range objects. Underwater exploration, for example, is very hard to perform due to the hostile conditions and the bad visibility of the environment. Some application fields, like underwater archaeology, require recovering three-dimensional data of objects that cannot be moved from their site or touched, in order to avoid possible damage. Photogrammetry is widely used for underwater 3D acquisition, because it requires just one or two digital still or video cameras to acquire a sequence of images taken from different viewpoints. Stereo systems composed of a pair of cameras are often employed on underwater robots (i.e. ROVs, Remotely Operated Vehicles) and used by scuba divers, in order to survey archaeological sites, reconstruct complex 3D structures in aquatic environments, estimate in situ the length of marine organisms, etc. Stereo 3D reconstruction is based on the triangulation of corresponding points in the two views. This requires finding common points in both images and matching them (the correspondence problem), determining a plane that contains the 3D point on the object. Another 3D technique, frequently used for in-air acquisition, solves this point-matching problem by projecting structured lighting patterns that codify the acquired scene; the corresponding points are identified by associating a binary code in both images. In this work we have tested and compared two whole-field 3D imaging techniques (active and passive) based on stereo vision, in an underwater environment. A 3D system has been designed, composed of a digital projector and two still cameras mounted in waterproof housings, so that it can perform the various acquisitions without changing the configuration of the optical devices. The tests were conducted in a water tank in different turbidity conditions, on objects with different surface properties. In order to simulate a typical seafloor, we used various concentrations of clay. The performances of the two techniques are described and discussed. In particular, the point clouds obtained are compared in terms of number of acquired 3D points and geometrical deviation.
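As a concrete reference for the triangulation step described above, here is a minimal sketch for a rectified stereo pair under a pinhole model; the focal length, baseline, and matched pixel coordinates are assumed known, and the numeric values are illustrative:

    import numpy as np

    def triangulate_rectified(xl, xr, y, f, B, cx, cy):
        """Recover a 3D point from one correspondence in a rectified stereo pair.
        xl, xr: matched column coordinates in left/right images (pixels);
        y: shared row coordinate (pixels); f: focal length (pixels);
        B: baseline (metres); cx, cy: principal point (pixels)."""
        d = xl - xr                      # disparity (pixels)
        if d <= 0:
            raise ValueError("point at or beyond infinity")
        Z = f * B / d                    # depth along the optical axis
        X = (xl - cx) * Z / f            # lateral offsets
        Y = (y - cy) * Z / f
        return np.array([X, Y, Z])

    # Example: f = 1200 px, 10 cm baseline, 8 px disparity -> Z = 15 m.
    print(triangulate_rectified(648.0, 640.0, 360.0, 1200.0, 0.10, 640.0, 360.0))

Note that in the underwater setting of the paper the rays are bent at the housing ports, so a practical system must additionally model that refraction rather than triangulating straight-line rays.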
Real-Time Visualization Tool Integrating STEREO, ACE, SOHO and the SDO
NASA Astrophysics Data System (ADS)
Schroeder, P. C.; Luhmann, J. G.; Marchant, W.
2011-12-01
The STEREO/IMPACT team has developed a new web-based visualization tool for near real-time data from the STEREO instruments, ACE and SOHO as well as relevant models of solar activity. This site integrates images, solar energetic particle, solar wind plasma and magnetic field measurements in an intuitive way using near real-time products from NOAA and other sources to give an overview of recent space weather events. This site enhances the browse tools already available at UC Berkeley, UCLA and Caltech which allow users to visualize similar data from the start of the STEREO mission. Our new near real-time tool utilizes publicly available real-time data products from a number of missions and instruments, including SOHO LASCO C2 images from the SOHO team's NASA site, SDO AIA images from the SDO team's NASA site, STEREO IMPACT SEP data plots and ACE EPAM data plots from the NOAA Space Weather Prediction Center and STEREO spacecraft positions from the STEREO Science Center.
Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System
García-Garrido, Miguel A.; Ocaña, Manuel; Llorca, David F.; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel
2012-01-01
This paper presents a complete traffic sign recognition system based on vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as detection method from the information extracted in contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance. PMID:22438704
Vision based flight procedure stereo display system
NASA Astrophysics Data System (ADS)
Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng
2008-03-01
A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area view can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left eye images and the other for right eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. By using this system in the pilots' preflight preparation procedure, the aircrew can get more vivid information about the flight destination approach area. This system can improve the aviator's confidence before carrying out the flight mission and, accordingly, improve flight safety. The system is also useful for validating visual flight procedure designs, and it assists flight procedure design.
Relating binocular and monocular vision in strabismic and anisometropic amblyopia.
Agrawal, Ritwick; Conner, Ian P; Odom, J V; Schwartz, Terry L; Mendola, Janine D
2006-06-01
To examine deficits in monocular and binocular vision in adults with amblyopia and to test the following 2 hypotheses: (1) Regardless of clinical subtype, the degree of impairment in binocular integration predicts the pattern of monocular acuity deficits. (2) Subjects who lack binocular integration exhibit the most severe interocular suppression. Seven subjects with anisometropia, 6 subjects with strabismus, and 7 control subjects were tested. Monocular tests included Snellen acuity, grating acuity, Vernier acuity, and contrast sensitivity. Binocular tests included Titmus stereo test, binocular motion integration, and dichoptic contrast masking. As expected, both groups showed deficits in monocular acuity, with subjects with strabismus showing greater deficits in Vernier acuity. Both amblyopic groups were then characterized according to the degree of residual stereoacuity and binocular motion integration ability, and 67% of subjects with strabismus compared with 29% of subjects with anisometropia were classified as having "nonbinocular" vision according to our criterion. For this nonbinocular group, Vernier acuity is most impaired. In addition, the nonbinocular group showed the most dichoptic contrast masking of the amblyopic eye and the least dichoptic contrast masking of the fellow eye. The degree of residual binocularity and interocular suppression predicts monocular acuity and may be a significant etiological mechanism of vision loss.
Orbit Determination and Navigation of the Solar Terrestrial Relations Observatory (STEREO)
NASA Technical Reports Server (NTRS)
Mesarch, Michael A.; Robertson, Mika; Ottenstein, Neil; Nicholson, Ann; Nicholson, Mark; Ward, Douglas T.; Cosgrove, Jennifer; German, Darla; Hendry, Stephen; Shaw, James
2007-01-01
This paper provides an overview of the required upgrades necessary for navigation of NASA's twin heliocentric science missions, Solar TErestrial RElations Observatory (STEREO) Ahead and Behind. The orbit determination of the STEREO spacecraft was provided by the NASA Goddard Space Flight Center's (GSFC) Flight Dynamics Facility (FDF) in support of the mission operations activities performed by the Johns Hopkins University Applied Physics Laboratory (APL). The changes to FDF's orbit determination software included modeling upgrades as well as modifications required to process the Deep Space Network X-band tracking data used for STEREO. Orbit results as well as comparisons to independently computed solutions are also included. The successful orbit determination support aided in maneuvering the STEREO spacecraft, launched on October 26, 2006 (00:52 Z), to target the lunar gravity assists required to place the spacecraft into their final heliocentric drift-away orbits where they are providing stereo imaging of the Sun.
Real-time registration of video with ultrasound using stereo disparity
NASA Astrophysics Data System (ADS)
Wang, Jihang; Horvath, Samantha; Stetten, George; Siegel, Mel; Galeotti, John
2012-02-01
Medical ultrasound typically deals with the interior of the patient, with the exterior left to the original medical imaging modality, direct human vision. For the human operator scanning the patient, the view of the external anatomy is essential for correctly locating the ultrasound probe on the body and making sense of the resulting ultrasound images in their proper anatomical context. The operator, after all, is not expected to perform the scan with his eyes shut. Over the past decade, our laboratory has developed a method of fusing these two information streams in the mind of the operator, the Sonic Flashlight, which uses a half silvered mirror and miniature display mounted on an ultrasound probe to produce a virtual image within the patient at its proper location. We are now interested in developing a similar data fusion approach within the ultrasound machine itself, by, in effect, giving vision to the transducer. Our embodiment of this concept consists of an ultrasound probe with two small video cameras mounted on it, with software capable of locating the surface of an ultrasound phantom using stereo disparity between the two video images. We report its first successful operation, demonstrating a 3D rendering of the phantom's surface with the ultrasound data superimposed at its correct relative location. Eventually, automated analysis of these registered data sets may permit the scanner and its associated computational apparatus to interpret the ultrasound data within its anatomical context, much as the human operator does today.
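A minimal sketch of the stereo-disparity step described above, using OpenCV's block matcher on a rectified pair from the two probe-mounted cameras and converting disparity to depth for surface rendering; the file names, calibration values, and matcher parameters are illustrative:

    import cv2
    import numpy as np

    # Rectified image pair from the two probe-mounted cameras (paths illustrative).
    left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matcher: numDisparities must be a multiple of 16.
    bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = bm.compute(left, right).astype(np.float32) / 16.0   # fixed-point -> px

    # Convert disparity to depth for the phantom surface (f in px, baseline in m).
    f, B = 800.0, 0.04
    valid = disp > 0
    depth = np.zeros_like(disp)
    depth[valid] = f * B / disp[valid]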
NASA Technical Reports Server (NTRS)
Blackmon, Theodore
1998-01-01
Virtual reality (VR) technology has played an integral role in Mars Pathfinder mission operations. Using an automated machine vision algorithm, the 3D topography of the Martian surface was rapidly recovered from the stereo images captured by the lander camera to produce photo-realistic 3D models. An advanced interface was developed for visualization and interaction with the virtual environment of the Pathfinder landing site for mission scientists at the Space Flight Operations Facility of the Jet Propulsion Laboratory. The VR aspect of the display allowed mission scientists to navigate on Mars while remaining here on Earth, thus improving their spatial awareness of the rock field that surrounds the lander. Measurements of positions, distances and angles could be easily extracted from the topographic models, providing valuable information for science analysis and mission planning. Moreover, the VR map of Mars has also been used to assist with the archiving and planning of activities for the Sojourner rover.
A Vision System For A Mars Rover
NASA Astrophysics Data System (ADS)
Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.
1987-01-01
A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.
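The plan-view step the abstract describes, slope and roughness per terrain region feeding a traversability map, can be illustrated with a toy grid computation; the cell size and thresholds are invented for illustration, not the JPL values:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def traversability(height_map, cell=0.5, max_slope_deg=20.0, max_rough=0.05):
        """Classify each cell of a plan-view height map as traversable.
        height_map: 2D array of terrain heights (m) on a regular grid;
        cell: grid spacing (m)."""
        gy, gx = np.gradient(height_map, cell)
        slope = np.degrees(np.arctan(np.hypot(gx, gy)))   # local slope per cell
        # Roughness: residual after removing the local mean over a 3x3 window.
        rough = np.abs(height_map - uniform_filter(height_map, size=3))
        return (slope < max_slope_deg) & (rough < max_rough)

A local path planner can then search this boolean map for safe routes, with the same height map registered against the stored terrain database to refine the rover's position estimate.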
The infection algorithm: an artificial epidemic approach for dense stereo correspondence.
Olague, Gustavo; Fernández, Francisco; Pérez, Cynthia B; Lutton, Evelyne
2006-01-01
We present a new bio-inspired approach applied to a problem of stereo image matching. This approach is based on an artificial epidemic process, which we call the infection algorithm. The problem at hand is a basic one in computer vision for 3D scene reconstruction; it has many complex aspects and is known to be extremely difficult. The aim is to match the contents of two images in order to obtain 3D information that allows the generation of simulated projections from a viewpoint different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.
Stereo pair design for cameras with a fovea
NASA Technical Reports Server (NTRS)
Chettri, Samir R.; Keefe, Michael; Zimmerman, John R.
1992-01-01
We describe the methodology for the design and selection of a stereo pair when the cameras have a greater concentration of sensing elements in the center of the image plane (fovea). Binocular vision is important for the purpose of depth estimation, which in turn is important in a variety of applications such as gaging and autonomous vehicle guidance. We assume that one camera has square pixels of size dv and the other has pixels of size rdv, where r is between 0 and 1. We then derive results for the average error, the maximum error, and the error distribution in the depth determination of a point. These results can be shown to be a general form of the results for the case when the cameras have equal sized pixels. We discuss the behavior of the depth estimation error as we vary r and the tradeoffs between the extra processing time and increased accuracy. Knowing these results makes it possible to study the case when we have a pair of cameras with a fovea.
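The error behavior under study follows from the rectified-stereo depth relation Z = fB/d, which propagates a disparity error e to a depth error of roughly ΔZ ≈ Z²e/(fB). A small numeric sketch of how the error grows as one camera's pixel size is scaled by r; this is a rough model under invented values, while the paper derives the exact averages and distributions:

    f, B = 1000.0, 0.12          # focal length (px), baseline (m)
    dv = 1.0                     # pixel size of camera 1 (pixel units)
    for r in (1.0, 0.5, 0.25):   # camera 2 pixel size is r * dv
        # Worst-case disparity quantization combines both cameras' pixel sizes.
        e = 0.5 * dv + 0.5 * r * dv
        for Z in (2.0, 5.0, 10.0):
            dZ = Z * Z * e / (f * B)
            print(f"r={r:.2f}  Z={Z:4.1f} m  depth error ~ {dZ * 100:.1f} cm")

The quadratic growth with Z and the linear improvement with smaller r make the accuracy-versus-processing-time tradeoff discussed above explicit.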
Applied algorithm in the liner inspection of solid rocket motors
NASA Astrophysics Data System (ADS)
Hoffmann, Luiz Felipe Simões; Bizarria, Francisco Carlos Parquet; Bizarria, José Walter Parquet
2018-03-01
In rocket motors, the bonding between the solid propellant and the thermal insulation is accomplished by a thin adhesive layer known as the liner. The liner application method involves a complex sequence of tasks, which includes, in its final stage, surface integrity inspection. Nowadays in Brazil, an expert carries out a thorough visual inspection to detect defects on the liner surface that may compromise bonding at the propellant interface. Therefore, this paper proposes an algorithm that uses the photometric stereo technique and the K-nearest neighbor (KNN) classifier to assist the expert in the surface inspection. Photometric stereo allows recovery of surface information from the test images, while the KNN method enables classification of image pixels into two classes: non-defect and defect. Tests performed on a computer vision based prototype validate the algorithm. The positive results suggest that the algorithm is feasible and, when implemented in a real scenario, will be able to help the expert in detecting defective areas on the liner surface.
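A minimal sketch of the two stages named above: classical photometric stereo (least-squares recovery of albedo and normals from three or more images under known lights) followed by a KNN pixel classifier. The light directions, per-pixel features, and labels are illustrative assumptions, not the paper's setup:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def photometric_stereo(images, lights):
        """images: (k, H, W) intensities; lights: (k, 3) unit light directions.
        Solves L g = I per pixel in the least-squares sense, g = albedo * normal."""
        k, H, W = images.shape
        I = images.reshape(k, -1)                         # (k, H*W)
        g, *_ = np.linalg.lstsq(lights, I, rcond=None)    # (3, H*W)
        albedo = np.linalg.norm(g, axis=0)
        normals = g / np.maximum(albedo, 1e-8)
        return albedo.reshape(H, W), normals.reshape(3, H, W)

    # KNN on simple per-pixel features (e.g., albedo and the normal's z-component),
    # trained from expert-labeled pixels: 0 = non-defect, 1 = defect.
    knn = KNeighborsClassifier(n_neighbors=5)
    # knn.fit(train_features, train_labels)   # (N, 2) features, (N,) labels
    # pred = knn.predict(test_features)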
Terrain Model Registration for Single Cycle Instrument Placement
NASA Technical Reports Server (NTRS)
Deans, Matthew; Kunz, Clay; Sargent, Randy; Pedersen, Liam
2003-01-01
This paper presents an efficient and robust method for registration of terrain models created using stereo vision on a planetary rover. Our approach projects two surface models into a virtual depth map, rendering the models as they would be seen from a single range sensor. Correspondence is established based on which points project to the same location in the virtual range sensor. A robust norm of the deviations in observed depth is used as the objective function, and the algorithm searches for the rigid transformation which minimizes the norm. An initial coarse search is done using rover pose information from odometry and orientation sensing. A fine search is done using Levenberg-Marquardt. Our method enables a planetary rover to keep track of designated science targets as it moves, and to hand off targets from one set of stereo cameras to another. These capabilities are essential for the rover to autonomously approach a science target and place an instrument in contact in a single command cycle.
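A compact sketch of the fine-search stage: render both models into a virtual depth map and minimize a robust norm of the depth deviations over a rigid transform. Here scipy's least_squares with a Huber loss stands in for the robust-norm/Levenberg-Marquardt machinery (scipy pairs a robust loss with a trust-region solver rather than pure LM), and the rendering step is stubbed out; all names are illustrative:

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def residuals(params, pts_model, depth_ref, project):
        """Depth deviations between the transformed model and the reference
        rendering. params = [rx, ry, rz, tx, ty, tz]; project() renders the
        points into the virtual range sensor and returns, for each point, its
        predicted depth and the reference depth at the same pixel."""
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        pts = pts_model @ R.T + params[3:]
        pred_depth, ref_depth = project(pts, depth_ref)
        return pred_depth - ref_depth

    # Coarse initial guess from odometry and orientation sensing, then a
    # robust refinement over the six rigid-transform parameters:
    # x0 = np.zeros(6)
    # fit = least_squares(residuals, x0, loss="huber",
    #                     args=(pts_model, depth_ref, project))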
a Performance Comparison of Feature Detectors for Planetary Rover Mapping and Localization
NASA Astrophysics Data System (ADS)
Wan, W.; Peng, M.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Teng, B.; Mao, X.; Zhao, Q.; Xin, X.; Jia, M.
2017-07-01
Feature detection and matching are key techniques in computer vision and robotics, and have been successfully applied in many fields. So far there has been no performance comparison of feature detectors and matching methods for planetary mapping and rover localization using rover stereo images. In this research, we present a comprehensive evaluation and comparison of six feature detectors, including Moravec, Förstner, Harris, FAST, SIFT and SURF, aiming for optimal implementation of feature-based matching in planetary surface environments. To facilitate quantitative analysis, a series of evaluation criteria, including distribution evenness of matched points, coverage of detected points, and feature matching accuracy, are developed in the research. In order to perform an exhaustive evaluation, stereo images simulated under different baselines, pitch angles, and intervals between adjacent rover locations are taken as the experimental data source. The comparison results show that SIFT offers the best overall performance; in particular, it is less sensitive to changes between images taken at adjacent locations.
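One of the evaluation criteria, distribution evenness, can be sketched for two of the six detectors using OpenCV (SIFT requires a reasonably recent OpenCV build; the grid-based evenness score below is an illustrative stand-in for the paper's exact criteria):

    import cv2
    import numpy as np

    img = cv2.imread("rover_left.png", cv2.IMREAD_GRAYSCALE)  # path illustrative

    detectors = {
        "SIFT": cv2.SIFT_create(),
        "FAST": cv2.FastFeatureDetector_create(),
    }

    def evenness(keypoints, shape, grid=8):
        """Lower is more even: std/mean of keypoint counts over grid cells."""
        counts = np.zeros((grid, grid))
        for kp in keypoints:
            x, y = kp.pt
            counts[int(y * grid / shape[0]), int(x * grid / shape[1])] += 1
        return counts.std() / max(counts.mean(), 1e-8)

    for name, det in detectors.items():
        kps = det.detect(img, None)
        print(f"{name}: {len(kps)} points, "
              f"evenness score {evenness(kps, img.shape):.2f}")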
VenSAR on EnVision: Taking earth observation radar to Venus
NASA Astrophysics Data System (ADS)
Ghail, Richard C.; Hall, David; Mason, Philippa J.; Herrick, Robert R.; Carter, Lynn M.; Williams, Ed
2018-02-01
Venus should be the most Earth-like of all our planetary neighbours: its size, bulk composition and distance from the Sun are very similar to those of Earth. How and why did it all go wrong for Venus? What lessons can be learned about the life story of terrestrial planets in general, in this era of discovery of Earth-like exoplanets? Were the radically different evolutionary paths of Earth and Venus driven solely by distance from the Sun, or do internal dynamics, geological activity, volcanic outgassing and weathering also play an important part? EnVision is a proposed ESA Medium class mission designed to take Earth Observation technology to Venus to measure its current rate of geological activity, determine its geological history, and the origin and maintenance of its hostile atmosphere, to understand how Venus and Earth could have evolved so differently. EnVision will carry three instruments: the Venus Emission Mapper (VEM); the Subsurface Radar Sounder (SRS); and VenSAR, a world-leading European phased array synthetic aperture radar that is the subject of this article. VenSAR will obtain images at a range of spatial resolutions, from 30 m regional coverage to 1 m images of selected areas, an improvement of two orders of magnitude on Magellan images; measure topography at 15 m vertical and 60 m spatial resolution from stereo and InSAR data; detect cm-scale change through differential InSAR, to characterise volcanic and tectonic activity and estimate rates of weathering and surface alteration; and characterise surface mechanical properties and weathering through multi-polar radar data. These data will be directly comparable with Earth Observation radar data, giving geoscientists unique access to an Earth-sized planet that has evolved on a radically different path to our own, offering new insights on the Earth-sized exoplanets across the galaxy.
STEREO/Waves Education and Public Outreach
NASA Astrophysics Data System (ADS)
MacDowall, R. J.; Bougeret, J.; Bale, S. D.; Goetz, K.; Kaiser, M. L.
2005-05-01
We present the education and public outreach (EPO) plan and activities of the STEREO/Waves (aka SWAVES) investigation. SWAVES measures radio emissions from the solar corona, interplanetary medium, and terrestrial magnetosphere, as well as in situ waves in the solar wind. In addition to the web site components that display stereo/multi-spacecraft data in graphical form and explain the science and instruments, we will focus on the following three areas of EPO: classroom demonstrations using models of the STEREO spacecraft with battery-powered radio receivers (and speakers) to illustrate spacecraft radio direction finding; teacher-developed and -tested classroom activities using SWAVES solar radio observations to motivate geometry and trigonometry; and sound-based delivery of characteristic radio and plasma wave events from the SWAVES web site, for accessibility and aesthetic reasons. Examples of each element will be demonstrated.
NASA Astrophysics Data System (ADS)
Åström, Anders; Forchheimer, Robert
2012-03-01
Based on the Near-Sensor Image Processing (NSIP) concept and recent results concerning optical flow and Time-to- Impact (TTI) computation with this architecture, we show how these results can be used and extended for robot vision applications. The first case involves estimation of the tilt of an approaching planar surface. The second case concerns the use of two NSIP cameras to estimate absolute distance and speed similar to a stereo-matching system but without the need to do image correlations. Going back to a one-camera system, the third case deals with the problem to estimate the shape of the approaching surface. It is shown that the previously developed TTI method not only gives a very compact solution with respect to hardware complexity, but also surprisingly high performance.
Image-based ranging and guidance for rotorcraft
NASA Technical Reports Server (NTRS)
Menon, P. K. A.
1991-01-01
This report documents the research carried out under NASA Cooperative Agreement No. NCC2-575 during the period Oct. 1988 - Dec. 1991. Primary emphasis of this effort was on the development of vision based navigation methods for rotorcraft nap-of-the-earth flight regime. A family of field-based ranging algorithms were developed during this research period. These ranging schemes are capable of handling both stereo and motion image sequences, and permits both translational and rotational camera motion. The algorithms require minimal computational effort and appear to be implementable in real time. A series of papers were presented on these ranging schemes, some of which are included in this report. A small part of the research effort was expended on synthesizing a rotorcraft guidance law that directly uses the vision-based ranging data. This work is discussed in the last section.
Stereo transparency and the disparity gradient limit
NASA Technical Reports Server (NTRS)
McKee, Suzanne P.; Verghese, Preeti
2002-01-01
Several studies (Vision Research 15 (1975) 583; Perception 9 (1980) 671) have shown that binocular fusion is limited by the disparity gradient (disparity/distance) separating image points, rather than by their absolute disparity values. Points separated by a gradient >1 appear diplopic. These results are sometimes interpreted as a constraint on human stereo matching, rather than a constraint on fusion. Here we have used psychophysical measurements on stereo transparency to show that human stereo matching is not constrained by a gradient of 1. We created transparent surfaces composed of many pairs of dots, in which each member of a pair was assigned a disparity equal and opposite to the disparity of the other member. For example, each pair could be composed of one dot with a crossed disparity of 6' and the other with uncrossed disparity of 6', vertically separated by a parametrically varied distance. When the vertical separation between the paired dots was small, the disparity gradient for each pair was very steep. Nevertheless, these opponent-disparity dot pairs produced a striking appearance of two transparent surfaces for disparity gradients ranging between 0.5 and 3. The apparent depth separating the two transparent planes was correctly matched to an equivalent disparity defined by two opaque surfaces. A test target presented between the two transparent planes was easily detected, indicating robust segregation of the disparities associated with the paired dots into two transparent surfaces with few mismatches in the target plane. Our simulations using the Tsai-Victor model show that the response profiles produced by scaled disparity-energy mechanisms can account for many of our results on the transparency generated by steep gradients.
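For reference, the disparity gradient for a dot pair is simply the disparity difference divided by the pair's separation, so the stimuli described above can be checked with a quick computation (angles in arcmin; the separations chosen are illustrative):

    def disparity_gradient(d1, d2, separation):
        """Disparity difference over separation, all in the same angular units."""
        return abs(d1 - d2) / separation

    # Opponent-disparity pairs: one dot at +6' (crossed), the other at -6'.
    for sep in (24.0, 12.0, 6.0, 4.0):     # vertical separations (arcmin)
        g = disparity_gradient(+6.0, -6.0, sep)
        print(f"separation {sep:4.1f}'  gradient {g:.1f}")

    # Gradients of 0.5-3.0 span the range over which the paper still reports
    # two clean transparent surfaces, well above the classical fusion limit of 1.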
X-ray and optical stereo-based 3D sensor fusion system for image-guided neurosurgery.
Kim, Duk Nyeon; Chae, You Seong; Kim, Min Young
2016-04-01
In neurosurgery, an image-guided operation is performed to confirm that the surgical instruments reach the exact lesion position. Among the multiple imaging modalities, an X-ray fluoroscope mounted on a C- or O-arm is widely used for monitoring the position of surgical instruments and the target position within the patient. However, frequent fluoroscopy can result in relatively high radiation doses, particularly for complex interventional procedures. The proposed system can reduce radiation exposure and provide accurate three-dimensional (3D) position information for the surgical instruments and the target. X-ray and optical stereo vision systems are proposed for the C- or O-arm. The two subsystems share the same optical axis and are calibrated simultaneously. This permits easy augmentation of the camera image and the X-ray image. Further, the 3D measurements of both systems can be defined in a common coordinate space. The proposed dual stereoscopic imaging system is designed and implemented for mounting on an O-arm. The calibration error of the 3D coordinates of the optical stereo and X-ray stereo is within 0.1 mm in terms of the mean and the standard deviation. Further, image augmentation of the camera image and the X-ray image using an artificial skull phantom is achieved. As the developed dual stereoscopic imaging system provides 3D coordinates of the point of interest in both optical images and fluoroscopic images, it can be used by surgeons to confirm the position of surgical instruments in 3D space with minimum radiation exposure and to verify whether the instruments reach the surgical target observed in the fluoroscopic images.
The design of visible system for improving the measurement accuracy of imaging points
NASA Astrophysics Data System (ADS)
Shan, Qiu-sha; Li, Gang; Zeng, Luan; Liu, Kai; Yan, Pei-pei; Duan, Jing; Jiang, Kai
2018-02-01
Binocular stereoscopic measurement technology has wide application in robot vision and 3D measurement. Measurement precision is a very important factor; in 3D coordinate measurement especially, high measurement accuracy places stringent requirements on the distortion of the optical system. In order to improve the measurement accuracy of imaging points and reduce the distortion at the imaging points, the optical system must satisfy the requirement of an extra-low distortion value of less than 0.1%. A transmission visible optical lens was therefore designed, with a telecentric beam path in image space, adopting the imaging model of binocular stereo vision and imaging a drone at finite distance. The optical system adopts a complex double-Gauss structure, with the pupil stop on the focal plane of the rear group, placing the system exit pupil at infinity and realizing a telecentric beam path in image space. The main optical parameters are as follows: the spectral range is the visible waveband, the effective focal length is f' = 30 mm, the relative aperture is 1/3, and the field of view is 21°. The final design results show that the RMS value of the spot diagram of the optical lens at the maximum field of view is 2.3 μm, which is less than one pixel (3.45 μm); the distortion value is less than 0.1%, so the system has the advantage of extra-low distortion and avoids subsequent image distortion correction; the modulation transfer function of the optical lens is 0.58 (@145 lp/mm), so the imaging quality of the system is close to the diffraction limit; and the system has a simple structure and satisfies the requirements of the optical indexes. Ultimately, measurement of a drone at finite distance was achieved based on the imaging model of binocular stereo vision.
NASA Technical Reports Server (NTRS)
Boyer, K. L.; Wuescher, D. M.; Sarkar, S.
1991-01-01
Dynamic edge warping (DEW), a technique for recovering reasonably accurate disparity maps from uncalibrated stereo image pairs, is presented. No precise knowledge of the epipolar camera geometry is assumed. The technique is embedded in a system including structural stereopsis on the front end and robust estimation from digital photogrammetry on the back end, for the purpose of self-calibrating stereo image pairs. Once the relative camera orientation is known, the epipolar geometry is computed and the system can use this information to refine its representation of the object space. Such a system will find application in the autonomous extraction of terrain maps from stereo aerial photographs, for which camera position and orientation are unknown a priori, and in online autonomous calibration maintenance for robotic vision applications, in which the cameras are subject to vibration and other physical disturbances after calibration. This work thus forms a component of an intelligent system that begins with a pair of images and, having only vague knowledge of the conditions under which they were acquired, produces an accurate, dense, relative depth map. The resulting disparity map can also be used directly in some high-level applications involving qualitative scene analysis, spatial reasoning, and perceptual organization of the object space. The system as a whole substitutes high-level information and constraints for precise geometric knowledge in driving and constraining the early correspondence process.
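The epipolar-geometry recovery step that follows the correspondence stage can be sketched with OpenCV's robust fundamental-matrix estimator; the DEW correspondences themselves are the paper's contribution and are stubbed with placeholder data here:

    import cv2
    import numpy as np

    # pts_l, pts_r: (N, 2) matched points from the uncalibrated pair (in a real
    # system these come from the DEW disparity map; placeholder data below).
    pts_l = (np.random.rand(100, 2) * 512).astype(np.float32)
    pts_r = pts_l + np.float32([8.0, 0.0])          # placeholder disparity shift

    # RANSAC-based estimate of the fundamental matrix and inlier mask.
    F, inlier_mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 1.0, 0.99)

    # Epipolar constraint: for every inlier correspondence, x_r^T F x_l ~ 0,
    # which fixes the epipolar geometry needed to refine the object space.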
NASA Astrophysics Data System (ADS)
Portier-Fozzani, F.; Noens, J.-C.
In this presentation, I will present different techniques for 3D reconstruction of coronal structures. A multiscale vision model (MVM, collaboration with A. Bijaoui) based on wavelet decomposition was used to prepare the data. With SOHO/EIT, geometrical constraints were added so that loop size parameters could be measured by stereovision. From these parameters, and including information from several observation wavelengths, it was possible to use the CHIANTI code to derive temperature and density along and across the loops, and thus to determine the loops' physical properties. During the emergence of a new active region, a more sophisticated method was developed to measure variations in the degree of twist. Loops appear twisted and detwist as they expand. The conservation of magnetic helicity thus gives important criteria for deriving the stability limit for a non-forced phenomenon. Sigmoids, twisted ARLs, and sheared filaments are related to flares and CMEs. In such cases, 3D measurement can indicate at which level of twist the structure becomes unstable. With basic geometrical measures, it has been observed that a new active region reconnected with a sigmoid, leading to a flare. Also, for CMEs, measuring the filament ejection angle from stereo EUV images, and following the temporal evolution with coronagraphic measurements such as those made by HACO at the Pic du Midi Observatory, makes it possible to determine whether a CME is heading toward the Earth and when its eventual impact with the magnetosphere would occur. The input of new missions such as STEREO/SECCHI will allow us to better understand coronal dynamics. Such joint ground-based and space observations, used simultaneously together with 3D methods, will allow efficient forecasting for space weather to be developed.
Sputtering, Surging Sun [HD Video
2017-12-08
STEREO (Ahead) caught the action as one edge of a single active region spurted out more than a dozen surges of plasma in less than two days (Feb. 15-16, 2010). As seen in extreme UV light, the surges were narrow and directional outbursts driven by intense magnetic activity in the active region. While these kinds of outbursts have been observed numerous times, it was the frequency of so many surges in a short span of time that caught our attention. In this wavelength of UV light we are seeing singly ionized Helium at about 60,000 degrees C. For more information: stereo.gsfc.nasa.gov/ Credit: NASA/GSFC/STEREO To learn more about NASA's Sun Earth Day go here: sunearthday.nasa.gov/2010/index.php
Photogrammetry research for FAST eleven-meter reflector panel surface shape measurement
NASA Astrophysics Data System (ADS)
Zhou, Rongwei; Zhu, Lichun; Li, Weimin; Hu, Jingwen; Zhai, Xuebing
2010-10-01
In order to design and manufacture the active-reflector measuring equipment for the Five-hundred-meter Aperture Spherical Radio Telescope (FAST), measurement of each reflector panel's surface shape was carried out: static measurement of the whole neutral spherical network of nodes was performed, and real-time dynamic measurement of the cable network's dynamic deformation was undertaken. In the implementation of FAST, reflector panel surface shape inspection was completed before installation of the eleven-meter reflector panels. A binocular vision system was constructed based on the binocular stereo vision methods of machine vision, and the eleven-meter reflector panel surface shape was measured photogrammetrically. The cameras were calibrated with feature points: under a linear camera model, a lighting spot array was used as the calibration pattern, and the intrinsic and extrinsic parameters were acquired. Images were collected with the two cameras for digital image processing and analysis; feature points were extracted with a characteristic point detection algorithm, and those points were matched using the epipolar constraint. The three-dimensional coordinates of the feature points were reconstructed, and the reflector panel surface shape was established by curve and surface fitting. The error of the reflector panel surface shape was calculated to realize automatic measurement of the panel surface shape. The results show that the unit reflector panel surface inspection accuracy was 2.30 mm, within the standard deviation error budget of 5.00 mm. Compared with the required machining precision of the reflector panels, photogrammetry has adequate precision and operational feasibility for eleven-meter reflector panel surface shape measurement for FAST.
NASA Astrophysics Data System (ADS)
Hannachi, Ammar; Kohler, Sophie; Lallement, Alex; Hirsch, Ernest
2015-04-01
3D modeling of scene contents is taking on increasing importance for many computer vision based applications. In particular, industrial applications of computer vision require efficient tools for the computation of this 3D information. Routinely, stereo-vision is a powerful technique to obtain the 3D outline of imaged objects from the corresponding 2D images; as a consequence, this approach provides only a poor and partial description of the scene contents. On the other hand, with structured light based reconstruction techniques, the 3D surfaces of imaged objects can often be computed with high accuracy; however, the resulting active range data fail to characterize the object edges. Thus, in order to benefit from the strengths of the various acquisition techniques, we introduce in this paper promising approaches enabling complete 3D reconstruction based on the cooperation of two complementary acquisition and processing techniques, in our case stereoscopic and structured light based methods, providing two 3D data sets describing respectively the outlines and surfaces of the imaged objects. We present, accordingly, the principles of three fusion techniques and their comparison based on evaluation criteria related to the nature of the workpiece and the type of application tackled. The proposed fusion methods rely on geometric characteristics of the workpiece, which favour the quality of the registration. Further, the results obtained demonstrate that the developed approaches are well adapted for 3D modeling of manufactured parts including free-form surfaces and, consequently, for quality control applications using these 3D reconstructions.
Hand-Eye Calibration of Robonaut
NASA Technical Reports Server (NTRS)
Nickels, Kevin; Huber, Eric
2004-01-01
NASA's Human Space Flight program depends heavily on Extra-Vehicular Activities (EVAs) performed by human astronauts. EVA is a high risk environment that requires extensive training and ground support. In collaboration with the Defense Advanced Research Projects Agency (DARPA), NASA is conducting a ground development project to produce a robotic astronaut's assistant, called Robonaut, that could help reduce human EVA time and workload. The project described in this paper designed and implemented a hand-eye calibration scheme for Robonaut Unit A. The intent of this calibration scheme is to improve the hand-eye coordination of the robot. The basic approach is to use kinematic and stereo vision measurements, namely the joint angles self-reported by the right arm and 3-D positions of a calibration fixture as measured by vision, to estimate the transformation from Robonaut's base coordinate system to its hand coordinate system and to its vision coordinate system. Two methods of gathering data sets have been developed, along with software to support each. In the first, the system observes the robotic arm and neck angles as the robot is operated under external control, measures the 3-D position of a calibration fixture using Robonaut's stereo cameras, and logs these data. In the second, the system drives the arm and neck through a set of pre-recorded configurations, and data are again logged. Two variants of the calibration scheme have been developed. The full calibration scheme is a batch procedure that estimates all relevant kinematic parameters of the arm and neck of the robot. The daily calibration scheme estimates only joint offsets for each rotational joint on the arm and neck, which are assumed to change from day to day. The schemes have been designed to be automatic and easy to use so that the robot can be fully recalibrated when needed, such as after repair or upgrade, and can be partially recalibrated after each power cycle. The scheme has been implemented on Robonaut Unit A and has been shown to reduce the mismatch between kinematically derived positions and visually derived positions from a mean of 13.75 cm using the previous calibration to means of 1.85 cm using a full calibration and 2.02 cm using a suboptimal but faster daily calibration. This improved calibration has already enabled the robot to more accurately reach for and grasp objects that it sees within its workspace. The system has been used to support an autonomous wrench-grasping experiment and significantly improved the workspace positioning of the hand based on visually derived wrench position estimates.
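The daily-calibration variant reduces to a small nonlinear least-squares problem: find per-joint offsets that best align kinematically predicted fixture positions with the stereo-vision measurements. A toy sketch with a planar 2-link arm standing in for Robonaut's kinematics (everything here is illustrative, not the actual Robonaut model):

    import numpy as np
    from scipy.optimize import least_squares

    L1, L2 = 0.3, 0.25                    # toy link lengths (m)

    def forward_kinematics(q):
        """Planar 2-link arm: joint angles -> end-effector (x, y)."""
        x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
        y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
        return np.array([x, y])

    def residuals(offsets, joint_logs, vision_logs):
        """Stack FK(q + offsets) - vision over all logged configurations."""
        return np.concatenate([forward_kinematics(q + offsets) - p
                               for q, p in zip(joint_logs, vision_logs)])

    # Simulated logs: true offsets of ~2 degrees per joint, noisy vision fixes.
    rng = np.random.default_rng(0)
    true_off = np.radians([2.0, -1.5])
    joint_logs = rng.uniform(-1.0, 1.0, size=(20, 2))
    vision_logs = [forward_kinematics(q + true_off) + rng.normal(0, 1e-3, 2)
                   for q in joint_logs]

    fit = least_squares(residuals, x0=np.zeros(2), args=(joint_logs, vision_logs))
    print(np.degrees(fit.x))   # recovers roughly [2.0, -1.5]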
Mobile Autonomous Humanoid Assistant
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.
2004-01-01
A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway™ Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid well suited to aiding human co-workers in a range of environments. This system uses stereo vision to locate human teammates and tools, and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
Computer vision research with new imaging technology
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Liu, Fei; Sun, Zhenan
2015-12-01
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods perform well only on strongly textured surfaces; the depth map contains numerous holes and large ambiguities in textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via the fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera under different poses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, P.F.; Wang, J.S.; Chao, Y.J.
Stereo vision is used to study the fracture behavior of a compact tension (CT) specimen made from 304L stainless steel. During crack tip blunting, initiation, and growth in the CT specimen, both in-plane and out-of-plane displacement fields near the crack tip are measured by stereo vision. Based on the plane stress assumption and the deformation theory of plasticity, the J integral is evaluated along several rectangular paths surrounding the crack tip by using the measured in-plane displacement field. Prior to crack growth, the J integral is path independent. For crack extension up to Δa ≈ 3 mm, the near-field J integral values are 6% to 10% lower than the far-field J integral values. For crack extension of Δa ≈ 4 mm, the J integral loses path independence. The far-field J integral values are in good agreement with results obtained from Merkle-Corten's formula. Both J-Δa and CTOA-Δa curves are obtained by computing the J integral value and crack tip opening angle (CTOA) at each Δa. Results indicate that the CTOA reached a nearly constant value at a crack extension of Δa = 3 mm, with a leveled resistance curve thereafter. Also, the J integral value is determined from the maximum transverse diameter of the shadow spots, which are generated using the out-of-plane displacement field. Results indicate that for crack extension up to 0.25 mm, the J integral values evaluated using the out-of-plane displacement are close to those obtained using in-plane displacements and Merkle-Corten's formula.
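For reference, the path-independent quantity being evaluated along each rectangular contour is Rice's J integral; the standard definition (not restated in the abstract) is

    J = \int_{\Gamma} \left( W \, \mathrm{d}y \;-\; T_i \, \frac{\partial u_i}{\partial x} \, \mathrm{d}s \right)

where Γ is a contour surrounding the crack tip, W is the strain energy density, T_i is the traction vector on Γ, u_i are the displacement components (here supplied by the stereo-vision measurements), and x is the direction of crack extension.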
Measuring and tracking eye movements of a behaving archer fish by real-time stereo vision.
Ben-Simon, Avi; Ben-Shahar, Ohad; Segev, Ronen
2009-11-15
The archer fish (Toxotes chatareus) exhibits unique visual behavior: it is able to aim at insects resting on the foliage above the water level, shoot them down with a squirt of water, and then feed on them. This extreme behavior requires excellent visual acuity, learning, and tight synchronization between the visual system and body motion. This behavior also raises many important questions, such as the fish's ability to compensate for air-water refraction and the neural mechanisms underlying target acquisition. While many such questions remain open, significant insights towards solving them can be obtained by tracking the eye and body movements of freely behaving fish. Unfortunately, existing tracking methods suffer from either a high level of invasiveness or low resolution. Here, we present a video-based eye tracking method for accurately and remotely measuring the eye and body movements of a freely moving, behaving fish. Based on a stereo vision system and a unique triangulation method that corrects for air-glass-water refraction, we are able to measure the full three-dimensional pose of the fish's eye and body with high temporal and spatial resolution. Our method, being generic, can be applied to studying the behavior of marine animals in general. We demonstrate how data collected by our method may be used to show that the hunting behavior of the archer fish is composed of surfacing concomitant with rotating the body around the direction of the fish's fixed gaze towards the target, until the snout reaches the correct shooting position at water level.
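The refraction correction at the heart of such a triangulation can be sketched with the vector form of Snell's law, applied at each interface along a camera ray (the paper's chain is air-glass-water; the sketch below shows a single flat interface, with indices and geometry as illustrative assumptions):

    import numpy as np

    def refract(d, n, n1, n2):
        """Vector Snell's law: refract unit ray d at a surface with unit normal n
        (pointing back toward the incoming ray), from index n1 into n2."""
        eta = n1 / n2
        cos_i = -np.dot(n, d)
        sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
        if sin2_t > 1.0:
            return None                    # total internal reflection
        return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

    # A camera ray entering the water through a flat horizontal surface:
    d = np.array([0.3, 0.0, -1.0]); d /= np.linalg.norm(d)   # downward ray in air
    n = np.array([0.0, 0.0, 1.0])                            # surface normal (up)
    d_water = refract(d, n, 1.000, 1.333)

    # Triangulation then intersects the *refracted* rays from both cameras,
    # rather than the straight-line rays a naive stereo system would use.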
Silva, Paolo S; Walia, Saloni; Cavallerano, Jerry D; Sun, Jennifer K; Dunn, Cheri; Bursell, Sven-Erik; Aiello, Lloyd M; Aiello, Lloyd Paul
2012-09-01
To compare agreement between diagnosis of clinical level of diabetic retinopathy (DR) and diabetic macular edema (DME) derived from nonmydriatic fundus images using a digital camera back optimized for low-flash image capture (MegaVision) compared with standard seven-field Early Treatment Diabetic Retinopathy Study (ETDRS) photographs and dilated clinical examination. Subject comfort and image acquisition time were also evaluated. In total, 126 eyes from 67 subjects with diabetes underwent Joslin Vision Network nonmydriatic retinal imaging. ETDRS photographs were obtained after pupillary dilation, and fundus examination was performed by a retina specialist. There was near-perfect agreement between MegaVision and ETDRS photographs (κ=0.81, 95% confidence interval [CI] 0.73-0.89) for clinical DR severity levels. Substantial agreement was observed with clinical examination (κ=0.71, 95% CI 0.62-0.80). For DME severity level there was near-perfect agreement with ETDRS photographs (κ=0.92, 95% CI 0.87-0.98) and moderate agreement with clinical examination (κ=0.58, 95% CI 0.46-0.71). The wider MegaVision 45° field led to identification of nonproliferative changes in areas not imaged by the 30° field of ETDRS photos. Field area unique to ETDRS photographs identified proliferative changes not visualized with MegaVision. Mean MegaVision acquisition time was 9:52 min. After imaging, 60% of subjects preferred the MegaVision lower flash settings. When evaluated using a rigorous protocol, images captured using a low-light digital camera compared favorably with ETDRS photography and clinical examination for grading level of DR and DME. Furthermore, these data suggest the importance of more extensive peripheral images and suggest that utilization of wide-field retinal imaging may further improve accuracy of DR assessment.
2017-12-08
NASA image acquired May 1, 2010. As an active region rotated into view, it blew out three relatively small eruptions over about two days (Apr. 30 - May 2) as STEREO (Ahead) observed in extreme UV light. The first one was the largest and exhibited a pronounced twisting motion (shown in the still from May 1, 2010). The plasma, not far above the Sun's surface in these images, is ionized Helium heated to about 60,000 degrees. Note, too, the movement of plasma flowing along magnetic field lines that extend out beyond and loop back into the Sun's surface. Such activity occurs every day and is part of the dynamism of the changing Sun. Credit: NASA/GSFC/STEREO To learn more about STEREO go to: soho.nascom.nasa.gov/home.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.
Autonomous Robotic Inspection in Tunnels
NASA Astrophysics Data System (ADS)
Protopapadakis, E.; Stentoumis, C.; Doulamis, N.; Doulamis, A.; Loupos, K.; Makantasis, K.; Kopsiaftis, G.; Amditis, A.
2016-06-01
In this paper, an automatic robotic inspector for tunnel assessment is presented. The proposed platform is able to autonomously navigate within civil infrastructure, grab stereo images and process/analyse them in order to identify defect types. First, cracks are detected via deep learning approaches. Then, a detailed 3D model of the cracked area is created utilizing photogrammetric methods. Finally, laser profiling of the tunnel's lining is performed for a narrow region close to the detected crack, allowing potential deformations to be deduced. The robotic platform consists of an autonomous mobile vehicle and a crane arm, guided by the computer vision based crack detector, carrying ultrasound sensors, the stereo cameras and the laser scanner. Visual inspection is based on convolutional neural networks, which support the creation of high-level discriminative features for complex non-linear pattern classification. Then, real-time 3D information is accurately calculated and the crack position and orientation are passed to the robotic platform. The entire system has been evaluated in railway and road tunnels, i.e., in the Egnatia Highway and London Underground infrastructure.
NASA Astrophysics Data System (ADS)
Su, Yanfeng; Cai, Zhijian; Liu, Quan; Lu, Yifan; Guo, Peiliang; Shi, Lingyan; Wu, Jianhong
2018-04-01
In this paper, an autostereoscopic three-dimensional (3D) display system based on synthetic hologram reconstruction is proposed and implemented. The system uses a single phase-only spatial light modulator to load the synthetic hologram of the left and right stereo images, and the parallax angle between the two reconstructed stereo images is enlarged by a grating to meet the split-angle requirement of normal stereoscopic vision. To realize a crosstalk-free autostereoscopic 3D display with high light utilization efficiency, the groove parameters of the grating are specifically designed using rigorous coupled-wave theory to suppress the zero-order diffraction, and the zero-order-nulled grating is then fabricated by holographic lithography and ion beam etching. Furthermore, the diffraction efficiency of the fabricated grating is measured under illumination by a laser beam with a wavelength of 532 nm. Finally, the experimental verification system for the proposed autostereoscopic 3D display is presented. The experimental results prove that the proposed system is able to generate stereoscopic 3D images with good performance.
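Zero-order suppression in a binary phase grating can be understood with simple scalar diffraction theory, even though the authors' groove design relied on rigorous coupled-wave analysis. The sketch below (all parameters illustrative) evaluates the scalar efficiencies and shows the zero order vanishing at a π phase depth with a 50% duty cycle.

```python
import numpy as np

def binary_phase_grating_efficiency(phi, duty=0.5, order=0):
    """Scalar-theory diffraction efficiency of a binary phase grating.

    phi  : phase depth in radians (2*pi*(n-1)*groove_depth/wavelength)
    duty : duty cycle of the grooves
    order: diffraction order m
    """
    # Fourier coefficient of the transmittance: exp(i*phi) on the groove,
    # 1 elsewhere over one grating period.
    if order == 0:
        c = duty * np.exp(1j * phi) + (1 - duty)
    else:
        m = order
        c = (np.exp(1j * phi) - 1) * np.sin(np.pi * m * duty) / (np.pi * m)
    return abs(c) ** 2

for phi in (np.pi / 2, np.pi):
    e0 = binary_phase_grating_efficiency(phi, order=0)
    e1 = binary_phase_grating_efficiency(phi, order=1)
    print(f"phi={phi:.2f} rad: eta_0={e0:.3f}, eta_+1={e1:.3f}")
```

At phi = π the zero order is fully extinguished and each first order carries the scalar-theory maximum of 4/π² ≈ 40.5% of the light.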
Stochastic performance modeling and evaluation of obstacle detectability with imaging range sensors
NASA Technical Reports Server (NTRS)
Matthies, Larry; Grandjean, Pierrick
1993-01-01
Statistical modeling and evaluation of the performance of obstacle detection systems for Unmanned Ground Vehicles (UGVs) is essential for the design, evaluation, and comparison of sensor systems. In this report, we address this issue for imaging range sensors by dividing the evaluation problem into two levels: quality of the range data itself and quality of the obstacle detection algorithms applied to the range data. We review existing models of the quality of range data from stereo vision and AM-CW LADAR, then use these to derive a new model for the quality of a simple obstacle detection algorithm. This model predicts the probability of detecting obstacles and the probability of false alarms, as a function of the size and distance of the obstacle, the resolution of the sensor, and the level of noise in the range data. We evaluate these models experimentally using range data from stereo image pairs of a gravel road with known obstacles at several distances. The results show that the approach is a promising tool for predicting and evaluating the performance of obstacle detection with imaging range sensors.
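The flavor of such a model can be sketched with a toy calculation: stereo range noise grows quadratically with distance, and an obstacle is declared when the measured elevation exceeds a threshold, so both the detection and false-alarm probabilities reduce to Gaussian tail probabilities. The equations and numbers below are illustrative assumptions, not the paper's exact model.

```python
import math

def normal_tail(x):
    """P(Z > x) for a standard normal variable Z."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def detection_and_false_alarm(h_obst, range_m, baseline, focal_px,
                              disparity_noise_px, threshold):
    """Toy stereo obstacle-detection model (not the paper's equations).

    Stereo range noise grows quadratically with distance:
        sigma_z = z**2 * sigma_d / (f * B)
    An obstacle of height h is 'detected' when the measured elevation
    exceeds the threshold; a flat patch triggers a false alarm the same way.
    """
    sigma_z = range_m ** 2 * disparity_noise_px / (focal_px * baseline)
    p_detect = normal_tail((threshold - h_obst) / sigma_z)
    p_false = normal_tail(threshold / sigma_z)
    return p_detect, p_false

pd, pfa = detection_and_false_alarm(h_obst=0.30, range_m=10.0, baseline=0.25,
                                    focal_px=800.0, disparity_noise_px=0.3,
                                    threshold=0.15)
print(f"P(detect) = {pd:.3f}, P(false alarm) = {pfa:.4f}")
```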
Autonomous navigation and control of a Mars rover
NASA Technical Reports Server (NTRS)
Miller, D. P.; Atkinson, D. J.; Wilcox, B. H.; Mishkin, A. H.
1990-01-01
A Mars rover will need to be able to navigate autonomously over kilometers at a time. This paper outlines the sensing, perception, planning, and execution-monitoring systems that are currently being designed for the rover. The sensing is based around stereo vision. The interpretation of the images uses a registration of the depth map with a global height map provided by an orbiting spacecraft. Safe, low-energy paths are then planned through the map, and expectations of what the rover's articulation sensors should sense are generated. These expectations are then used to ensure that the planned path is being executed correctly.
NASA Astrophysics Data System (ADS)
Hoefflinger, Bernd
Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access and global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer megapixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to one, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill factor. 3D vision, which relies on stereo or on time-of-flight high-speed circuitry, will also benefit from scaled-down CMOS technologies, both because of their size and because of their higher speed.
Application of integral imaging autostereoscopic display to medical training equipment
NASA Astrophysics Data System (ADS)
Nagatani, Hiroyuki
2010-02-01
We applied an autostereoscopic display based on the integral imaging method (II method) to training equipment for medical treatment, in an attempt to recover the binocular vision performance of strabismus or amblyopia (lazy eye) patients. This report summarizes the application method and results. The point of the training is to recognize parallax using both eyes. The strabismus or amblyopia patients have to register the information in both eyes equally when they gaze at the display with parallax and perceive the stereo depth of the content. Participants in this interactive training engage actively with the image. As a result, they are able to revive their binocular visual function while playing a game. Through the training, the observers became able to recognize the amount of parallax correctly. In addition, the training level can be changed according to the eyesight difference between the right and left eyes. We thus ascertained that practical application of the II method for strabismus or amblyopia patients would be possible.
NASA Astrophysics Data System (ADS)
Wu, Tao; Cheung, Tak-Hong; Yim, So-Fan; Qu, Jianan Y.
2010-03-01
A quantitative colposcopic imaging system for the diagnosis of early cervical cancer is evaluated in a clinical study. This imaging technology based on 3-D active stereo vision and motion tracking extracts diagnostic information from the kinetics of acetowhitening process measured from the cervix of human subjects in vivo. Acetowhitening kinetics measured from 137 cervical sites of 57 subjects are analyzed and classified using multivariate statistical algorithms. Cross-validation methods are used to evaluate the performance of the diagnostic algorithms. The results show that an algorithm for screening precancer produced 95% sensitivity (SE) and 96% specificity (SP) for discriminating normal and human papillomavirus (HPV)-infected tissues from cervical intraepithelial neoplasia (CIN) lesions. For a diagnostic algorithm, 91% SE and 90% SP are achieved for discriminating normal tissue, HPV infected tissue, and low-grade CIN lesions from high-grade CIN lesions. The results demonstrate that the quantitative colposcopic imaging system could provide objective screening and diagnostic information for early detection of cervical cancer.
A combined vision-inertial fusion approach for 6-DoF object pose estimation
NASA Astrophysics Data System (ADS)
Li, Juan; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.
2015-02-01
The estimation of the 3D position and orientation of moving objects ('pose' estimation) is a critical process for many applications in robotics, computer vision or mobile services. Although major research efforts have been carried out to design accurate, fast and robust indoor pose estimation systems, it remains an open challenge to provide a low-cost, easy-to-deploy and reliable solution. Addressing this issue, this paper describes a hybrid approach for 6-degrees-of-freedom (6-DoF) pose estimation that fuses acceleration data and stereo vision to overcome the respective weaknesses of single-technology approaches. The system relies on COTS technologies (standard webcams, accelerometers) and printable colored markers. It uses a set of infrastructure cameras, located so that the object to be tracked is visible most of the operation time; the target object has to include an embedded accelerometer and be tagged with a fiducial marker. This simple marker has been designed for easy detection and segmentation, and it may be adapted to different service scenarios (in shape and colors). Experimental results show that the proposed system provides high accuracy, while satisfactorily dealing with the real-time constraints.
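The core idea of fusing a drift-free but low-rate vision fix with a high-rate but drifting accelerometer can be illustrated with a one-axis constant-gain filter. This is a deliberately minimal sketch with made-up gains and rates; the paper's actual estimator handles full 6-DoF pose and marker detection.

```python
import numpy as np

class VisionInertialFilter:
    """Minimal constant-gain (complementary) fusion of accelerometer data
    and low-rate vision position fixes along one axis. Illustrative only;
    the paper's estimator covers full 6-DoF pose."""

    def __init__(self, k_pos=0.2, k_vel=0.5):
        self.pos, self.vel = 0.0, 0.0
        self.k_pos, self.k_vel = k_pos, k_vel

    def predict(self, accel, dt):
        # Dead-reckon between camera frames using the accelerometer.
        self.vel += accel * dt
        self.pos += self.vel * dt

    def correct(self, vision_pos):
        # Pull the drifting inertial estimate toward the vision fix.
        innovation = vision_pos - self.pos
        self.pos += self.k_pos * innovation
        self.vel += self.k_vel * innovation

f = VisionInertialFilter()
rng = np.random.default_rng(0)
for step in range(100):
    f.predict(accel=0.1 + rng.normal(0, 0.02), dt=0.01)  # 100 Hz IMU
    if step % 10 == 9:                                   # 10 Hz camera fix
        f.correct(vision_pos=0.5 * 0.1 * ((step + 1) * 0.01) ** 2)
print(f"fused position: {f.pos:.4f} m")
```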
Local spatial frequency analysis for computer vision
NASA Technical Reports Server (NTRS)
Krumm, John; Shafer, Steven A.
1990-01-01
A sense of vision is a prerequisite for a robot to function in an unstructured environment. However, real-world scenes contain many interacting phenomena that lead to complex images which are difficult to interpret automatically. Typical computer vision research proceeds by analyzing various effects in isolation (e.g., shading, texture, stereo, defocus), usually on images devoid of realistic complicating factors. This leads to specialized algorithms which fail on real-world images. Part of this failure is due to the dichotomy of useful representations for these phenomena. Some effects are best described in the spatial domain, while others are more naturally expressed in frequency. In order to resolve this dichotomy, we present the combined space/frequency representation which, for each point in an image, shows the spatial frequencies at that point. Within this common representation, we develop a set of simple, natural theories describing phenomena such as texture, shape, aliasing and lens parameters. We show these theories lead to algorithms for shape from texture and for dealiasing image data. The space/frequency representation should be a key aid in untangling the complex interaction of phenomena in images, allowing automatic understanding of real-world scenes.
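A discrete analogue of the space/frequency representation is easy to build: slide a window across the image and take the magnitude of a windowed 2D FFT at each grid point, so every image location gets its own local spectrum. The sketch below is an illustrative construction (window sizes and the chirp test pattern are arbitrary choices), showing the local peak frequency shifting with position.

```python
import numpy as np

def local_spectrum(image, window=16, step=8):
    """Windowed 2D FFT magnitude at a grid of image points: a simple
    discrete space/frequency representation."""
    h, w = image.shape
    win = np.hanning(window)[:, None] * np.hanning(window)[None, :]
    spectra = {}
    for y in range(0, h - window + 1, step):
        for x in range(0, w - window + 1, step):
            patch = image[y:y + window, x:x + window] * win
            spectra[(y, x)] = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    return spectra

# Synthetic texture whose spatial frequency increases to the right
# (a chirp), so the local spectra shift with position.
x = np.arange(128)
img = np.sin(2 * np.pi * (0.05 * x + 0.0015 * x ** 2))[None, :].repeat(128, axis=0)
spec = local_spectrum(img)
left, right = spec[(56, 0)], spec[(56, 104)]
print("peak freq bin (left): ", np.unravel_index(left.argmax(), left.shape))
print("peak freq bin (right):", np.unravel_index(right.argmax(), right.shape))
```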
Multi-camera synchronization core implemented on USB3 based FPGA platform
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Centered on Awaiba's NanEye CMOS image sensor family and a FPGA platform with USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency being controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of smaller than 3 mm diameter 3D stereo vision equipment in the medical endoscopic context, such as endoscopic surgical robots or micro-invasive surgery.
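The regulation loop described here amounts to a simple proportional controller: a self-timed sensor clocks faster at a higher supply voltage, so each Slave nudges its own voltage in proportion to the error between its measured line period and the Master's. The sketch below uses an invented linear voltage-to-period model, gain, and voltage limits purely to show the loop converging; none of these values are Awaiba's.

```python
def sync_step(v_supply, line_period_meas, line_period_ref,
              k_p=0.005, v_min=1.6, v_max=2.1):
    """One iteration of the supply-voltage control loop: a self-timed
    sensor runs faster at a higher supply voltage, so nudge the voltage
    proportionally to the line-period error. Illustrative gains/limits."""
    error = line_period_meas - line_period_ref    # positive => camera slow
    v_new = v_supply + k_p * error                # raise voltage to speed up
    return min(max(v_new, v_min), v_max)

def camera_line_period(v):
    """Toy plant: line period (us) shrinks as supply voltage rises."""
    return 40.0 - 12.0 * (v - 1.8)

v = 1.75
target = camera_line_period(1.85)                 # the Master's line period
for _ in range(200):
    v = sync_step(v, camera_line_period(v), target)
print(f"converged supply: {v:.3f} V, "
      f"period error: {camera_line_period(v) - target:.5f} us")
```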
Image synchronization for 3D application using the NanEye sensor
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Based on Awaiba's NanEye CMOS image sensor family and a FPGA platform with USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency being controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of smaller than 3 mm diameter 3D stereo vision equipment in the medical endoscopic context, such as endoscopic surgical robots or micro-invasive surgery.
NASA Technical Reports Server (NTRS)
2007-01-01
STEREO was able to capture bright loops in exquisite detail as they arced above an active region (May 26, 2007) over an 18-hour period. What we are actually seeing are charged particles spinning along magnetic field lines that extend above the Sun's surface. Active regions are areas of intense magnetic activity and often the source of solar storms. In fact, the clip ends with a flourish in which a small coronal mass ejection (CME) blows out into space. This is from the STEREO Ahead spacecraft at the 171 Angstrom wavelength in extreme ultraviolet light.
Panoramic 3d Vision on the ExoMars Rover
NASA Astrophysics Data System (ADS)
Paar, G.; Griffiths, A. D.; Barnes, D. P.; Coates, A. J.; Jaumann, R.; Oberst, J.; Gao, Y.; Ellery, A.; Li, R.
The Pasteur payload on the ESA ExoMars Rover 2011/2013 is designed to search for evidence of extant or extinct life either on or up to ~2 m below the surface of Mars. The rover will be equipped with a panoramic imaging system, to be developed by a UK, German, Austrian, Swiss, Italian and French team, for visual characterization of the rover's surroundings and (in conjunction with an infrared imaging spectrometer) remote detection of potential sample sites. The Panoramic Camera system consists of a wide-angle multispectral stereo pair with 65° field-of-view (WAC; 1.1 mrad/pixel) and a high-resolution monoscopic camera (HRC; current design having 59.7 µrad/pixel with 3.5° field-of-view). Its scientific goals and operational requirements can be summarized as follows: • Determination of objects to be investigated in situ by other instruments, for operations planning • Backup and support for the rover visual navigation system (path planning, determination of subsequent rover positions and orientation/tilt within the 3d environment), and localization of the landing site (by stellar navigation or by combination of orbiter and ground panoramic images) • Geological characterization (using narrow-band geology filters) and cartography of the local environments (local Digital Terrain Model or DTM) • Study of atmospheric properties and variable phenomena near the Martian surface (e.g. aerosol opacity, water vapour column density, clouds, dust devils, meteors, surface frosts) • Geodetic studies (observations of Sun, bright stars, Phobos/Deimos). The performance of 3d data processing is a key element of mission planning and scientific data analysis. The 3d Vision Team within the Panoramic Camera development Consortium reports on the current status of development, consisting of the following items: • Hardware Layout & Engineering: The geometric setup of the system (location on the mast & viewing angles, mutual mounting between WAC and HRC) needs to be optimized w.r.t. fields of view, ranging capability (distance measurement capability), data rate, necessity of calibration targets, hardware & data interfaces to other subsystems (e.g. navigation), as well as accuracy impacts of sensor design and compression ratio. • Geometric Calibration: The geometric properties of the individual cameras including various spectral filters, their mutual relations, and the dynamic geometrical relation between rover frame and cameras - with the mast in between - are precisely described by a calibration process. During surface operations these relations will be continuously checked and updated by photogrammetric means; environmental influences such as temperature, pressure and the Mars gravity will be taken into account. • Surface Mapping: Stereo imaging using the WAC stereo pair is used for the 3d reconstruction of the rover vicinity to identify, locate and characterize potentially interesting spots (3-10 for an experimental cycle to be performed within approx. 10-30 sols). The HRC is used for high-resolution imagery of these regions of interest, to be overlaid on the 3d reconstruction and potentially refined by shape-from-shading techniques. A quick processing result is crucial for time-critical operations planning; therefore emphasis is laid on automatic behaviour and intrinsic error-detection mechanisms. The mapping results will be continuously fused, updated and synchronized with the map used by the navigation system. The surface representation needs to take into account the different resolutions of HRC and WAC, as well as uncommon or even unexpected image acquisition modes such as long-range, wide-baseline stereo from different rover positions, or escape strategies in the case of loss of one of the stereo camera heads. • Panorama Mosaicking: The production of a high-resolution stereoscopic panorama is nowadays state-of-the-art in computer vision. However, certain challenges, such as the need for access to accurate spherical coordinates, maintenance of radiometric & spectral response in various spectral bands, fusion between HRC and WAC, super-resolution, and again the requirement of quick yet robust processing, will add some complexity to the ground processing system. • Visualization for Operations Planning: Efficient operations planning is directly related to an ergonomic and well-performing visualization. It is intended to adapt existing tools to an integrated visualization solution for the purpose of scientific site characterization, view planning and reachability mapping/instrument placement of pointing sensors (including the panoramic imaging system itself), and selection of regions of interest. The main interfaces between the individual components, as well as the first version of a user requirement document, are currently under definition. Besides the support for sensor layout and calibration, the 3d vision system will consist of 2-3 main modules to be used during ground processing & utilization of the ExoMars Rover panoramic imaging system.
NASA Astrophysics Data System (ADS)
Santos, C. Almeida; Costa, C. Oliveira; Batista, J.
2016-05-01
The paper describes a kinematic model-based solution to estimate simultaneously the calibration parameters of the vision system and the full motion (6-DOF) of large civil engineering structures, namely long-deck suspension bridges, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming smooth structure motion, an Iterated Extended Kalman Filter is used to recursively estimate the projection matrices of the cameras and the structure's full motion (displacement and rotation) over time, helping to fulfil structural health monitoring requirements. Results related to the performance evaluation, obtained by numerical simulation and with real experiments, are reported. The real experiments were carried out in indoor and outdoor environments using a reduced structure model to impose controlled motions. In both cases, the results obtained with a minimum setup comprising only two cameras and four non-coplanar tracking points showed highly accurate on-line camera calibration and full-motion estimation of the structure.
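The recursive structure of such an estimator is easiest to see in a stripped-down linear case: predict with a smooth-motion model, then correct with each new measurement. The sketch below tracks a single displacement coordinate with a constant-velocity Kalman filter; the paper's Iterated Extended Kalman Filter runs the same predict/update cycle over camera projection matrices and 6-DOF motion, and all noise levels here are invented.

```python
import numpy as np

def kalman_step(x, P, z, dt, q=1e-4, r=4e-4):
    """One predict/update cycle of a constant-velocity Kalman filter
    tracking a single displacement coordinate. A linear sketch of the
    recursive structure only, not the paper's Iterated EKF."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity model
    H = np.array([[1.0, 0.0]])                     # we observe displacement
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x = F @ x                                      # predict state
    P = F @ P @ F.T + Q                            # predict covariance
    S = H @ P @ H.T + r                            # innovation covariance
    K = P @ H.T / S                                # Kalman gain
    x = x + (K * (z - H @ x)).ravel()              # correct with measurement
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(1)
for k in range(1, 101):
    true_disp = 0.01 * np.sin(0.1 * k)             # smooth structure motion
    x, P = kalman_step(x, P, true_disp + rng.normal(0, 0.02), dt=0.05)
print(f"estimate: {x[0]:+.4f} m, truth: {0.01 * np.sin(0.1 * 100):+.4f} m")
```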
A Structured Light Sensor System for Tree Inventory
NASA Technical Reports Server (NTRS)
Chien, Chiun-Hong; Zemek, Michael C.
2000-01-01
Tree inventory refers to the measurement and estimation of marketable wood volume in a piece of land or forest, for purposes such as investment or loan applications. Existing techniques rely on trained surveyors conducting measurements manually using simple optical or mechanical devices, and hence are time-consuming, subjective and error-prone. The advance of computer vision techniques makes it possible to conduct automatic measurements that are more efficient, objective and reliable. This paper describes 3D measurements of tree diameters using a uniquely designed ensemble of two line laser emitters rigidly mounted on a video camera. The proposed laser camera system relies on a fixed distance between two parallel laser planes and the projections of the laser lines to calculate tree diameters. Performance of the laser camera system is further enhanced by fusion of information induced from structured lighting with that contained in video images. A comparison is made between the laser camera sensor system and a stereo vision system previously developed for measurements of tree diameters.
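The key geometric trick can be sketched in a few lines: the known physical gap between the two parallel laser planes and their pixel separation in the image fix the metric scale at the trunk's depth, which converts the laser stripe's pixel extent into a diameter. Everything below (the numbers, the fronto-parallel assumption, the function itself) is an illustrative reconstruction, not the paper's algorithm.

```python
def tree_diameter(line_separation_px, trunk_width_px, plane_gap_m):
    """Estimate trunk diameter from two parallel laser planes.

    The known gap between the parallel laser planes and their pixel
    separation in the image fix the metric scale (m/pixel) at the trunk's
    depth; the pixel width of a laser stripe across the trunk then
    converts to meters. Assumes roughly fronto-parallel viewing; the
    actual system presumably applies a fuller camera model.
    """
    meters_per_pixel = plane_gap_m / line_separation_px
    return trunk_width_px * meters_per_pixel

# Hypothetical measurements: planes 0.30 m apart appear 150 px apart,
# and the laser stripe spans 120 px across the trunk.
print(f"diameter ~ {tree_diameter(150.0, 120.0, 0.30):.3f} m")
```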
High Throughput System for Plant Height and Hyperspectral Measurement
NASA Astrophysics Data System (ADS)
Zhao, H.; Xu, L.; Jiang, H.; Shi, S.; Chen, D.
2018-04-01
Hyperspectral and three-dimensional measurement can obtain the intrinsic physicochemical properties and the external geometrical characteristics of objects, respectively. Currently, a variety of sensors are integrated into a single system to collect spectral and morphological information in agriculture. However, previous experiments were usually performed with several commercial devices on a single platform, and inadequate registration and synchronization among the instruments often resulted in a mismatch between the spectral and 3D information of the same target. In addition, a narrow field of view (FOV) extends working hours in farms. Therefore, we propose a high-throughput prototype that combines stereo vision and grating dispersion to simultaneously acquire hyperspectral and 3D information.
Implementation of Headtracking and 3D Stereo with Unity and VRPN for Computer Simulations
NASA Technical Reports Server (NTRS)
Noyes, Matthew A.
2013-01-01
This paper explores low-cost hardware and software methods to provide depth cues traditionally absent in monocular displays. The use of a VRPN server in conjunction with a Microsoft Kinect and/or Nintendo Wiimote to provide head tracking information to a Unity application, and NVIDIA 3D Vision for retinal disparity support, is discussed. Methods are suggested to implement this technology with NASA's EDGE simulation graphics package, along with potential caveats. Finally, future applications of this technology to astronaut crew training, particularly when combined with an omnidirectional treadmill for virtual locomotion and NASA's ARGOS system for reduced gravity simulation, are discussed.
Autonomous unmanned air vehicles (UAV) techniques
NASA Astrophysics Data System (ADS)
Hsu, Ming-Kai; Lee, Ting N.
2007-04-01
UAVs (Unmanned Air Vehicles) have great potential in different civilian applications, such as oil pipeline surveillance, precision farming, forest fire fighting, search and rescue, and border patrol. The related industries can generate billions of dollars each year. However, the roadblock to adopting UAVs is that their operation runs against FAA (Federal Aviation Administration) and ATC (Air Traffic Control) regulations. In this paper, we have reviewed the latest technologies and research on UAV navigation and obstacle avoidance. We have proposed a system design of Jittering Mosaic Image Processing (JMIP) with stereo vision and optical flow to fulfill the functionalities of autonomous UAVs.
Counter sniper: a localization system based on dual thermal imager
NASA Astrophysics Data System (ADS)
He, Yuqing; Liu, Feihu; Wu, Zheng; Jin, Weiqi; Du, Benfang
2010-11-01
Sniper tactics are widely used in modern warfare, creating an urgent requirement for counter-sniper detection devices. This paper proposes an anti-sniper detection system based on a dual thermal imaging system. By combining the infrared characteristics of the muzzle flash and the bullet trajectory in the binocular infrared images obtained by the dual-infrared imaging system, the exact location of the sniper is analyzed and calculated. The paper mainly focuses on the system design method, including the structure and parameter selection. It also analyzes the location calculation method based on binocular stereo vision and image analysis, and gives the fused result as the sniper's position.
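For a rectified camera pair, binocular localization of a point event such as a muzzle flash reduces to classic stereo triangulation: depth is focal length times baseline over disparity. The sketch below uses invented camera parameters and pixel coordinates; the real system would additionally need lens models, rectification and trajectory analysis.

```python
def triangulate(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Classic rectified binocular triangulation: recover the 3D position
    of a point (e.g., a muzzle flash) seen in both thermal images.
    Camera parameters here are illustrative, not the system's."""
    disparity = u_left - u_right                 # pixels; must be > 0
    z = focal_px * baseline_m / disparity        # depth along optical axis
    x = (u_left - cx) * z / focal_px             # lateral offset
    y = (v - cy) * z / focal_px                  # vertical offset
    return x, y, z

x, y, z = triangulate(u_left=342.0, u_right=330.0, v=250.0,
                      focal_px=1200.0, baseline_m=0.5, cx=320.0, cy=240.0)
print(f"sniper position: x={x:.1f} m, y={y:.1f} m, range z={z:.1f} m")
```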
NASA Technical Reports Server (NTRS)
2006-01-01
Parallax gives depth to life. Simultaneous viewing from slightly different vantage points makes binocular humans superior to monocular cyclopes, and fixes us in the third dimension of the Universe. We've been stunned by 3-d images of Venus and Mars (along with more familiar views of earth). Now astronomers plan to give us the best view of all: 3-d images of the dynamic Sun. That's one of the prime goals of NASA's Solar Terrestrial Relations Observatories, also known as STEREO. STEREO is a pair of spacecraft observatories, one placed in orbit in front of earth, and one to be placed in an earth-trailing orbit. Simultaneous observations of the Sun with the two STEREO spacecraft will provide extraordinary 3-d views of all types of solar activity, especially the dramatic events called coronal mass ejections which send high-energy particles from the outer solar atmosphere hurtling towards earth. The image above is the first image of the Sun by the two STEREO spacecraft: an extreme ultraviolet shot of the Sun's million-degree corona, taken by the Extreme Ultraviolet Imager on the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) instrument package. STEREO's first 3-d solar images should be available in April if all goes well. Put on your red and blue glasses!
DeLorenzo, Christine; Papademetris, Xenophon; Staib, Lawrence H.; Vives, Kenneth P.; Spencer, Dennis D.; Duncan, James S.
2010-01-01
During neurosurgery, nonrigid brain deformation prevents preoperatively-acquired images from accurately depicting the intraoperative brain. Stereo vision systems can be used to track intraoperative cortical surface deformation and update preoperative brain images in conjunction with a biomechanical model. However, these stereo systems are often plagued with calibration error, which can corrupt the deformation estimation. In order to decouple the effects of camera calibration from the surface deformation estimation, a framework that can solve for disparate and often competing variables is needed. Game theory, which was developed to handle decision making in this type of competitive environment, has been applied to various fields from economics to biology. In this paper, game theory is applied to cortical surface tracking during neocortical epilepsy surgery and used to infer information about the physical processes of brain surface deformation and image acquisition. The method is successfully applied to eight in vivo cases, resulting in an 81% decrease in mean surface displacement error. This includes a case in which some of the initial camera calibration parameters had errors of 70%. Additionally, the advantages of using a game theoretic approach in neocortical epilepsy surgery are clearly demonstrated in its robustness to initial conditions. PMID:20129844
NASA Astrophysics Data System (ADS)
Mercer, Jason J.; Westbrook, Cherie J.
2016-11-01
Microform is important in understanding wetland functions and processes, but collecting imagery of and mapping the physical structure of peatlands is often expensive and requires specialized equipment. We assessed the utility of coupling computer-vision-based structure from motion with multiview stereo photogrammetry (SfM-MVS) and ground-based photos to map peatland topography. The SfM-MVS technique was tested on an alpine peatland in Banff National Park, Canada, and guidance was provided on minimizing errors. We found that coupling SfM-MVS with ground-based photos taken with a point-and-shoot camera is a viable and competitive technique for generating ultrahigh-resolution elevations (i.e., <0.01 m resolution, with a mean absolute error of 0.083 m). In evaluating 100+ viable SfM-MVS data collection and processing scenarios, vegetation was found to considerably influence accuracy: vegetation class, when accounted for, reduced absolute error by as much as 50%. The logistic flexibility of ground-based SfM-MVS, paired with its high resolution, low error, and low cost, makes it a research area worth developing as well as a useful addition to the wetland scientists' toolkit.
NASA Astrophysics Data System (ADS)
Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.
2015-05-01
Building fine 3D models spanning outdoor and indoor spaces is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas. In fact, a 3D model should contain detailed descriptions of both its appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which provides a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes of an object or a scene, as well as physical property information such as materials and textures. On the basis of this information, a 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture in two typical cultural tourism zones, namely the Tibetan and Qiang ethnic minority villages in Sichuan Jiuzhaigou Scenic Area and the Tujia ethnic minority villages in Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction and cultural tourism.
Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation
NASA Technical Reports Server (NTRS)
Lee, George
1992-01-01
A survey of systems capable of model deformation measurements was conducted. The survey included stereo cameras, scanners, and digitizers. Moiré, holographic, and heterodyne interferometry techniques were also examined. Stereo cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanners and digitizers can meet the model deformation requirements. Commercial stereo cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.
Beyond the cockpit: The visual world as a flight instrument
NASA Technical Reports Server (NTRS)
Johnson, W. W.; Kaiser, M. K.; Foyle, D. C.
1992-01-01
The use of cockpit instruments to guide flight control is not always an option (e.g., low level rotorcraft flight). Under such circumstances the pilot must use out-the-window information for control and navigation. Thus it is important to determine the basis of visually guided flight for several reasons: (1) to guide the design and construction of the visual displays used in training simulators; (2) to allow modeling of visibility restrictions brought about by weather, cockpit constraints, or distortions introduced by sensor systems; and (3) to aid in the development of displays that augment the cockpit window scene and are compatible with the pilot's visual extraction of information from the visual scene. The authors are actively pursuing these questions. We have on-going studies using both low-cost, lower fidelity flight simulators, and state-of-the-art helicopter simulation research facilities. Research results will be presented on: (1) the important visual scene information used in altitude and speed control; (2) the utility of monocular, stereo, and hyperstereo cues for the control of flight; (3) perceptual effects due to the differences between normal unaided daylight vision, and that made available by various night vision devices (e.g., light intensifying goggles and infra-red sensor displays); and (4) the utility of advanced contact displays in which instrument information is made part of the visual scene, as on a 'scene linked' head-up display (e.g., displaying altimeter information on a virtual billboard located on the ground).
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2016-12-01
A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high-temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.
Precise positioning method for multi-process connecting based on binocular vision
NASA Astrophysics Data System (ADS)
Liu, Wei; Ding, Lichao; Zhao, Kai; Li, Xiao; Wang, Ling; Jia, Zhenyuan
2016-01-01
With the rapid development of aviation and aerospace, the demand for metal-coated parts such as antenna reflectors, eddy-current sensors and signal transmitters is increasingly urgent. Such parts, with varied feature dimensions, complex three-dimensional structures, and high geometric accuracy, are generally fabricated by a combination of different manufacturing technologies. However, it is difficult to ensure the machining precision because of the connection error between different processing methods. Therefore, a precise positioning method based on binocular micro stereo vision is proposed in this paper. Firstly, a novel and efficient camera calibration method for the stereoscopic microscope is presented to solve the problems of narrow field of view, small depth of focus and numerous nonlinear distortions. Secondly, extraction algorithms for law curves and free curves are given, and the spatial position relationship between the micro vision system and the machining system is determined accurately. Thirdly, a precise positioning system based on micro stereo vision is set up and embedded in a CNC machining experiment platform. Finally, a verification experiment on the positioning accuracy is conducted, and the experimental results indicate that the average errors of the proposed method in the X and Y directions are 2.250 μm and 1.777 μm, respectively.
DLP™-based dichoptic vision test system
NASA Astrophysics Data System (ADS)
Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli
2010-01-01
It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and in vision rehabilitation, in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state-of-the-art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3%; the remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.
Automated design of image operators that detect interest points.
Trujillo, Leonardo; Olague, Gustavo
2008-01-01
This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved manmade designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered as unconventional operators for point detection, these results provide a new perspective to feature extraction research.
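To make the search space concrete: GP composes interest operators from primitives such as Gaussian smoothing, derivatives, and image arithmetic. The sketch below builds one such composition by hand, a difference-of-Gaussians response with local-maximum selection. It is illustrative of the kind of operator living in this space, not one of the 15 evolved operators reported in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_interest_operator(image, s1=1.0, s2=2.0):
    """An interest measure composed from GP-style primitives (two Gaussian
    blurs and an absolute difference): a difference-of-Gaussians response.
    Illustrative of the search space, not an evolved operator."""
    return np.abs(gaussian_filter(image, s1) - gaussian_filter(image, s2))

def detect_points(response, n=50):
    """Keep local maxima of the response map, strongest first."""
    peaks = (response == maximum_filter(response, size=5)) & (response > 0)
    ys, xs = np.nonzero(peaks)
    order = np.argsort(response[ys, xs])[::-1][:n]
    return list(zip(ys[order], xs[order]))

rng = np.random.default_rng(2)
img = rng.random((128, 128))
img[40:48, 60:68] += 2.0                     # a bright blob to detect
pts = detect_points(dog_interest_operator(img))
print("strongest interest point:", pts[0])
```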
NASA Technical Reports Server (NTRS)
Sterling, Alphonse C.; Moore, R. L.
2007-01-01
We present observations from Hinode, STEREO, and TRACE of a solar filament eruption and flare that occurred on 2007 March 2. Data from the two new satellites, combined with the TRACE observations, give us fresh insights into the eruption onset process. Hinode/XRT shows soft X-ray (SXR) activity beginning approximately 30 minutes prior to ignition of bright flare loops. STEREO and TRACE images show that the filament underwent relatively slow motions coinciding with the pre-eruption SXR brightenings, and that it underwent rapid eruptive motions beginning near the time of flare onset. Concurrent Hinode/SOT magnetograms showed substantial flux cancelation under the filament at the site of the pre-eruption SXR activity. From these observations we infer that progressive tether-cutting reconnection driven by photospheric convection caused the slow rise of the filament and led to its eruption. NASA supported this work through a NASA Heliophysics GI grant.
Mapping and localization for extraterrestrial robotic explorations
NASA Astrophysics Data System (ADS)
Xu, Fengliang
In the exploration of an extraterrestrial environment such as Mars, orbital data, such as high-resolution imagery from the Mars Orbital Camera-Narrow Angle (MOC-NA), laser ranging data from the Mars Orbital Laser Altimeter (MOLA), and multi-spectral imagery from the Thermal Emission Imaging System (THEMIS), play more and more important roles. However, these remote sensing techniques can never replace the role of landers and rovers, which can provide a close-up and inside view. Similarly, orbital mapping cannot compete with ground-level close-range mapping in resolution, precision, and speed. This dissertation addresses two tasks related to robotic extraterrestrial exploration: mapping and rover localization. Image registration is also discussed as an important aspect of both. Techniques from computer vision and photogrammetry are applied for automation and precision. Image registration is classified into three sub-categories: intra-stereo, inter-stereo, and cross-site, according to the relationship between stereo images. In intra-stereo registration, which is the most fundamental sub-category, interest-point-based registration and verification by parallax continuity in the principal direction are proposed. Two other techniques, inter-scanline search with constrained dynamic programming for far-range matching and Markov Random Field (MRF) based registration for large terrain variation, are explored as possible improvements. Mapping using rover ground images mainly involves the generation of a Digital Terrain Model (DTM) and an ortho-rectified map (orthomap). The first task is to derive the spatial distribution statistics from the first panorama and model the DTM with a dual polynomial model. This model is used for interpolation of the DTM, using Kriging in the close range and a Triangular Irregular Network (TIN) in the far range. To generate a uniformly illuminated orthomap from the DTM, a least-squares-based automatic intensity balancing method is proposed. Finally, a seamless orthomap is constructed by a split-and-merge technique: the mapped area is split or subdivided into small regions of image overlap, then each small map piece is processed and all of the pieces are merged together to form a seamless map. Rover localization has three stages, all of which use a least-squares adjustment procedure: (1) an initial localization, which is accomplished by adjustment over features common to rover images and orbital images, (2) an adjustment of image pointing angles at a single site through inter- and intra-stereo tie points, and (3) an adjustment of the rover traverse through manual cross-site tie points. The first stage is based on adjustment of observation angles of features; the second and third stages are based on bundle adjustment. In the third stage, an incremental adjustment method is proposed. Automation in rover localization includes automatic intra/inter-stereo tie point selection, computer-assisted cross-site tie point selection, and automatic verification of accuracy. (Abstract shortened by UMI.)
Inertial navigation sensor integrated obstacle detection system
NASA Technical Reports Server (NTRS)
Bhanu, Bir (Inventor); Roberts, Barry A. (Inventor)
1992-01-01
A system that incorporates inertial sensor information into optical flow computations to detect obstacles and to provide alternative navigational paths free from obstacles. The system is a maximally passive obstacle detection system that makes selective use of an active sensor; the active detection typically utilizes a laser. The passive sensor suite includes binocular stereo, motion stereo and variable fields-of-view. Optical flow computations involve extraction, derotation and matching of interest points from sequential frames of imagery, for range interpolation of the sensed scene, which in turn provides obstacle information for purposes of safe navigation.
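Derotation is the step that makes the inertial data useful here: the rotation-induced part of the optical flow is fully predicted by the angular rates and carries no range information, so subtracting it leaves a translational flow whose magnitude encodes distance. The sketch below uses the standard small-motion pinhole flow equations with invented numbers; the patent's exact formulation may differ.

```python
def rotational_flow(u, v, wx, wy, wz, f):
    """Image flow induced purely by camera rotation (pinhole model,
    standard small-motion equations; rates in rad per frame interval)."""
    du = (u * v / f) * wx - (f + u**2 / f) * wy + v * wz
    dv = (f + v**2 / f) * wx - (u * v / f) * wy - u * wz
    return du, dv

def derotate(flow_u, flow_v, u, v, rates, f=500.0):
    """Subtract the inertially-predicted rotational flow from the measured
    flow, leaving the translational component used for obstacle ranging.
    A sketch of the idea, not the patent's exact formulation."""
    wx, wy, wz = rates
    ru, rv = rotational_flow(u, v, wx, wy, wz, f)
    return flow_u - ru, flow_v - rv

# Hypothetical feature at (u, v) with measured flow and gyro rates.
tu, tv = derotate(flow_u=3.2, flow_v=-1.1, u=100.0, v=-50.0,
                  rates=(0.01, 0.02, 0.0))
print(f"translational flow: ({tu:.2f}, {tv:.2f}) px/frame")
```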
Multi-spacecraft observations of recurrent {sup 3}He-rich solar energetic particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bučík, R.; Innes, D. E.; Mall, U.
2014-05-01
We study the origin of ³He-rich solar energetic particles (<1 MeV nucleon⁻¹) that are observed consecutively on the STEREO-B, Advanced Composition Explorer (ACE), and STEREO-A spacecraft when they are separated in heliolongitude by more than 90°. The ³He-rich period on STEREO-B and STEREO-A commences on 2011 July 1 and 2011 July 16, respectively. The ACE ³He-rich period consists of two sub-events starting on 2011 July 7 and 2011 July 9. We associate the STEREO-B July 1 and ACE July 7 ³He-rich events with the same sizeable active region (AR) producing X-ray flares accompanied by prompt electron events, when it was near the west solar limb as seen from the respective spacecraft. The ACE July 9 and STEREO-A July 16 events were dispersionless with enormous ³He enrichment, lacking solar energetic electrons and occurring in corotating interaction regions. We associate these events with a small, recently emerged AR near the border of a low-latitude coronal hole that produced numerous jet-like emissions temporally correlated with type III radio bursts. For the first time we present observations of (1) solar regions with long-lasting conditions for ³He acceleration and (2) solar energetic ³He that is temporarily confined/re-accelerated in interplanetary space.
García-Lázaro, Santiago; Ferrer-Blasco, Teresa; Madrid-Costa, David; Albarrán-Diego, César; Montés-Micó, Robert
2015-01-01
To assess and compare the effects of four simultaneous-image multifocal contact lenses (SIMCLs), and of distance-vision-only contact lenses, on visual performance in early presbyopes under dim conditions, including the effects of induced glare. In this double-masked crossover study, 28 presbyopic subjects aged 40 to 46 years were included. All participants were fitted with the four different SIMCLs (Air Optix Aqua Multifocal [AOAM; Alcon], PureVision Multifocal [PM; Bausch & Lomb], Acuvue Oasys for Presbyopia [AOP; Johnson & Johnson Vision], and Biofinity Multifocal [BM; CooperVision]) and with monofocal contact lenses (Air Optix Aqua, Alcon). After 1 month of daily contact lens wear, each subject's binocular distance visual acuity (BDVA) and binocular distance contrast sensitivity (BDCS) were measured using the Functional Visual Analyzer (Stereo Optical Co., Inc.) under mesopic conditions (3 cd/m²), both with no glare and under two levels of induced glare: 1.0 lux (glare 1) and 28 lux (glare 2). Among the SIMCLs, in terms of BDVA, AOAM and PM outperformed BM and AOP. All contact lenses performed best in the condition without glare, followed by glare 1, with the worst results obtained under glare 2. Binocular distance contrast sensitivity revealed statistically significant differences at 12 cycles per degree (cpd). Among the SIMCLs, post hoc multiple comparison testing revealed that AOAM and PM provided the best BDCS at the three luminance levels. For both BDVA and BDCS at 12 cpd, monofocal contact lenses outperformed all SIMCLs under all lighting conditions. Air Optix Aqua Multifocal and PM provided better visual performance than BM and AOP for distance vision with low addition and under dim conditions, but all four provided worse performance than monofocal contact lenses.
Gogate, Parikshit M; Sahasrabudhe, Mohini; Shah, Mitali; Patil, Shailbala; Kulkarni, Anil N; Trivedi, Rupal; Bhasa, Divya; Tamboli, Rahin; Mane, Rekha
2014-02-01
To study the long-term outcome of bilateral congenital and developmental cataract surgery. 258 operated eyes of 129 children with pediatric cataract were included. Children who underwent pediatric cataract surgery in 2004-8 were traced and examined prospectively in 2010-11. Demographic and clinical factors were noted from retrospective chart readings. All children underwent visual acuity estimation and comprehensive ocular examination in a standardized manner. L. V. Prasad Child Vision Function (LVP-CVF) scores were noted for before and after surgery. Statistical analysis was done with SPSS version 16, including multivariate analysis. Children were aged 9.1 years (std dev 4.6, range 7 weeks to 15 years) at the time of surgery; 74/129 (57.4%) were boys. The average duration of follow-up was 4.4 years (std dev 1.6, range 3-8 years). 177 (68.6%) eyes had vision <3/60 before surgery, while 109 (42.2%) had best corrected visual acuity (BCVA) >6/18 and 157 (60.9%) had BCVA >6/60 3-8 years after surgery. 48 (37.2%) had binocular stereoacuity <480 sec of arc by the TNO test. Visual outcome depended on type of cataract (P = 0.004), type of cataract surgery (P < 0.001), type of intra-ocular lens (P = 0.05), age at surgery (P = 0.004), absence of post-operative uveitis (P = 0.01) and pre-operative vision (P < 0.001), but did not depend on the delay between diagnosis and surgery (P = 0.612). There was a statistically significant improvement for all 20 questions of the LVP-CVF scale (P < 0.001). Pediatric cataract surgery improved the children's visual acuity, stereoacuity and vision function. Eyes with developmental cataract, surgery by phacoemulsification, older children and those with better pre-operative vision had better long-term outcomes.
Stereopsis testing without polarized glasses: a comparison study on five new stereoacuity tests.
Hatch, S W; Richman, J E
1994-09-01
Stereopsis testing is commonly used to assess the presence and level of binocular vision. A new series of stereopsis tests requiring no polarized goggles is available in the form of the Titmus Stereo Test, the Stereo Reindeer Test, the Random Dot Butterfly, the Random Dot Figures, and the Random E, Circle, Square. These polarized-free tests employ a special prismatic printing process creating a panagraphic presentation, i.e., a separate image is presented to each eye without the need for polarization. The purpose of this study was to compare the polarized-free stereo tests with their traditional polarized counterparts. Thirty-four subjects, including several persons with strabismus, ages 10-35 years, were each tested with the polarized and polarized-free versions of the Titmus, Reindeer, Butterfly, and Figures; twenty-nine of these subjects were tested with the Random Dot E. Half the subjects were tested first with the polarized-free tests and half were tested first with the polarized tests. Tests were performed according to manufacturer instructions by the same examiner in clinical settings. The results (matched-pair rank correlation coefficients) indicate that the polarized-free tests were highly correlated (r = 0.997, r = 0.998, r = 0.997, r = 1.00, and r = 1.00, respectively) with the polarized comparison tests. No significant difference (Wilcoxon signed-rank) in stereopsis level was obtained between the two versions of the tests. We conclude that these five polarized-free tests were just as valid in measuring the subjects' stereopsis as their traditional polarized versions. The use of goggle-free testing has potential clinical advantages, e.g., testing of young children who will not wear the glasses, or improved observation of ocular alignment during stereopsis testing.
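Both statistics named above are available in scipy, so the analysis is easy to reproduce in outline: a rank correlation between paired thresholds from the two test versions, and a Wilcoxon signed-rank test for a systematic difference. The paired stereoacuity values below are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

# Hypothetical paired stereoacuity thresholds (arcsec) from the polarized
# and polarized-free versions of one test; illustrative values only.
polarized      = np.array([40, 50, 60, 100, 50, 40, 200, 70, 50, 800])
polarized_free = np.array([40, 60, 50, 100, 60, 40, 140, 70, 40, 800])

rho, p_rho = spearmanr(polarized, polarized_free)   # rank correlation
stat, p_w = wilcoxon(polarized, polarized_free)     # matched-pair test
print(f"rank correlation r = {rho:.3f} (p = {p_rho:.3g})")
print(f"Wilcoxon signed-rank p = {p_w:.3g}")
```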
NASA Astrophysics Data System (ADS)
Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul
2012-06-01
Compared to open surgery, minimally invasive surgery offers reduced trauma and faster recovery. However, the lack of a direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D panoramas from endoscopic video streams, providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D environments from mono- and stereoscopic endoscopy. The resulting 3D reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.
A helmet mounted display to adapt the telerobotic environment to human vision
NASA Technical Reports Server (NTRS)
Tharp, Gregory; Liu, Andrew; Yamashita, Hitomi; Stark, Lawrence
1990-01-01
A Helmet Mounted Display system has been developed. It provides the capability to display stereo images with the viewpoint tied to the subject's head orientation. This type of display might be useful in a telerobotic environment, provided the correct operating parameters are known. The effects of update frequency were tested using a 3D tracking task. The effects of blur were tested using both tracking and pick-and-place tasks. For both, researchers found that operator performance can be degraded if the correct parameters are not used. Researchers are also using the display to explore the use of head movements as part of gaze as subjects search their visual field for target objects.
Calibration of a dual-PTZ camera system for stereo vision
NASA Astrophysics Data System (ADS)
Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng
2010-08-01
In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes and at the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into a cost function that is minimized by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects is from 0.9 to 1.1 meters.
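In the same spirit, the sketch below refines a camera's pan/tilt angles by minimizing the squared misalignment between predicted and observed corner positions with scipy's Nelder-Mead implementation. The toy pinhole projection, the synthetic corners, and the two-parameter state are all assumptions standing in for the paper's six-coordinate-system model.

```python
import numpy as np
from scipy.optimize import minimize

def project(points, pan, tilt, f=800.0):
    """Project 3D points through a camera rotated by pan/tilt (radians).
    A toy pinhole model standing in for the paper's full formulation."""
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    R = np.array([[cp, 0, sp],                      # R = Rx(tilt) @ Ry(pan)
                  [sp * st, ct, -cp * st],
                  [-sp * ct, st, cp * ct]])
    cam = points @ R.T
    return f * cam[:, :2] / cam[:, 2:3]

# Synthetic calibration corners and their images under the "true" angles.
rng = np.random.default_rng(3)
corners = rng.uniform(-0.5, 0.5, (20, 3)) + np.array([0, 0, 3.0])
observed = project(corners, pan=0.05, tilt=-0.02) + rng.normal(0, 0.3, (20, 2))

def cost(params):
    # Sum of squared misalignments between predicted and observed corners.
    return np.sum((project(corners, *params) - observed) ** 2)

res = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead")
print(f"recovered pan/tilt: {np.degrees(res.x).round(3)} deg")
```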
DIAC object recognition system
NASA Astrophysics Data System (ADS)
Buurman, Johannes
1992-03-01
This paper describes the object recognition system used in an intelligent robot cell. It is used to recognize parts as they enter the cell and to estimate their position and orientation. The parts are mostly metal and consist of polyhedral and cylindrical shapes. The system uses feature-based stereo vision to acquire a wireframe of the observed part. Features are defined as straight lines and ellipses, which lead to a wireframe of straight lines and circular arcs (the latter using a new algorithm). This wireframe is compared to a number of wireframe models obtained from the CAD database. Experimental results show that image processing hardware and parallelization may add considerably to the speed of the system.
SAVA 3: A testbed for integration and control of visual processes
NASA Technical Reports Server (NTRS)
Crowley, James L.; Christensen, Henrik
1994-01-01
The development of an experimental test-bed to investigate the integration and control of perception in a continuously operating vision system is described. The test-bed integrates a 12-axis robotic stereo camera head mounted on a mobile robot, dedicated computer boards for real-time image acquisition and processing, and a distributed system for image description. The architecture was designed to: (1) be continuously operating; (2) integrate software contributions from geographically dispersed laboratories; (3) integrate description of the environment with 2D measurements, 3D models, and recognition of objects; (4) be capable of supporting diverse experiments in gaze control, visual servoing, navigation, and object surveillance; and (5) be dynamically reconfigurable.
NASA Astrophysics Data System (ADS)
Harrison, R. A.; Davies, J. A.; Barnes, D.; Byrne, J. P.; Perry, C. H.; Bothmer, V.; Eastwood, J. P.; Gallagher, P. T.; Kilpua, E. K. J.; Möstl, C.; Rodriguez, L.; Rouillard, A. P.; Odstrčil, D.
2018-05-01
We present a statistical analysis of coronal mass ejections (CMEs) imaged by the Heliospheric Imager (HI) instruments on board NASA's twin-spacecraft STEREO mission between April 2007 and August 2017 for STEREO-A and between April 2007 and September 2014 for STEREO-B. The analysis exploits a catalogue that was generated within the FP7 HELCATS project. Here, we focus on the observational characteristics of CMEs imaged in the heliosphere by the inner (HI-1) cameras, while subsequent papers will present analyses of CME propagation through the entire HI fields of view. More specifically, in this paper we present distributions of the basic observational parameters - namely occurrence frequency, central position angle (PA) and PA span - derived from nearly 2000 detections of CMEs in the heliosphere by HI-1 on STEREO-A or STEREO-B from the minimum between Solar Cycles 23 and 24 to the maximum of Cycle 24; STEREO-A analysis includes a further 158 CME detections from the descending phase of Cycle 24, by which time communication with STEREO-B had been lost. We compare heliospheric CME characteristics with properties of CMEs observed at coronal altitudes, and with sunspot number. As expected, heliospheric CME rates correlate with sunspot number, and are not inconsistent with coronal rates once instrumental factors and differences in cataloguing philosophy are considered. As well as being more abundant, heliospheric CMEs, like their coronal counterparts, tend to be wider during solar maximum. Our results confirm previous coronagraph analyses suggesting that CME launch sites do not simply migrate to higher latitudes with increasing solar activity. At solar minimum, CMEs tend to be launched from equatorial latitudes, while at maximum, CMEs appear to be launched over a much wider latitude range; this has implications for understanding the CME/solar source association. Our analysis provides some supporting evidence for the systematic dragging of CMEs to lower latitude as they propagate outwards.
STEREO-IMPACT Education and Public Outreach: Sharing STEREO Science
NASA Astrophysics Data System (ADS)
Craig, N.; Peticolas, L. M.; Mendez, B. J.
2005-12-01
The Solar TErrestrial RElations Observatory (STEREO) is scheduled for launch in Spring 2006. STEREO will study the Sun with two spacecraft in orbit around it and on either side of Earth. The primary science goal is to understand the nature and consequences of Coronal Mass Ejections (CMEs). Despite their importance, scientists don't fully understand the origin and evolution of CMEs, nor their structure or extent in interplanetary space. STEREO's unique 3-D images of the structure of CMEs will enable scientists to determine their fundamental nature and origin. We will discuss the Education and Public Outreach (E/PO) program for the In-situ Measurement of Particles And CME Transients (IMPACT) suite of instruments aboard the two spacecraft and give examples of upcoming activities, including NASA's Sun-Earth day events, which are scheduled to coincide with a total solar eclipse in March. This event offers a good opportunity to engage the public in STEREO science, because an eclipse allows one to see the solar corona from where CMEs erupt. STEREO's connection to space weather lends itself to close partnerships with the Sun-Earth Connection Education Forum (SECEF), The Exploratorium, and UC Berkeley's Center for New Music and Audio Technologies to develop informal science programs for science centers, museum visitors, and the public in general. We will also discuss our teacher workshops locally in California and also at annual conferences such as those of the National Science Teachers Association. Such workshops often focus on magnetism and its connection to CMEs and Earth's magnetic field, leading to the questions STEREO scientists hope to answer. The importance of partnerships and coordination in working on an instrument E/PO program that is part of a bigger NASA mission with many instrument suites and many PIs will be emphasized. The Education and Outreach Program is funded by NASA's SMD.
StereoGene: rapid estimation of genome-wide correlation of continuous or interval feature data.
Stavrovskaya, Elena D; Niranjan, Tejasvi; Fertig, Elana J; Wheelan, Sarah J; Favorov, Alexander V; Mironov, Andrey A
2017-10-15
Genomics features with similar genome-wide distributions are generally hypothesized to be functionally related, for example, colocalization of histones and transcription start sites indicates chromatin regulation of transcription factor activity. Therefore, statistical algorithms to perform spatial, genome-wide correlation among genomic features are required. Here, we propose a method, StereoGene, that rapidly estimates genome-wide correlation among pairs of genomic features. These features may represent high-throughput data mapped to a reference genome or sets of genomic annotations in that reference genome. StereoGene enables correlation of continuous data directly, avoiding data binarization and the subsequent data loss. Correlations are computed among neighboring genomic positions using kernel correlation. Representing the correlation as a function of the genome position, StereoGene outputs the local correlation track as part of the analysis. StereoGene also accounts for confounders such as input DNA by partial correlation. We apply our method to numerous comparisons of ChIP-Seq datasets from the Human Epigenome Atlas and FANTOM CAGE to demonstrate its wide applicability. We observe changes in the correlation between epigenomic features across developmental trajectories of several tissue types consistent with known biology and find a novel spatial correlation of CAGE clusters with donor splice sites and with poly(A) sites. These analyses provide examples of the broad applicability of StereoGene for regulatory genomics. The StereoGene C++ source code, program documentation, Galaxy integration scripts and examples are available from the project homepage http://stereogene.bioinf.fbb.msu.ru/. favorov@sensi.org. Supplementary data are available at Bioinformatics online.
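The core idea of smoothing two continuous tracks with a kernel and correlating them can be sketched in a few lines. The snippet below is a hypothetical, simplified re-implementation of kernel correlation in Python (StereoGene itself is C++); the parameter values, function names, and synthetic tracks are illustrative only.

```python
import numpy as np

def kernel_correlation(track_a, track_b, sigma=1000, bin_size=100):
    """Correlate two continuous genomic tracks after Gaussian kernel
    smoothing, returning a per-position local correlation track plus a
    genome-wide summary. Not StereoGene itself, just the idea."""
    half = int(3 * sigma / bin_size)
    x = np.arange(-half, half + 1) * bin_size
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    a = np.convolve(track_a - track_a.mean(), kernel, mode='same')
    b = np.convolve(track_b - track_b.mean(), kernel, mode='same')
    local = a * b / (np.std(a) * np.std(b) + 1e-12)  # local correlation track
    return local, local.mean()                       # track + global score

# Synthetic example: two noisy tracks sharing a common component.
rng = np.random.default_rng(0)
common = rng.normal(size=50000)
local, genome_wide = kernel_correlation(common + rng.normal(size=50000),
                                        common + rng.normal(size=50000))
print(genome_wide)
```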
Critical infrastructure monitoring using UAV imagery
NASA Astrophysics Data System (ADS)
Maltezos, Evangelos; Skitsas, Michael; Charalambous, Elisavet; Koutras, Nikolaos; Bliziotis, Dimitris; Themistocleous, Kyriacos
2016-08-01
The constant technological evolution in Computer Vision has enabled the development of new techniques which, in conjunction with the use of Unmanned Aerial Vehicles (UAVs), may extract high-quality photogrammetric products for several applications. Dense Image Matching (DIM) is a Computer Vision technique that can generate a dense 3D point cloud of an area or object. The use of UAV systems and DIM techniques is not only a flexible and attractive solution for producing accurate and high-quality photogrammetric results but also a major contribution to cost effectiveness. In this context, this study aims to highlight the benefits of the use of UAVs in critical infrastructure monitoring applying DIM. A Multi-View Stereo (MVS) approach using multiple images (RGB digital aerial and oblique images), to fully cover the area of interest, is implemented. The application area is an Olympic venue in Attica, Greece, covering an area of 400 acres. The results of our study indicate that the UAV+DIM approach responds very well to the increasingly greater demands for accurate and cost-effective applications, providing a 3D point cloud and orthomosaic.
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping (SLAM). Yet, the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions
Musleh, Basam; García, Fernando; Otamendi, Javier; Armingol, José Mª; de la Escalera, Arturo
2010-01-01
The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle. PMID:22163639
Experience of the ARGO autonomous vehicle
NASA Astrophysics Data System (ADS)
Bertozzi, Massimo; Broggi, Alberto; Conte, Gianni; Fascioli, Alessandra
1998-07-01
This paper presents and discusses the first results obtained by the GOLD (Generic Obstacle and Lane Detection) system as an automatic driver of ARGO. ARGO is a Lancia Thema passenger car equipped with a vision-based system that extracts road and environmental information from the acquired scene. By means of stereo vision, obstacles on the road are detected and localized, while the processing of a single monocular image allows the road geometry in front of the vehicle to be extracted. The generality of the underlying approach allows generic obstacles to be detected (without constraints on shape, color, or symmetry) and lane markings to be detected even in darkness and in strong shadow conditions. The hardware system consists of a 200 MHz Pentium PC with MMX technology and a frame-grabber board able to acquire 3 b/w images simultaneously; the result of the processing (position of obstacles and geometry of the road) is used to drive an actuator on the steering wheel, while debug information is presented to the user on an on-board monitor and a led-based control panel.
Blur spot limitations in distal endoscope sensors
NASA Astrophysics Data System (ADS)
Yaron, Avi; Shechterman, Mark; Horesh, Nadav
2006-02-01
In years past, the picture quality of electronic video systems was limited by the image sensor. In the present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for the blur phenomenon, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated by an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual chip stereoscopic camera with low to medium resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single chip stereo sensors is improved tolerance to electronic signal noise.
Stereoacuity versus fixation disparity as indicators for vergence accuracy under prismatic stress.
Kromeier, Miriam; Schmitt, Christina; Bach, Michael; Kommerell, Guntram
2003-01-01
Fixation disparity has been widely used as an indicator for vergence accuracy under prismatic stress. However, the targets used for measuring fixation disparity contain artificial features in that the fusional contours are thinned out. We considered that stereoacuity might be a preferable indicator of vergence accuracy, as stereo targets represent natural viewing conditions. We measured fixation disparity with a computer adaptation of Ogle's test and stereoacuity with the automatic Freiburg Stereoacuity Test. Eight subjects were examined under increasing base-in and base-out prisms. The response of fixation disparity to prismatic stress revealed the curve types described by Ogle and Crone. All eight subjects reached a stereoscopic threshold below 10 arcsec. In seven subjects the stereoscopic threshold increased before double vision occurred. Our data suggest that stereoacuity is suitable to assess the range of binocular vision under prismatic stress. As stereoacuity bears the advantage over fixation disparity in that it can be measured without introducing artificial viewing conditions, we suggest exploring whether stereoacuity under prismatic stress would be more meaningful in the work-up of asthenopic patients than is fixation disparity.
Dense depth maps from correspondences derived from perceived motion
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2017-01-01
Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
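A minimal sketch of the flow-based matching idea follows, assuming OpenCV's Farneback dense flow as the flow estimator (the abstract does not specify this choice): compute a dense flow field for each sequence, then compare the motion fields rather than the raw intensities, which is what makes the approach usable across modalities.

```python
import cv2
import numpy as np

def flow_field(prev_frame, next_frame):
    # Dense optical flow between consecutive frames of one sequence.
    gray0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray1 = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(gray0, gray1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def flow_similarity(flow_a, flow_b):
    # Per-pixel cosine similarity of the two motion fields; high values
    # suggest the same scene structure moving, independent of the
    # modality-specific texture that defeats intensity-based matching.
    dot = (flow_a * flow_b).sum(axis=2)
    norm = np.linalg.norm(flow_a, axis=2) * np.linalg.norm(flow_b, axis=2)
    return dot / (norm + 1e-9)
```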
Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation
Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin
2014-01-01
Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
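The scale-ambiguity fix at the heart of such a system can be illustrated in a few lines. This is a hedged sketch of the general idea, not the authors' pipeline; the names `vo_depth_at_laser` and `corrected_trajectory`, and the synthetic trajectory, are assumptions for the example.

```python
import numpy as np

def corrected_trajectory(vo_positions, vo_depth_at_laser, laser_distance):
    """vo_positions: Nx3 camera positions in the VO's arbitrary scale.
    vo_depth_at_laser: VO-estimated depth of the laser-spot feature.
    laser_distance: metric distance measured by the laser distance meter."""
    scale = laser_distance / vo_depth_at_laser   # metres per VO unit
    return np.asarray(vo_positions) * scale      # globally rescaled path

traj = np.cumsum(np.random.rand(100, 3) * 0.01, axis=0)  # fake VO output
print(corrected_trajectory(traj, vo_depth_at_laser=0.37,
                           laser_distance=1.85)[-1])
```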
Approximate labeling via graph cuts based on linear programming.
Komodakis, Nikos; Tziritas, Georgios
2007-08-01
A new framework is presented for both understanding and developing graph-cut-based combinatorial algorithms suitable for the approximate optimization of a very wide class of Markov Random Fields (MRFs) that are frequently encountered in computer vision. The proposed framework utilizes tools from the duality theory of linear programming in order to provide an alternative and more general view of state-of-the-art techniques like the α-expansion algorithm, which is included merely as a special case. Moreover, contrary to α-expansion, the derived algorithms generate solutions with guaranteed optimality properties for a much wider class of problems, for example, even for MRFs with nonmetric potentials. In addition, they are capable of providing per-instance suboptimality bounds in all occasions, including discrete MRFs with an arbitrary potential function. These bounds prove to be very tight in practice (that is, very close to 1), which means that the resulting solutions are almost optimal. Our algorithms' effectiveness is demonstrated by presenting experimental results on a variety of low-level vision tasks, such as stereo matching, image restoration, image completion, and optical flow estimation, as well as on synthetic problems.
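For readers unfamiliar with the energy these methods optimize, the sketch below evaluates a standard pairwise MRF energy for a disparity labeling, using a truncated-linear smoothness term; the optimizers themselves (α-expansion and the paper's LP-based generalizations) are too involved for a short snippet, so only the objective is shown, with illustrative parameter choices.

```python
import numpy as np

def mrf_energy(labels, unary, lam=1.0):
    """E(f) = sum_p D_p(f_p) + lam * sum_{(p,q)} V(f_p, f_q).
    labels: HxW integer disparity map; unary: HxWxL data costs.
    V is truncated linear here (a metric); nonmetric potentials are the
    case the paper's LP-based algorithms handle beyond alpha-expansion."""
    h, w = labels.shape
    data = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    smooth = np.minimum(np.abs(np.diff(labels, axis=0)), 2).sum() \
           + np.minimum(np.abs(np.diff(labels, axis=1)), 2).sum()
    return data + lam * smooth

# Toy usage: random costs over 16 disparity labels.
unary = np.random.rand(48, 64, 16)
labels = unary.argmin(axis=2)          # unary-only labeling as a baseline
print(mrf_energy(labels, unary))
```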
Optimized stereo matching in binocular three-dimensional measurement system using structured light.
Liu, Kun; Zhou, Changhe; Wei, Shengbin; Wang, Shaoqing; Fan, Xin; Ma, Jianyong
2014-09-10
In this paper, we develop an optimized stereo-matching method used in an active binocular three-dimensional measurement system. A traditional dense stereo-matching algorithm is time consuming due to a long search range and the high complexity of a similarity evaluation. We project a binary fringe pattern in combination with a series of N binary band limited patterns. In order to prune the search range, we execute an initial matching before exhaustive matching and evaluate a similarity measure using logical comparison instead of a complicated floating-point operation. Finally, an accurate point cloud can be obtained by triangulation methods and subpixel interpolation. The experiment results verify the computational efficiency and matching accuracy of the method.
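The "logical comparison instead of a floating-point operation" step can be illustrated as bit-code matching: pack each pixel's sequence of N binary pattern observations into an integer and score candidate correspondences by Hamming distance. A hypothetical sketch whose details differ from the paper:

```python
import numpy as np

def pack_codes(binary_stack):
    """binary_stack: N x H x W array of 0/1 pattern observations; each
    pixel's N observations become one integer code."""
    n = binary_stack.shape[0]
    weights = (1 << np.arange(n))[:, None, None]      # 2^k bit weights
    return (binary_stack.astype(np.int64) * weights).sum(axis=0)

def hamming(a, b):
    # Popcount of XOR: number of differing bits between two codes.
    return np.vectorize(lambda v: bin(v).count('1'))(np.bitwise_xor(a, b))

left = pack_codes(np.random.randint(0, 2, (10, 4, 6)))
right = pack_codes(np.random.randint(0, 2, (10, 4, 6)))
print(hamming(left, right))   # low values = likely correspondences
```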
Interactions between chromatic- and luminance-contrast-sensitive stereopsis mechanisms.
Simmons, David R; Kingdom, Frederick A A
2002-06-01
It is well known that chromatic information can assist in solving the stereo correspondence problem. It has also been suggested that there are two independent first-order stereopsis mechanisms, one sensitive to chromatic contrast and the other sensitive to luminance contrast (Vision Research 37 (1997) 1271). Could the effect of chromatic information on stereo correspondence be subserved by interactions between these mechanisms? To address this question, disparity thresholds (1/stereoacuity) were measured using 0.5 cpd Gabor patches. The stimuli possessed different relative amounts of chromatic and luminance contrast which could be correlated or anti-correlated between the eyes. Stereoscopic performance with these compound stimuli was compared to that with purely isoluminant and isochromatic stimuli at different contrasts. It was found that anti-correlated chromatic contrast severely disrupted stereopsis with achromatic stimuli and that anti-correlated luminance contrast severely disrupted stereopsis with chromatic stimuli. Less dramatic, but still significant, was the improvement in stereoacuity obtained using correlated colour and luminance contrast. These data are consistent with there being positive and negative interactions between chromatic and achromatic stereopsis mechanisms that take place after the initial encoding of disparity information, but before the extraction of stereoscopic depth. These interactions can be modelled satisfactorily assuming probability summation of depth sign information between independent mechanisms.
An improved three-dimension reconstruction method based on guided filter and Delaunay
NASA Astrophysics Data System (ADS)
Liu, Yilin; Su, Xiu; Liang, Haitao; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong
2018-01-01
Binocular stereo vision is becoming a research hotspot in the area of image processing. Based on the traditional adaptive-weight stereo matching algorithm, we construct the cost volume by averaging the AD (Absolute Difference) of the RGB color channels and adding the x-derivative of the grayscale image. We then use a guided filter in the cost aggregation step and a weighted median filter for post-processing to address the edge problem. In order to get the location in real space, we combine the depth information with the camera calibration to project each pixel of the 2D image into a 3D coordinate matrix. We add the concept of projection to the region-growing algorithm for surface reconstruction: all points are projected onto a 2D plane along the normals of the point cloud, triangulated there, and the connection relationships among the points in the 2D plane are mapped back to 3D space. For the triangulation in the 2D plane, we use the Delaunay algorithm because it yields meshes of optimal quality. We configure OpenCV and PCL in Visual Studio for testing, and the experimental results show that the proposed algorithm achieves higher disparity accuracy and reproduces the details of the real mesh model.
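A minimal sketch of the cost-volume construction and guided-filter aggregation described above, assuming OpenCV with the contrib `ximgproc` module; the weighting `alpha`, the truncation value, and the function names are illustrative choices, not the authors' settings.

```python
import cv2
import numpy as np

def cost_volume(left, right, max_disp, alpha=0.5):
    """Per-pixel cost: RGB-averaged absolute difference blended with an
    absolute x-gradient difference, one slice per candidate disparity."""
    h, w, _ = left.shape
    grad_l = cv2.Sobel(cv2.cvtColor(left, cv2.COLOR_BGR2GRAY), cv2.CV_32F, 1, 0)
    grad_r = cv2.Sobel(cv2.cvtColor(right, cv2.COLOR_BGR2GRAY), cv2.CV_32F, 1, 0)
    vol = np.full((max_disp, h, w), 255.0, np.float32)
    for d in range(max_disp):
        ad = np.abs(left[:, d:].astype(np.float32)
                    - right[:, :w - d if d else w].astype(np.float32)).mean(axis=2)
        gd = np.abs(grad_l[:, d:] - grad_r[:, :w - d if d else w])
        vol[d, :, d:] = alpha * ad + (1 - alpha) * gd
    return vol

def aggregate(vol, guide, radius=9, eps=1e-3):
    # Edge-preserving cost aggregation with the guided filter.
    gf = cv2.ximgproc.createGuidedFilter(guide, radius, eps)
    return np.stack([gf.filter(slice_) for slice_ in vol])
    # winner-take-all afterwards: disparity = aggregated.argmin(axis=0)
```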
Remotely Characterizing the Topographic and Thermal Evolution of Kīlauea's Lava Flow Field
NASA Astrophysics Data System (ADS)
Rumpf, M. E.; Vaughan, R. G.; Poland, M. P.
2017-12-01
New technologies in satellite data acquisition and the continuous development of analysis software capabilities are greatly improving the ability of scientists to monitor volcanoes in near-real-time. Satellite-based thermal infrared (TIR) data are used to monitor and analyze new and ongoing volcanic activity by identifying and quantifying surface thermal characteristics and lava flow discharge rates. Improved detector sensitivities provide unprecedented spatial detail in visible to shortwave infrared (VSWIR) satellite imagery. The acquisition of stereo and tri-stereo visible imagery, as well as SAR, by an increasing number of satellite systems enables the creation of digital elevation models (DEMs) at higher temporal frequencies and resolutions than in the past. Free, user-friendly software programs, such as NASA's Ames Stereo Pipeline and Google Earth Engine, ease the accessibility and usability of satellite data to users unfamiliar with traditional analysis techniques. Effective and efficient integration of these technologies can be leveraged for volcano monitoring. Here, we use the active lava flows from the East Rift Zone vents of Kīlauea Volcano, Hawai`i, as a testing ground for developing new techniques in multi-sensor volcano remote sensing. We use DEMs generated from stereo and tri-stereo images captured by the WorldView3 and Pleiades satellite systems to assess topographic changes over time at the active flow fields. Time-series data of lava flow area, thickness, and discharge rate developed from thermal emission measurements collected by ASTER, Landsat 8, and WorldView3 are compared to satellite-detected topographic changes and to ground observations of flow development to identify behavioral patterns and to monitor flow field evolution. We explore methods of combining these visual and TIR data sets collected by multiple satellite systems with a variety of resolutions and repeat times. Our ultimate goal is to develop integrative tools for near-real-time volcano monitoring. In addition, we recommend improvements to future satellite mission capabilities (e.g., repeat times, resolutions) to improve lava flow monitoring techniques.
Duane's retraction syndrome: a retrospective review from Kathmandu, Nepal.
Shrestha, Gauri Shankar; Sharma, Ananda Kumar
2012-01-01
The aim was to study the clinical characteristics of Duane's retraction syndrome (DRS) in Nepalese patients. Medical records from 52 cases of DRS from May 2003 to April 2010 were retrospectively reviewed for age, gender, laterality and clinical characteristics. Forty-one case records (78.8 per cent) that had complete clinical findings were considered for further evaluation. Examination included visual acuity by Snellen chart, refraction, associated horizontal and vertical strabismus in primary gaze, upshoot and downshoot on attempted adduction, binocular vision assessed with the Worth four-dot test on adopted gaze and stereopsis examined with the Titmus stereo test. DRS type I was the most common type, observed in 73.2 per cent of cases, followed by DRS type II (14.6 per cent) and DRS type III (12.2 per cent). It was more common in female patients (58.5 per cent) than male patients (χ² = 4.6, df = 1, p = 0.03). DRS was more common in the left eye (68.3 per cent) than the right eye and was unilateral in 95.1 per cent of subjects. In primary gaze, orthotropia (41.5 per cent) was more common than exotropia (34.1 per cent) and esotropia (24.4 per cent), and vertical strabismus was present in 24.4 per cent of subjects. Upshoot and downshoot on attempted adduction were seen in 14.6 and 9.8 per cent, respectively. Binocular single vision was present in 68.3 per cent of subjects by the Worth four-dot test at near. Stereopsis of 3,000 seconds of arc was present in 9.8 per cent, 100 to 200 seconds of arc in 14.6 per cent and 40 to 60 seconds of arc in 43.9 per cent with the Titmus stereo test. DRS is more common in female patients and the left eye. DRS type I is the most common type.
A strongly goal-directed close-range vision system for spacecraft docking
NASA Technical Reports Server (NTRS)
Boyer, Kim L.; Goddard, Ralph E.
1991-01-01
In this presentation, we will propose a strongly goal-oriented stereo vision system to establish proper docking approach motions for automated rendezvous and capture (AR&C). From an input sequence of stereo video image pairs, the system produces a current best estimate of: contact position; contact vector; contact velocity; and contact orientation. The processing demands imposed by this particular problem and its environment dictate a special case solution; such a system should necessarily be, in some sense, minimalist. By this we mean the system should construct a scene description just sufficiently rich to solve the problem at hand and should do no more processing than is absolutely necessary. In addition, the imaging resolution should be just sufficient. Extracting additional information and constructing higher level scene representations wastes energy and computational resources and injects an unnecessary degree of complexity, increasing the likelihood of malfunction. We therefore take a departure from most prior stereopsis work, including our own, and propose a system based on associative memory. The purpose of the memory is to immediately associate a set of motor commands with a set of input visual patterns in the two cameras. That is, rather than explicitly computing point correspondences and object positions in world coordinates and trying to reason forward from this information to a plan of action, we are trying to capture the essence of reflex behavior through the action of associative memory. The explicit construction of point correspondences and 3D scene descriptions, followed by online velocity and point of impact calculations, is prohibitively expensive from a computational point of view for the problem at hand. Learned patterns on the four image planes, left and right at two discrete but closely spaced instants in time, will be bused directly to infer the spacecraft reaction. This will be a continuing online process as the docking collar approaches.
Modeling of Depth Cue Integration in Manual Control Tasks
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.; Kaiser, Mary K.; Davis, Wendy
2003-01-01
Psychophysical research has demonstrated that human observers utilize a variety of visual cues to form a perception of three-dimensional depth. However, most of these studies have utilized a passive judgement paradigm, and failed to consider depth-cue integration as a dynamic and task-specific process. In the current study, we developed and experimentally validated a model of manual control of depth that examines how two potential cues (stereo disparity and relative size) are utilized in both first- and second-order active depth control tasks. We found that stereo disparity plays the dominant role in determining depth position, while relative size dominates perception of depth velocity. Stereo disparity also plays a reduced role when made less salient (i.e., when viewing distance is increased). Manual control models predict that position information is sufficient for first-order control tasks, while velocity information is required to perform a second-order control task. Thus, the rules for depth-cue integration in active control tasks are dependent on both task demands and cue quality.
NASA Astrophysics Data System (ADS)
Zhang, Hua; Zeng, Luan
2017-11-01
Binocular stereoscopic vision can be used for space-based close-range observation of space targets. To address the problem that a traditional binocular vision system cannot work normally after a disturbance, an online calibration method for a binocular stereo measurement camera with a self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object into the edge of the main optical path so that it is imaged on the same focal plane as the target, which is equivalent to placing a standard reference inside the binocular imaging optical system. When the position of the system and the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane while the physical position of the standard reference object does not change. The camera's external parameters can then be re-calibrated from the visual relationship to the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from the original 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in elevation. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.
Evidence for the need for vision screening of school children in Turkey.
Azizoğlu, Serap; Crewther, Sheila G; Şerefhan, Funda; Barutchu, Ayla; Göker, Sinan; Junghans, Barbara M
2017-12-02
In many countries, access to general health and eye care is related to an individual's socioeconomic status (SES). We aimed to examine the prevalence of oculo-visual disorders in children in Istanbul, Turkey, drawn from schools at SES extremes but geographically nearby. Three school-based vision screenings (presenting distance visual acuity, cover test, eye assessment history, colour vision, gross stereopsis and non-cycloplegic autorefraction) were conducted on 81% of a potential 1014 primary-school children aged 4-10 years from two private (high SES) schools and a nearby government (low SES) school in central Istanbul. Prevalence of refractive errors and school-based differences were analysed using parametric statistics (ANOVA). The remaining oculo-visual aspects were compared using non-parametric tests. Of the 823 children with mean age 6.7 ± 2.2 years, approximately 10% were referred for a full eye examination (8.2% and 16.3% of private/government schools respectively). Vision had not been previously examined in nearly 22% of private school children and 65% of government school children. Of all children, 94.5% were able to accurately identify the 6/9.5 [LogMAR 0.2] line of letters/shapes with each eye and 86.6% the 6/6 line [LogMAR 0], while 7.9% presented wearing spectacles, 3.8% had impaired colour vision, 1.5% had grossly impaired stereo-vision, 1.5% exhibited strabismus, 1.8% were suspected to have amblyopia and 0.5% had reduced acuity of likely organic origin. Of the 804 without strabismus, amblyopia or organic conditions, 6.0% were myopic ≤ -0.50 DS, 0.6% hyperopic ≥ +2.00 DS, 7.7% astigmatic ≥ 1.00 DC and 6.2% anisometropic ≥ 1.00 DS. The results highlight the need for general vision screenings for all children prior to school entry given the varied and different pattern of visual problems associated with lifestyle differences in two populations raised in the same urban locale but drawn from different socioeconomic backgrounds.
NASA Astrophysics Data System (ADS)
Schmitz, Nicole; Jaumann, Ralf; Coates, Andrew; Griffiths, Andrew; Hauber, Ernst; Trauthan, Frank; Paar, Gerhard; Barnes, Dave; Bauer, Arnold; Cousins, Claire
2010-05-01
Geologic context, as a combination of orbital imaging and surface vision, including range, resolution, stereo, and multispectral imaging, is commonly regarded as a basic requirement for remote robotic geology and forms the first tier of any multi-instrument strategy for investigating and eventually understanding the geology of a region from a robotic platform. Missions with objectives beyond a pure geologic survey, e.g. exobiology objectives, require goal-oriented operational procedures, where the iterative process of scientific observation, hypothesis, testing, and synthesis, performed via a sol-by-sol data exchange with a remote robot, is supported by a powerful vision system. Beyond allowing a thorough geological mapping of the surface (soil, rocks and outcrops) in 3D, using wide-angle stereo imagery, such a system needs to be able to provide detailed visual information on targets of interest in high resolution, thereby enabling the selection of science targets and samples for further analysis with a specialized in-situ instrument suite. Surface vision for ESA's upcoming ExoMars rover will come from a dedicated Panoramic Camera System (PanCam). As an integral part of the Pasteur payload package, the PanCam is designed to support the search for evidence of biological processes by obtaining wide-angle multispectral stereoscopic panoramic images and high resolution RGB images from the mast of the rover [1]. The camera system will consist of two identical wide-angle cameras (WACs), which are arranged on a common pan-tilt mechanism, with a fixed stereo base length of 50 cm. The WACs are complemented by a High Resolution Camera (HRC), mounted between the WACs, which allows a magnification of selected targets by a factor of ~8 with respect to the wide-angle optics. The high-resolution images together with the multispectral and stereo capabilities of the camera will be of unprecedented quality for the identification of water-related surface features (such as sedimentary rocks) and form one key to a successful implementation of ESA's multi-level strategy for the ExoMars Reference Surface Mission. A dedicated PanCam Science Implementation Strategy is under development, which connects the PanCam science objectives and needs of the ExoMars Surface Mission with the required investigations, planned measurement approach and sequence, and connected mission requirements. The first step of this strategy is obtaining geological context to enable the decision where to send the rover. PanCam (in combination with Wisdom) will be used to obtain ground truth by a thorough geomorphologic mapping of the ExoMars rover's surroundings in near and far range in the form of (1) RGB or monochromatic full (i.e. 360°) or partial stereo panoramas for morphologic and textural information and stereo ranging, (2) mosaics or single images with partial or full multispectral coverage to assess the mineralogy of surface materials as well as their weathering state and possible past or present alteration processes, and (3) small-scale high-resolution information on targets/features of interest, and distant or inaccessible sites. This general survey phase will lead to the identification of surface features like outcrops, ridges and troughs and the characterization of different rock and surface units based on their morphology, distribution, and spectral and physical properties.
Evidence of water-bearing minerals, water-altered rocks or even water-lain sediments seen in the large-scale wide angle images will then allow for preselecting those targets/features considered relevant for detailed analysis and definition of their geologic context. Detailed characterization and, subsequently, selection of those preselected targets/features for further analysis will then be enabled by color high-resolution imagery, followed by the next tier of contact instruments to enable a decision on whether or not to acquire samples for further analysis. During the following drill/analysis phase, PanCam's High Resolution Camera will characterize the sample in the sample tray and observe the sample discharge into the Core Sample Transfer Mechanism. Key parts of this science strategy have been tested under laboratory conditions in two geology blind tests [2] and during two field test campaigns in Svalbard, using simulated mission conditions, an ExoMars representative Payload (ExoMars and MSL instrument breadboards), and Mars analog settings [3, 4]. The experiences gained are being translated into operational sequences, and, together with the science implementation strategy, form a first version of a PanCam Surface Operations plan. References: [1] Griffiths, A.D. et al. (2006) International Journal of Astrobiology 5 (3): 269-275, doi:10.1017/ S1473550406003387. [2] Pullan, D. et al. (2009) EPSC Abstracts, Vol. 4, EPSC2009-514. [3] Schmitz, N. et al. (2009) Geophysical Research Abstracts, Vol. 11, EGU2009-10621-2. [4] Cousins, C. et al. (2009) EPSC Abstracts, Vol. 4, EPSC2009-813.
NASA Astrophysics Data System (ADS)
Kim, J.; Schumann, G.; Neal, J. C.; Lin, S.
2013-12-01
Earth is the only planet possessing an active hydrological system based on H2O circulation. However, after Mariner 9 discovered fluvial channels on Mars with features similar to Earth's, it became clear that some solid planets and satellites once had water flows or pseudo hydrological systems of other liquids. After liquid water was identified as the agent of ancient martian fluvial activity, the valleys and channels on the martian surface were investigated by a number of remote sensing and in-situ measurements. Among all available data sets, the stereo DTMs and orthoimages from various successful orbital sensors, such as the High Resolution Stereo Camera (HRSC), Context Camera (CTX), and High Resolution Imaging Science Experiment (HiRISE), are the most widely used to trace the origin and consequences of martian hydrological channels. However, geomorphological analysis with stereo DTMs and ortho images over fluvial areas has some limitations, so a quantitative modeling method utilizing DTMs of various spatial resolutions is required. Thus in this study we tested the application of hydraulics analysis with multi-resolution martian DTMs, constructed in line with Kim and Muller's (2009) approach. An advanced LISFLOOD-FP model (Bates et al., 2010), which simulates in-channel dynamic wave behavior by solving 2D shallow water equations without advection, was introduced to conduct a high-accuracy simulation together with 150-1.2 m resolution DTMs over test sites including Athabasca and Bahram Valles. For application to the martian surface, the acceleration of gravity in LISFLOOD-FP was reduced to the martian value of 3.71 m s-2 and the Manning's n value (friction), the only free parameter in the model, was adjusted for martian gravity by scaling it. The approach employing multi-resolution stereo DTMs and LISFLOOD-FP was superior compared with other research cases using a single DTM source for hydraulics analysis. HRSC DTMs, covering 50-150 m resolutions, were used to trace rough routes of water flows over extensive target areas. Refinements through hydraulics simulations with CTX DTMs (~12-18 m resolution) and HiRISE DTMs (~1-4 m resolution) were then conducted, employing the output of the HRSC simulations as the initial conditions. Thus even limited coverage by high and very high resolution stereo DTMs enabled a high-precision hydraulics analysis reconstructing a whole fluvial event. In this manner, useful information for identifying the characteristics of martian fluvial activity, such as water depth along the time line, flow direction, and travel time, was successfully retrieved for each target tributary. Together with these useful outputs of the hydraulics analysis, the local roughness and photogrammetric control of the stereo DTMs appeared to be crucial elements for accurate fluvial simulation. The potential of this study should be further explored for application to other extraterrestrial bodies where fluvial activity once existed, as well as to the major martian channels and valleys.
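The gravity adjustment mentioned above amounts to two small changes to the model setup. One convention sometimes used for the Manning's n rescaling, assumed here since the abstract does not state the exact formula, carries gravity into the friction term as n_planet = n_earth * sqrt(g_earth / g_planet):

```python
import math

# g values in m s^-2; 3.71 is the martian value quoted in the abstract.
G_EARTH, G_MARS = 9.81, 3.71

def mars_manning_n(n_earth):
    # Assumed scaling convention, not taken from the abstract itself.
    return n_earth * math.sqrt(G_EARTH / G_MARS)

print(mars_manning_n(0.0545))  # e.g. a coarse-bed terrestrial n of 0.0545
```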
NASA Astrophysics Data System (ADS)
Sabbatini, Massimo; Collon, Maximilien J.; Visentin, Gianfranco
2008-02-01
The Erasmus Recording Binocular (ERB1) was the first fully digital stereo camera used on the International Space Station. One year after its first utilisation, the results and feedback collected with various audiences have convinced us to continue exploiting the outreach potential of such media, with its unique capability to bring space down to earth, to share the feeling of weightlessness and confinement with the viewers on earth. The production of stereo is progressing quickly but it still poses problems for the distribution of the media. The Erasmus Centre of the European Space Agency has experienced how difficult it is to master the full production and distribution chain of a stereo system. Efforts are also on the way to standardize the satellite broadcasting part of the distribution. A new stereo camera is being built, ERB2, to be launched to the International Space Station (ISS) in September 2008: it shall have 720p resolution, it shall be able to transmit its images to the ground in real-time allowing the production of live programs, and it could possibly be used also outside the ISS, in support of Extra Vehicular Activities of the astronauts. These new features are quite challenging to achieve within the reduced power and mass budget available to space projects, and we hope to inspire more designers to come up with ingenious ideas to build cameras capable of operating in the harsh Low Earth Orbit environment: radiation, temperature, power consumption and thermal design are the challenges to be met. The intent of this paper is to share with the readers the experience collected so far in all aspects of the 3D video production chain and to increase awareness of the unique content that we are collecting: nice stereo images from space can be used by all actors in the stereo arena to gain consensus on this powerful medium. With respect to last year we shall present the progress made in the following areas: a) the satellite broadcasting live of stereo content to D-Cinemas in Europe; b) the design challenges to fly the camera outside the ISS, as opposed to ERB1 which was only meant to be used in the pressurized environment of the ISS; c) on-board stereo viewing on a stereo camera, tackled in ERB1: trade-offs between OLED and LCOS display technologies shall be presented; d) HD-SDI cameras versus USB2 or FireWire; e) the hardware compression ASIC solutions used to tackle the high on-board data rate; f) 3D geometry reconstruction: first attempts at reconstructing a computer model of the interior of the ISS starting from the stereo video available.
Robonaut: A Robotic Astronaut Assistant
NASA Technical Reports Server (NTRS)
Ambrose, Robert O.; Diftler, Myron A.
2001-01-01
NASA's latest anthropomorphic robot, Robonaut, has reached a milestone in its capability. This highly dexterous robot, designed to assist astronauts in space, is now performing complex tasks at the Johnson Space Center that could previously only be carried out by humans. With 43 degrees of freedom, Robonaut is the first humanoid built for space and incorporates technology advances in dexterous hands, modular manipulators, lightweight materials, and telepresence control systems. Robonaut is human size, has a three degree of freedom (DOF) articulated waist, and two, seven DOF arms, giving it an impressive work space for interacting with its environment. Its two, five fingered hands allow manipulation of a wide range of tools. A pan/tilt head with multiple stereo camera systems provides data for both teleoperators and computer vision systems.
FPGA-based real-time phase measuring profilometry algorithm design and implementation
NASA Astrophysics Data System (ADS)
Zhan, Guomin; Tang, Hongwei; Zhong, Kai; Li, Zhongwei; Shi, Yusheng
2016-11-01
Phase measuring profilometry (PMP) has been widely used in many fields, such as Computer Aided Verification (CAV) and Flexible Manufacturing Systems (FMS). High frame-rate (HFR) real-time vision-based feedback control will be a common demand in the near future. However, the instruction time delay in the computer caused by numerous repetitive operations greatly limits the efficiency of data processing. FPGAs have the advantages of a pipelined architecture and parallel execution, which makes them well suited to the PMP algorithm. In this paper, we design a fully pipelined hardware architecture for PMP. The functions of the hardware architecture include rectification, phase calculation, phase shifting, and stereo matching. The experiment verified the performance of this method, and the factors that may influence the computation accuracy were analyzed.
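The phase-calculation stage that such architectures pipeline is the standard N-step phase-shifting formula: with fringe images shifted by 2πn/N, the wrapped phase follows from discrete sine/cosine sums. A software reference implementation in NumPy (illustrative, not the authors' FPGA design):

```python
import numpy as np

def wrapped_phase(frames):
    """frames: N x H x W stack of phase-shifted fringe images.
    Returns the wrapped phase in (-pi, pi] per pixel."""
    n = frames.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n
    num = (frames * np.sin(deltas)[:, None, None]).sum(axis=0)
    den = (frames * np.cos(deltas)[:, None, None]).sum(axis=0)
    return np.arctan2(num, den)

phase = wrapped_phase(np.random.rand(4, 480, 640))  # 4-step example
```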
Feasibility study consisting of a review of contour generation methods from stereograms
NASA Technical Reports Server (NTRS)
Kim, C. J.; Wyant, J. C.
1980-01-01
A review of techniques for obtaining contour information from stereo pairs is given. Photogrammetric principles including a description of stereoscopic vision are presented. The use of conventional contour generation methods, such as the photogrammetric plotting technique, electronic correlator, and digital correlator are described. Coherent optical techniques for contour generation are discussed and compared to the electronic correlator. The optical techniques are divided into two categories: (1) image plane operation and (2) frequency plane operation. The description of image plane correlators is further divided into three categories: (1) image to image correlator, (2) interferometric correlator, and (3) positive-negative transparencies. The frequency plane correlators are divided into two categories: (1) correlation of Fourier transforms, and (2) filtering techniques.
3D environment modeling and location tracking using off-the-shelf components
NASA Astrophysics Data System (ADS)
Luke, Robert H.
2016-05-01
The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low power, and high performance. These sensors include accelerometers, gyroscopes, barometers and, most importantly, cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high performance platforms at low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on stereo vision and inertial navigation to determine its own movement as well as to create a model of the sensed environment.
Vision Examination Protocol for Archery Athletes Along With an Introduction to Sports Vision
Mohammadi, Seyed Farzad; Aghazade Amiri, Mohammad; Naderifar, Homa; Rakhshi, Elham; Vakilian, Banafsheh; Ashrafi, Elham; Behesht-Nejad, Amir-Houshang
2016-01-01
Introduction: Visual skills are one of the main pillars of the intangible faculties of athletes that can influence their performance. A great number of vision tests are used to assess visual skills, and it would be irrational to perform every vision test for every sport. Objectives: The purpose of this protocol article is to present a relatively comprehensive battery of tests and assessments on static and dynamic aspects of sight which seem relevant to sports vision, and to introduce the most useful ones for archery. Materials and Methods: Through an extensive review of the literature, visual skills and respective tests were listed, such as ‘visual acuity’, ‘contrast sensitivity’, ‘stereo-acuity’, ‘ocular alignment’, and ‘eye dominance’. Athletes were assigned to “elite” and “non-elite” categories based on their past performance. Dominance was considered for eye and hand; binocular or monocular aiming was planned to be recorded. Illumination conditions were defined to simulate real archery conditions to the extent possible. The full cycle of examinations and their order for each athlete was sketched (and estimated to take 40 minutes). The protocol was piloted in an eye hospital. Female and male archers aged 18 - 38 years who practiced compound and recurve archery with a history of more than 6 months were included. Conclusions: We managed to select and design a customized examination protocol for archery (a sight-intensive and aiming type of sport), serving skill assessment and research purposes. Our definition of elite and non-elite athletes can help to define sports talent and devise skill development methods as we compare the performance of these two groups. In our pilot, we identified 8 “archery figures” (by hand dominance, eye dominance and binocularity) and highlighted the concept of “congruence” (dominant hand and eye on the same side) in archery performance. PMID:27217923
NASA Technical Reports Server (NTRS)
Kaiser, Michael L.
2008-01-01
The twin STEREO spacecraft, launched in October 2006, are in heliocentric orbits near 1 AU with one spacecraft (Ahead) leading Earth in its orbit around the Sun and the other (Behind) trailing Earth. As viewed from the Sun, the STEREO spacecraft are continually separating from one another at about 45 degrees per year, with Earth bisecting the angle. At present, the spacecraft are a bit more than 45 degrees apart, thus they are each able to view around the limb of the Sun by about 23 degrees, corresponding to about 1.75 days of solar rotation. Both spacecraft contain an identical set of instruments including an extreme ultraviolet imager, two white light coronagraphs, two all-sky imagers, a wide selection of energetic particle detectors, a magnetometer and a radio burst tracker. A snapshot of the real-time data is continually broadcast to NOAA-managed ground stations and this small stream of data is immediately sent to the STEREO Science Center and converted into useful space weather data within 5 minutes of ground receipt. The resulting images, particle, magnetometer and radio astronomy plots are made available online. As time continues into solar cycle 24, the separation angle becomes 90 degrees in early 2009 and 180 degrees in early 2011 as the activity heads toward maximum. By the time of solar maximum, STEREO will provide for the first time a view of the entire Sun with the coronagraphs and extreme ultraviolet instruments. This view will allow us to follow the evolution of active regions continuously and also detect new active regions long before they pose a space weather threat to Earth. The in situ instruments will be able to provide about 7 days advanced notice of co-rotating structures in the solar wind. During this same interval near solar maximum, the wide-angle imagers on STEREO will both be able to view Earth-directed CMEs in their plane-of-sky. When combined with Earth-orbiting assets available at that time, it seems solar cycle 24 will mark a great increase in our ability to understand and predict space weather.
Vision-based Detection of Acoustic Timed Events: a Case Study on Clarinet Note Onsets
NASA Astrophysics Data System (ADS)
Bazzica, A.; van Gemert, J. C.; Liem, C. C. S.; Hanjalic, A.
2017-05-01
Acoustic events often have a visual counterpart. Knowledge of visual information can aid the understanding of complex auditory scenes, even when only a stereo mixdown is available in the audio domain, e.g., identifying which musicians are playing in large musical ensembles. In this paper, we consider a vision-based approach to note onset detection. As a case study we focus on challenging, real-world clarinetist videos and carry out preliminary experiments on a 3D convolutional neural network based on multiple streams and purposely avoiding temporal pooling. We release an audiovisual dataset with 4.5 hours of clarinetist videos together with cleaned annotations which include about 36,000 onsets and the coordinates for a number of salient points and regions of interest. By performing several training trials on our dataset, we learned that the problem is challenging. We found that the CNN model is highly sensitive to the optimization algorithm and hyper-parameters, and that treating the problem as binary classification may prevent the joint optimization of precision and recall. To encourage further research, we publicly share our dataset, annotations and all models and detail which issues we came across during our preliminary experiments.
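To make the "no temporal pooling" design point concrete, the sketch below is a minimal PyTorch module that pools spatially but preserves the time axis, emitting one onset score per frame. All layer sizes and the class name are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class OnsetNet(nn.Module):
    def __init__(self, n_frames=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),      # spatial-only pooling
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((n_frames, 1, 1)),   # time axis kept intact
        )
        self.head = nn.Conv1d(32, 1, kernel_size=1)   # per-frame onset score

    def forward(self, clips):                         # clips: B x 3 x T x H x W
        x = self.features(clips).squeeze(-1).squeeze(-1)  # B x 32 x T
        return torch.sigmoid(self.head(x))                # B x 1 x T

scores = OnsetNet()(torch.rand(2, 3, 16, 64, 64))
print(scores.shape)                                   # torch.Size([2, 1, 16])
```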
Fusion of monocular cues to detect man-made structures in aerial imagery
NASA Technical Reports Server (NTRS)
Shufelt, Jefferey; Mckeown, David M.
1991-01-01
The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is useful in approaching the building extraction problem. Using this paradigm, each extraction technique provides information which can be added or assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of a computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three-dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described, and applied to monocular image and stereo image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.
Zhang, Zhuang; Zhao, Rujin; Liu, Enhai; Yan, Kun; Ma, Yuebo
2018-06-15
This article presents a new sensor fusion method for visual simultaneous localization and mapping (SLAM) through integration of a monocular camera and a 1D laser range finder. Such a fusion method provides scale estimation and drift correction, and it is not limited by volume; for example, a stereo camera is constrained by its baseline, and SLAM with RGBD cameras suffers from a limited depth range, a problem this approach overcomes. We first present the analytical feasibility of estimating the absolute scale through the fusion of 1D distance information and image information. Next, the analytical derivation of the laser-vision fusion is described in detail based on the local dense reconstruction of the image sequences. We also correct the scale drift of the monocular SLAM using the laser distance information, which is independent of the drift error. Finally, application of this approach to both indoor and outdoor scenes is verified on the Technical University of Munich RGBD dataset and self-collected data. We compare the effects of the scale estimation and drift correction of the proposed method with SLAM for a monocular camera and an RGBD camera.
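The core of the scale-estimation step can be sketched in a few lines: monocular SLAM depths are correct only up to an unknown global scale, so ratios against the 1D laser ranges recover it. The snippet below is a hedged illustration (a robust median ratio; the paper's actual derivation is more involved), with made-up variable names.

```python
import numpy as np

# Hedged sketch: monocular SLAM recovers depth only up to an unknown scale
# s, so comparing laser ranges d_laser with the SLAM depths d_slam along the
# laser ray yields s. A median ratio suppresses outliers.
def estimate_scale(d_laser, d_slam):
    d_laser = np.asarray(d_laser, dtype=float)
    d_slam = np.asarray(d_slam, dtype=float)
    valid = (d_slam > 1e-6) & np.isfinite(d_laser)
    return np.median(d_laser[valid] / d_slam[valid])

# Example: true scale 2.5 with noisy laser readings and one spurious return.
rng = np.random.default_rng(0)
d_slam = rng.uniform(0.5, 4.0, 50)
d_laser = 2.5 * d_slam + rng.normal(0, 0.02, 50)
d_laser[10] = 30.0  # outlier
print(f"estimated scale: {estimate_scale(d_laser, d_slam):.3f}")  # ~2.5
```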
Spatial-frequency dependent binocular imbalance in amblyopia
Kwon, MiYoung; Wiecek, Emily; Dakin, Steven C.; Bex, Peter J.
2015-01-01
While amblyopia involves both binocular imbalance and deficits in processing high spatial frequency information, little is known about the spatial-frequency dependence of binocular imbalance. Here we examined binocular imbalance as a function of spatial frequency in amblyopia using a novel computer-based method. Binocular imbalance at four spatial frequencies was measured with a novel dichoptic letter chart in individuals with amblyopia, or normal vision. Our dichoptic letter chart was composed of band-pass filtered letters arranged in a layout similar to the ETDRS acuity chart. A different chart was presented to each eye of the observer via stereo-shutter glasses. The relative contrast of the corresponding letter in each eye was adjusted by a computer staircase to determine a binocular Balance Point at which the observer reports the letter presented to either eye with equal probability. Amblyopes showed pronounced binocular imbalance across all spatial frequencies, with greater imbalance at high compared to low spatial frequencies (an average increase of 19%, p < 0.01). Good test-retest reliability of the method was demonstrated by the Bland-Altman plot. Our findings suggest that spatial-frequency dependent binocular imbalance may be useful for diagnosing amblyopia and as an outcome measure for recovery of binocular vision following therapy. PMID:26603125
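The balance-point measurement lends itself to a compact illustration. Below is a simple 1-up/1-down staircase on the interocular log-contrast ratio run against a simulated observer; this is a sketch of the general idea, not the authors' exact staircase rules.

```python
import math
import random

# Illustrative 1-up/1-down staircase (not the paper's exact procedure) that
# adjusts the interocular log-contrast ratio until the observer reports the
# left- and right-eye letters with equal probability; the converged value
# is the Balance Point.
def simulated_observer(log_ratio, imbalance=0.6):
    """Returns True if the observer reports the left-eye letter."""
    p_left = 1.0 / (1.0 + math.exp(-(log_ratio - imbalance) / 0.2))
    return random.random() < p_left

def run_staircase(trials=80, step=0.1):
    log_ratio, history = 0.0, []
    for _ in range(trials):
        if simulated_observer(log_ratio):
            log_ratio -= step   # left eye winning: reduce its contrast
        else:
            log_ratio += step   # right eye winning: boost its contrast
        history.append(log_ratio)
    return sum(history[-20:]) / 20  # average over the late trials

random.seed(1)
print(f"balance point (log contrast ratio): {run_staircase():.2f}")  # ~0.6
```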
Vision-Based Obstacle Detection in UAV Imaging
NASA Astrophysics Data System (ADS)
Badrloo, S.; Varshosaz, M.
2017-08-01
Detecting and avoiding obstacles is crucial in UAV navigation and control. Most common obstacle detection techniques are currently sensor-based. Small UAVs cannot carry obstacle detection sensors such as radar; therefore, vision-based methods are considered, which can be divided into stereo-based and mono-based techniques. Mono-based methods are classified into two groups: foreground-background separation, and brain-inspired methods. Brain-inspired methods are highly efficient in obstacle detection; hence, this research aims to detect obstacles using brain-inspired techniques, which exploit the apparent enlargement of an obstacle as it is approached. Recent research in this field has concentrated on matching SIFT points, along with the SIFT size-ratio factor and the area-ratio of convex hulls, in two consecutive frames to detect obstacles. That method cannot distinguish between near and far obstacles or obstacles in complex environments, and it is sensitive to wrongly matched points. To solve these problems, this research calculates the dist-ratio of matched points, and each point is then examined to distinguish between far and close obstacles. The results demonstrate the high efficiency of the proposed method in complex environments.
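A hedged sketch of the expansion cue: match SIFT keypoints across consecutive frames and measure how inter-point distances grow. Ratios consistently above 1 suggest an approaching obstacle. The snippet below uses OpenCV and illustrates the dist-ratio idea, not the paper's full pipeline; the file names in the usage comment are hypothetical.

```python
import cv2
import numpy as np

# Illustration of the dist-ratio cue: for matched SIFT keypoints in two
# consecutive frames, compute the ratio of pairwise point distances in the
# current frame to those in the previous frame.
def dist_ratios(frame_prev, frame_curr, max_pairs=200):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(frame_prev, None)
    k2, d2 = sift.detectAndCompute(frame_curr, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:max_pairs]
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    ratios = []
    for i in range(len(p1)):
        for j in range(i + 1, len(p1)):
            d_prev = np.linalg.norm(p1[i] - p1[j])
            d_curr = np.linalg.norm(p2[i] - p2[j])
            if d_prev > 1.0:
                ratios.append(d_curr / d_prev)
    return np.array(ratios)

# Usage (hypothetical file names):
# ratios = dist_ratios(cv2.imread("f0.png", 0), cv2.imread("f1.png", 0))
# an enlarging obstacle is flagged when np.median(ratios) > 1.05
```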
Ageing vision and falls: a review.
Saftari, Liana Nafisa; Kwon, Oh-Sang
2018-04-23
Falls are the leading cause of accidental injury and death among older adults. One in three adults over the age of 65 years falls annually. As the size of the elderly population increases, falls become a major concern for public health and there is a pressing need to understand the causes of falls thoroughly. While it is well documented that visual functions such as visual acuity, contrast sensitivity, and stereo acuity are correlated with fall risks, little attention has been paid to the relationship between falls and the ability of the visual system to perceive motion in the environment. The omission of visual motion perception in the literature is a critical gap because it is an essential function in maintaining balance. In the present article, we first review existing studies regarding visual risk factors for falls and the effect of ageing vision on falls. We then present a group of phenomena such as vection and sensory reweighting that provide information on how visual motion signals are used to maintain balance. We suggest that the current list of visual risk factors for falls should be elaborated by taking into account the relationship between visual motion perception and balance control.
Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes
Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel
2015-01-01
Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553
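The 2.5D input (color plus disparity) that the extended DPM consumes can be approximated with off-the-shelf stereo matching. A minimal sketch using OpenCV's semi-global matcher follows; the parameters are illustrative and not tuned for KITTI.

```python
import cv2
import numpy as np

# Minimal sketch of building a color + disparity feature stack with OpenCV's
# semi-global matcher; parameters are illustrative, not tuned for KITTI.
def color_plus_disparity(left_bgr, right_bgr):
    gl = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    gr = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=5, P1=8 * 5 * 5, P2=32 * 5 * 5)
    disp = sgbm.compute(gl, gr).astype(np.float32) / 16.0  # fixed-point -> px
    disp = np.clip(disp, 0, None)
    disp8 = (255 * disp / max(disp.max(), 1.0)).astype(np.uint8)
    return np.dstack([left_bgr, disp8])  # H x W x 4 feature stack

# Usage (hypothetical KITTI frame pair):
# left = cv2.imread("000000_left.png"); right = cv2.imread("000000_right.png")
# features = color_plus_disparity(left, right)
```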
NASA Astrophysics Data System (ADS)
Beltrame, Francesco; Diaspro, Alberto; Fato, Marco; Martin, I.; Ramoino, Paola; Sobel, Irwin E.
1995-03-01
Confocal microscopy systems can be linked to 3D data oriented devices for the interactive navigation of the operator through a 3D object space. Sometimes, such environments are named `virtual reality' or `augmented reality' systems. We consider optical confocal laser scanning microscopy images, in fluorescence with various excitations and emissions, and versus time. The aim of our study has been the quantitative spatial analysis of confocal data using the false-color composition technique. Starting from three 2D confocal fluorescent images at the same slice location in a given biological specimen, a new single-image representation of all three parameters has been generated by the false-color technique on an HP 9000/735 workstation connected to the confocal microscope. The color composite result of the mapping of the three parameters is displayed using a resolution of 24 bits per pixel. The operator may independently vary the mix of each of the three components in the false-color composite via three (R, G, B) mixing sliders. Furthermore, by using the pixel data in the three fluorescent component images, a 3D space containing the density distribution of these three parameters has been constructed. The histogram has been displayed in stereo: it can be used for clustering purposes by the operator, through an original thresholding algorithm.
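The false-color composition step amounts to normalizing three single-channel images and mixing them into R, G, B with operator-controlled weights. A minimal NumPy sketch (the weight values standing in for the three sliders) follows.

```python
import numpy as np

# Sketch of false-color composition: three single-channel fluorescence
# images at the same slice are mapped to R, G, B with mixing weights that
# stand in for the operator's sliders. Weights here are illustrative.
def false_color(ch_r, ch_g, ch_b, w=(1.0, 1.0, 1.0)):
    def norm(img):
        img = img.astype(np.float32)
        return (img - img.min()) / max(np.ptp(img), 1e-6)
    rgb = np.dstack([w[0] * norm(ch_r), w[1] * norm(ch_g), w[2] * norm(ch_b)])
    return (255 * np.clip(rgb, 0, 1)).astype(np.uint8)  # 24-bit composite

# Example with synthetic 256x256 channels:
rng = np.random.default_rng(0)
composite = false_color(*(rng.random((256, 256)) for _ in range(3)),
                        w=(0.8, 1.0, 0.5))
print(composite.shape, composite.dtype)  # (256, 256, 3) uint8
```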
Dickstein-Fischer, Laurie; Fischer, Gregory S
2014-01-01
It is estimated that Autism Spectrum Disorder (ASD) affects 1 in 68 children. Early identification of an ASD is exceedingly important to the introduction of an intervention. We are developing a robot-assisted approach that will serve as an improved diagnostic and early intervention tool for children with autism. The robot, named PABI® (Penguin for Autism Behavioral Interventions), is a compact humanoid robot with an expressive cartoon-like embodiment. The robot is affordable, durable, and portable so that it can be used in various settings including schools, clinics, and the home, thus enabling significantly enhanced and more readily available diagnosis and continuation of care. Through facial expressions, body motion, verbal cues, stereo vision-based tracking, and a tablet computer, the robot is capable of interacting meaningfully with an autistic child. Initial implementations of the robot, as part of a comprehensive treatment model (CTM), include Applied Behavioral Analysis (ABA) therapy, where the child interacts with a tablet computer wirelessly interfaced with the robot. At the same time, the robot makes meaningful expressions and utterances and uses stereo cameras in its eyes to track the child, maintain eye contact, and collect data such as affect and gaze direction for charting of progress. In this paper we present the clinical justification, anticipated usage with corresponding requirements, prototype development of the robotic system, and demonstration of a sample application for robot-assisted ABA therapy.
A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming
2018-06-01
This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one is called the tracking camera and the other the working camera. The tracking camera is used for tracking the positions of the MBVS, and the working camera is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the movability of the system guarantees appropriate baselines that supply more robust point correspondences. Additionally, using one camera avoids a drawback of multi-camera networks, in which variability in the cameras' parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.
Volumetric segmentation of range images for printed circuit board inspection
NASA Astrophysics Data System (ADS)
Van Dop, Erik R.; Regtien, Paul P. L.
1996-10-01
Conventional computer vision approaches towards object recognition and pose estimation employ 2D grey-value or color imaging. As a consequence these images contain information about projections of a 3D scene only. The subsequent image processing will then be difficult, because the object coordinates are represented with just image coordinates. Only complicated low-level vision modules like depth from stereo or depth from shading can recover some of the surface geometry of the scene. Recent advances in fast range imaging have however paved the way towards 3D computer vision, since range data of the scene can now be obtained with sufficient accuracy and speed for object recognition and pose estimation purposes. This article proposes the coded-light range-imaging method together with superquadric segmentation to approach this task. Superquadric segments are volumetric primitives that describe global object properties with 5 parameters, which provide the main features for object recognition. Moreover, the principal axes of a superquadric segment determine the pose of an object in the scene. The volumetric segmentation of a range image can be used to detect missing, false or badly placed components on assembled printed circuit boards. Furthermore, this approach will be useful to recognize and extract valuable or toxic electronic components on printed circuit board scrap that currently burdens the environment during electronic waste processing. Results on synthetic range images with errors constructed according to a verified noise model illustrate the capabilities of this approach.
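For readers unfamiliar with superquadrics, the standard inside-outside function used in such fitting is compact enough to state directly; the sketch below shows the usual five-parameter form (the paper's exact parameterization may differ).

```python
import numpy as np

# Hedged sketch of the standard superquadric inside-outside function: five
# shape parameters, the semi-axes a1, a2, a3 and exponents eps1, eps2.
def superquadric_F(x, y, z, a1, a2, a3, eps1, eps2):
    """F < 1 inside, F = 1 on the surface, F > 1 outside."""
    xy = (np.abs(x / a1) ** (2.0 / eps2) +
          np.abs(y / a2) ** (2.0 / eps2)) ** (eps2 / eps1)
    return xy + np.abs(z / a3) ** (2.0 / eps1)

# A point on the surface of an ellipsoid (eps1 = eps2 = 1):
print(superquadric_F(1.0, 0.0, 0.0, 1.0, 2.0, 3.0, 1.0, 1.0))  # 1.0
```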
A subsumptive, hierarchical, and distributed vision-based architecture for smart robotics.
DeSouza, Guilherme N; Kak, Avinash C
2004-10-01
We present a distributed vision-based architecture for smart robotics that is composed of multiple control loops, each with a specialized level of competence. Our architecture is subsumptive and hierarchical, in the sense that each control loop can add to the competence level of the loops below, and in the sense that the loops can present a coarse-to-fine gradation with respect to vision sensing. At the coarsest level, the processing of sensory information enables a robot to become aware of the approximate location of an object in its field of view. On the other hand, at the finest end, the processing of stereo information enables a robot to determine more precisely the position and orientation of an object in the coordinate frame of the robot. The processing in each module of the control loops is completely independent and it can be performed at its own rate. A control Arbitrator ranks the results of each loop according to certain confidence indices, which are derived solely from the sensory information. This architecture has clear advantages regarding overall performance of the system, which is not affected by the "slowest link," and regarding fault tolerance, since faults in one module do not affect the other modules. At this time we are able to demonstrate the utility of the architecture for stereoscopic visual servoing. The architecture has also been applied to mobile robot navigation and can easily be extended to tasks such as "assembly-on-the-fly."
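The arbitration step can be illustrated in a few lines: each loop posts a command together with a confidence index derived from its own sensing, and the arbitrator forwards the most confident one. The names below are ours, not the paper's API.

```python
from dataclasses import dataclass

# Illustrative arbitrator (names are ours): each control loop independently
# posts its latest command with a confidence index derived from its own
# sensing; the arbitrator simply forwards the most confident one.
@dataclass
class LoopOutput:
    name: str
    command: tuple      # e.g., (dx, dy, dz) end-effector correction
    confidence: float   # derived solely from sensory information

def arbitrate(outputs):
    return max(outputs, key=lambda o: o.confidence)

loops = [
    LoopOutput("coarse-color-blob", (0.10, 0.00, 0.00), 0.40),
    LoopOutput("fine-stereo-servo", (0.02, -0.01, 0.03), 0.90),
]
winner = arbitrate(loops)
print(f"executing {winner.name}: {winner.command}")
```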
NASA Astrophysics Data System (ADS)
Lu, Zhong-Lin; Sperling, George
2002-10-01
Two theories are considered to account for the perception of motion of depth-defined objects in random-dot stereograms (stereomotion). In the Lu-Sperling three-motion-systems theory [J. Opt. Soc. Am. A 18, 2331 (2001)], stereomotion is perceived by the third-order motion system, which detects the motion of areas defined as figure (versus ground) in a salience map. Alternatively, in his comment [J. Opt. Soc. Am. A 19, 2142 (2002)], Patterson proposes a low-level motion-energy system dedicated to stereo depth. The critical difference between these theories is the preprocessing (figure-ground assignment based on depth and other cues versus simply stereo depth) rather than the motion-detection algorithm itself (because the motion-extraction algorithm for third-order motion is undetermined). Furthermore, the ability of observers to perceive motion in alternating feature displays in which stereo depth alternates with other features such as texture orientation indicates that the third-order motion system can perceive stereomotion. This reduces the stereomotion question to: is it third-order alone, or third-order plus dedicated depth-motion processing? Two new experiments intended to support the dedicated depth-motion-processing theory are shown here to be perfectly accounted for by third-order motion, as are many older experiments that have previously been shown to be consistent with third-order motion. Cyclopean and rivalry images are shown to be a likely confound in stereomotion studies, rivalry motion being as strong as stereomotion. The phase dependence of superimposed same-direction stereomotion stimuli, rivalry stimuli, and isoluminant color stimuli indicates that these stimuli are processed in the same (third-order) motion system. The phase-dependence paradigm [Lu and Sperling, Vision Res. 35, 2697 (1995)] ultimately can resolve the question of which types of signals share a single motion detector. All the evidence accumulated so far is consistent with the three-motion-systems theory.
Signatures of Slow Solar Wind Streams from Active Regions in the Inner Corona
NASA Astrophysics Data System (ADS)
Slemzin, V.; Harra, L.; Urnov, A.; Kuzin, S.; Goryaev, F.; Berghmans, D.
2013-08-01
The identification of solar-wind sources is an important question in solar physics. The existing solar-wind models (e.g., the Wang-Sheeley-Arge model) provide the approximate locations of the solar wind sources based on magnetic field extrapolations. It has been suggested recently that plasma outflows observed at the edges of active regions may be a source of the slow solar wind. To explore this we analyze an isolated active region (AR) adjacent to a small coronal hole (CH) in July/August 2009. On 1 August, Hinode/EUV Imaging Spectrometer observations showed two compact outflow regions in the corona. Coronal rays were observed above the active-region coronal hole (ARCH) on the eastern limb on 31 July by STEREO-A/EUVI and at the western limb on 7 August by CORONAS-Photon/TESIS telescopes. In both cases the coronal rays were co-aligned with open magnetic-field lines given by the potential field source surface model, which expanded into the streamer. The solar-wind parameters measured by STEREO-B, ACE, Wind, and STEREO-A confirmed the identification of the ARCH as a source region of the slow solar wind. The results of the study support the suggestion that coronal rays can represent signatures of outflows from ARs propagating in the inner corona along open field lines into the heliosphere.
Suryakumar, Rajaraman; Meyers, Jason P; Irving, Elizabeth L; Bobier, William R
2007-01-01
Accommodation and vergence are two ocular motor systems that interact during binocular vision. Independent measurement of the response dynamics of each system has been achieved by the application of optometers and eye trackers. However, relatively few devices, typically earlier-model optometers, allow the simultaneous assessment of accommodation and vergence. In this study we describe the development and application of a custom-designed high-speed digital photorefractor that allows rapid measurement of accommodation (up to 75 Hz). In addition, the photorefractor was synchronized with a video-based stereo eye tracker to allow simultaneous measurement of accommodation and vergence. Analysis of accommodation and vergence could then be conducted offline. The new instrumentation is suitable for the investigation of young children and could potentially be used for clinical populations.
Design of an off-axis visual display based on a free-form projection screen to realize stereo vision
NASA Astrophysics Data System (ADS)
Zhao, Yuanming; Cui, Qingfeng; Piao, Mingxu; Zhao, Lidong
2017-10-01
A free-form projection screen is designed for an off-axis visual display, which shows great potential in applications such as flight training by providing both accommodation and convergence cues for pilots. A method based on a point cloud is proposed for the design of the free-form surface, and the design of the point cloud is controlled by a program written in the macro-language. In the visual display based on the free-form projection screen, when the error of the screen along the Z-axis is 1 mm, the error of visual distance at each field is less than 1%. The resolution of the design over the full field is better than 1′, which meets the resolution requirement of the human eye.
A modification of the fusion model for log polar coordinates
NASA Technical Reports Server (NTRS)
Griswold, N. C.; Weiman, Carl F. R.
1990-01-01
The fusion mechanism for application in stereo analysis of range restricted the depth of field and therefore required a shift-variant mechanism in the peripheral area to find disparity. Misregistration was prevented by restricting the disparity detection range to a neighborhood spanned by the directional edge-detection filters. This transformation was essentially accomplished by a nonuniform resampling of the original image in the horizontal direction. While this is easily implemented for digital processing, the approach does not (in the peripheral vision area) model the log-conformal mapping which is known to occur in the human visual system. This paper therefore modifies the original fusion concept in the peripheral area to include the polar exponential grid-to-log-conformal tessellation. Examples of the fusion process resulting in accurate disparity values are given.
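The log-conformal mapping in question is the complex logarithm w = log(z) applied to image coordinates relative to fixation. The short sketch below shows its defining property: a fixed magnification step maps to a constant shift in log-eccentricity regardless of eccentricity.

```python
import numpy as np

# Sketch of the log-conformal (complex logarithm) mapping: w = log(z) sends
# image point z = x + iy (relative to fixation) to (log |z|, angle(z)), so
# eccentricity becomes a logarithmic axis, modeling peripheral sampling.
def log_polar_coords(x, y, fx=0.0, fy=0.0):
    z = (x - fx) + 1j * (y - fy)
    w = np.log(z + 1e-9)           # avoid the singularity at fixation
    return w.real, w.imag          # (log-eccentricity, polar angle)

# Doubling eccentricity shifts log-eccentricity by the same amount anywhere:
for r in (10.0, 100.0):
    u1, _ = log_polar_coords(r, 0.0)
    u2, _ = log_polar_coords(2 * r, 0.0)
    print(f"r={r:5.0f}: delta log-eccentricity = {u2 - u1:.3f}")  # both ~0.693
```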
Semantic segmentation of 3D textured meshes for urban scene analysis
NASA Astrophysics Data System (ADS)
Rouhani, Mohammad; Lafarge, Florent; Alliez, Pierre
2017-01-01
Classifying 3D measurement data has become a core problem in photogrammetry and 3D computer vision, since the rise of modern multiview geometry techniques, combined with affordable range sensors. We introduce a Markov Random Field-based approach for segmenting textured meshes generated via multi-view stereo into urban classes of interest. The input mesh is first partitioned into small clusters, referred to as superfacets, from which geometric and photometric features are computed. A random forest is then trained to predict the class of each superfacet as well as its similarity with the neighboring superfacets. Similarity is used to assign the weights of the Markov Random Field pairwise-potential and to account for contextual information between the classes. The experimental results illustrate the efficacy and accuracy of the proposed framework.
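The labeling energy implied by this construction can be written in the standard MRF form; the notation below is generic (a Potts-style pairwise term weighted by the learned similarity), and the paper's exact potentials may differ:

```latex
E(l) = \sum_{i \in \mathcal{S}} D_i(l_i)
     + \lambda \sum_{(i,j) \in \mathcal{N}} w_{ij}\,[\,l_i \neq l_j\,]
```

Here D_i(l_i) would come from the random forest class posterior for superfacet i, w_ij from the predicted similarity between neighboring superfacets, and λ balances the data and pairwise terms.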
Surface Location In Scene Content Analysis
NASA Astrophysics Data System (ADS)
Hall, E. L.; Tio, J. B. K.; McPherson, C. A.; Hwang, J. J.
1981-12-01
The purpose of this paper is to describe techniques and algorithms for the location in three dimensions of planar and curved object surfaces using a computer vision approach. Stereo imaging techniques are demonstrated for planar object surface location using automatic segmentation, vertex location and relational table matching. For curved surfaces, the location of corresponding points is very difficult. However, an example using a grid projection technique for the location of the surface of a curved cup is presented to illustrate a solution. This method consists of first obtaining the perspective transformation matrices from the images, then using these matrices to compute the three-dimensional locations of the grid points on the surface. These techniques may be used in object location for such applications as missile guidance, robotics, and medical diagnosis and treatment.
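Once the perspective transformation matrices are known, grid-point locations follow from linear triangulation. The sketch below implements the standard direct linear transform (DLT) solution, one common way to realize the computation the abstract describes.

```python
import numpy as np

# Linear (DLT) triangulation from two perspective transformation matrices:
# each camera contributes two homogeneous equations in X from x ~ P X, and
# SVD solves the stacked 4x4 system.
def triangulate(P1, P2, pt1, pt2):
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

# Toy rig: identity camera and one translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
Xtrue = np.array([0.5, 0.2, 4.0, 1.0])
pt1 = (P1 @ Xtrue); pt1 = pt1[:2] / pt1[2]
pt2 = (P2 @ Xtrue); pt2 = pt2[:2] / pt2[2]
print(np.round(triangulate(P1, P2, pt1, pt2), 6))  # [0.5 0.2 4. ]
```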
Generalized parallel-perspective stereo mosaics from airborne video.
Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M
2004-02-01
In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, C. M. S.; Mewaldt, R. A.; Mason, G. M.
On 2013 April 11, active region 11719 was centered just west of the central meridian; at 06:55 UT, it erupted with an M6.5 X-ray flare and a moderately fast (∼800 km s⁻¹) coronal mass ejection. This solar activity resulted in the acceleration of energetic ions to produce a solar energetic particle (SEP) event that was subsequently observed in energetic protons by both ACE and the two STEREO spacecraft. Heavy ions at energies ≥10 MeV nucleon⁻¹ were well measured by SEP sensors on ACE and STEREO-B, allowing the longitudinal dependence of the event composition to be studied. Both spacecraft observed significant enhancements in the Fe/O ratio at 12-33 MeV nucleon⁻¹, with the STEREO-B abundance ratio (Fe/O = 0.69) being similar to that of the large, Fe-rich SEP events observed in solar cycle 23. The footpoint of the magnetic field line connected to the ACE spacecraft was longitudinally farther from the flare site (77° versus 58°), and the measured Fe/O ratio at ACE was 0.48, 44% lower than at STEREO-B but still enhanced by more than a factor of 3.5 over average SEP abundances. Only upper limits were obtained for the ³He/⁴He abundance ratio at both spacecraft. Low upper limits of 0.07% and 1% were obtained from the ACE sensors at 0.5-2 and 6.5-11.3 MeV nucleon⁻¹, respectively, whereas the STEREO-B sensor provided an upper limit of 4%. These characteristics of high, but longitudinally variable, Fe/O ratios and low ³He/⁴He ratios are not expected from either the direct flare contribution scenario or the remnant flare suprathermal material theory put forth to explain the Fe-rich SEP events of cycle 23.
NASA Technical Reports Server (NTRS)
Gopalswamy, Nat; Makela, Pertti; Yashiro, Seiji
2011-01-01
It is difficult to measure the true speed of Earth-directed CMEs from a coronagraph along the Sun-Earth line because of the occulting disk. However, the expansion speed (the speed with which the CME appears to spread in the sky plane) can be measured by such a coronagraph. In order to convert the expansion speed to radial speed (which is important for space weather applications) one can use an empirical relationship between the two that assumes an average width for all CMEs. If we have the width information from quadrature observations, we can confirm the relationship between expansion and radial speeds derived by Gopalswamy et al. (2009, CEAB, 33, 115). The STEREO spacecraft were in quadrature with SOHO (STEREO-A ahead of Earth by 87° and STEREO-B 94° behind Earth) on 2011 February 15, when a fast Earth-directed CME occurred. The CME was observed as a halo by the Large-Angle and Spectrometric Coronagraph (LASCO) on board SOHO. The sky-plane speed was measured by SOHO/LASCO as the expansion speed, while the radial speed was measured by STEREO-A and STEREO-B. In addition, STEREO-A and STEREO-B images measured the width of the CME, which is unknown from the Earth view. From the SOHO and STEREO measurements, we confirm the relationship between the expansion speed (Vexp) and radial speed (Vrad) derived previously from geometrical considerations (Gopalswamy et al. 2009): Vrad = 1/2 (1 + cot w) Vexp, where w is the half width of the CME. From STEREO-B images of the CME, we found that the CME had a full width of 75 degrees, so w = 37.5 degrees. This gives the relation as Vrad = 1.15 Vexp. From LASCO observations, we measured Vexp = 897 km/s, so we get the radial speed as 1033 km/s. Direct measurement of radial speed from STEREO gives 945 km/s (STEREO-A) and 1057 km/s (STEREO-B). These numbers differ by only 8.5% and 2.3% (for STEREO-A and STEREO-B, respectively) from the computed value.
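The quoted relation is easy to verify numerically; the few lines below reproduce the 1.15 factor and the 1033 km/s radial speed from the stated half-width and expansion speed.

```python
import math

# Reproduce V_rad = (1/2)(1 + cot w) * V_exp for the 2011 February 15 CME
# (half-width w = 37.5 deg, expansion speed V_exp = 897 km/s, as quoted).
w = math.radians(37.5)
v_exp = 897.0
factor = 0.5 * (1.0 + 1.0 / math.tan(w))
print(f"factor = {factor:.2f}, V_rad = {factor * v_exp:.0f} km/s")
# factor = 1.15, V_rad = 1033 km/s, bracketed by the 945-1057 km/s
# direct STEREO measurements.
```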
Wide Swath Stereo Mapping from Gaofen-1 Wide-Field-View (WFV) Images Using Calibration
Chen, Shoubin; Liu, Jingbin; Huang, Wenchao
2018-01-01
The development of Earth observation systems has changed the nature of survey and mapping products, as well as the methods for updating maps. Among optical satellite mapping methods, the multiline-array stereo and agile stereo modes are the most common methods for acquiring stereo images. However, differences in temporal resolution and spatial coverage limit their application. To address this issue, our study takes advantage of the wide spatial coverage and high revisit frequency of wide swath images and aims at verifying the feasibility of stereo mapping in the wide swath stereo mode and reaching a reliable stereo accuracy level using calibration. In contrast with classic stereo modes, the wide swath stereo mode is characterized by both a wide spatial coverage and high temporal resolution and is capable of obtaining a wide range of stereo images over a short period. In this study, Gaofen-1 (GF-1) wide-field-view (WFV) images, with total imaging widths of 800 km, multispectral resolutions of 16 m and revisit periods of four days, are used for wide swath stereo mapping. To acquire a high-accuracy digital surface model (DSM), the nonlinear system distortion in the GF-1 WFV images is detected and compensated for in advance. The elevation accuracy of the wide swath stereo mode of the GF-1 WFV images can be improved from 103 m to 30 m for a DSM with proper calibration, meeting the demands for 1:250,000 scale mapping and rapid topographic map updates and showing improved efficacy for satellite imaging. PMID:29494540
Apollo 12 stereo view of lunar surface upon which astronaut had stepped
1969-11-20
AS12-57-8448 (19-20 Nov. 1969) --- An Apollo 12 stereo view showing a three-inch square of the lunar surface upon which an astronaut had stepped. Taken during extravehicular activity of astronauts Charles Conrad Jr. and Alan L. Bean, the exposure of the boot imprint was made with an Apollo 35mm stereo close-up camera. The camera was developed to get the highest possible resolution of a small area. The three-inch square is photographed with a flash illumination and at a fixed distance. The camera is mounted on a walking stick, and the astronauts use it by holding it up against the object to be photographed and pulling the trigger. While astronauts Conrad and Bean descended in their Apollo 12 Lunar Module to explore the lunar surface, astronaut Richard F. Gordon Jr. remained with the Command and Service Modules in lunar orbit.
NASA Technical Reports Server (NTRS)
Kersten, K.; Cattell, C. A.; Breneman, A.; Goetz, K.; Kellogg, P. J.; Wygant, J. R.; Wilson, L. B., III; Blake, J. B.; Looper, M. D.; Roth, I.
2011-01-01
We present multi-satellite observations of large amplitude radiation belt whistler-mode waves and relativistic electron precipitation. On separate occasions during the Wind petal orbits and STEREO phasing orbits, Wind and STEREO recorded intense whistler-mode waves in the outer nightside equatorial radiation belt with peak-to-peak amplitudes exceeding 300 mV/m. During these intervals of intense wave activity, SAMPEX recorded relativistic electron microbursts in near magnetic conjunction with Wind and STEREO. This evidence of microburst precipitation occurring at the same time and at nearly the same magnetic local time and L-shell with a bursty temporal structure similar to that of the observed large amplitude wave packets suggests a causal connection between the two phenomena. Simulation studies corroborate this idea, showing that nonlinear wave-particle interactions may result in rapid energization and scattering on timescales comparable to those of the impulsive relativistic electron precipitation.
Probabilistic fusion of stereo with color and contrast for bilayer segmentation.
Kolmogorov, Vladimir; Criminisi, Antonio; Blake, Andrew; Cross, Geoffrey; Rother, Carsten
2006-09-01
This paper describes models and algorithms for the real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from color/contrast or from stereo alone is known to be error-prone. Here, color, contrast, and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, Layered Dynamic Programming (LDP), solves stereo in an extended six-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive color model that is learned on-the-fly and stereo disparities are obtained by dynamic programming. The second algorithm, Layered Graph Cut (LGC), does not directly solve stereo. Instead, the stereo match likelihood is marginalized over disparities to evaluate foreground and background hypotheses and then fused with a contrast-sensitive color model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than either stereo or color/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output.
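The fusion idea at the heart of both algorithms can be illustrated per pixel: a stereo-match log-likelihood, marginalized over disparities, is added to a color-model log-likelihood before the layer decision. The snippet below is a hedged toy illustration of that combination, not the LDP or LGC algorithms themselves.

```python
import numpy as np

# Toy per-pixel fusion (not LDP/LGC): combine a stereo-match log-likelihood,
# marginalized over disparities via log-sum-exp, with a color model
# log-likelihood, then pick the more likely layer. All arrays are synthetic.
rng = np.random.default_rng(0)
H, W, D = 120, 160, 32
stereo_ll = rng.normal(0, 1, (H, W, D, 2))  # log P(match | disparity, layer)
color_ll = rng.normal(0, 1, (H, W, 2))      # log P(pixel color | layer)

stereo_marg = np.logaddexp.reduce(stereo_ll, axis=2)  # marginalize disparity
fused = stereo_marg + color_ll
labels = fused.argmax(axis=2)               # 0 = background, 1 = foreground
print(labels.shape, f"foreground fraction: {labels.mean():.2f}")
```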
A search for Ganymede stereo images and 3D mapping opportunities
NASA Astrophysics Data System (ADS)
Zubarev, A.; Nadezhdina, I.; Brusnikin, E.; Giese, B.; Oberst, J.
2017-10-01
We used 126 Voyager-1 and -2 as well as 87 Galileo images of Ganymede and searched for stereo images suitable for digital 3D stereo analysis. Specifically, we consider image resolutions, stereo angles, as well as matching illumination conditions of respective stereo pairs. Lists of regions and local areas with stereo coverage are compiled. We present anaglyphs and we selected areas, not previously discussed, for which we constructed Digital Elevation Models and associated visualizations. The terrain characteristics in the models are in agreement with our previous notion of Ganymede morphology, represented by families of lineaments and craters of various sizes and degradation stages. The identified areas of stereo coverage may serve as important reference targets for the Ganymede Laser Altimeter (GALA) experiment on the future JUICE (Jupiter Icy Moons Explorer) mission.
Solar Eclipse Video Captured by STEREO-B
NASA Technical Reports Server (NTRS)
2007-01-01
No human has ever witnessed a solar eclipse quite like the one captured on this video. The NASA STEREO-B spacecraft, managed by the Goddard Space Flight Center, was about a million miles from Earth on February 25, 2007, when it photographed the Moon passing in front of the Sun. The resulting movie looks like it came from an alien solar system. The fantastically-colored star is our own Sun as STEREO sees it in four wavelengths of extreme ultraviolet light. The black disk is the Moon. When we observe a lunar transit from Earth, the Moon appears to be the same size as the Sun, a coincidence that produces intoxicatingly beautiful solar eclipses. The silhouette STEREO-B saw, on the other hand, covered only a fraction of the Sun. The Moon seems small because of STEREO-B's location. The spacecraft circles the Sun in an Earth-like orbit, but it lags behind Earth by one million miles. This means STEREO-B is 4.4 times farther from the Moon than we are, and so the Moon looks 4.4 times smaller. This version of the STEREO-B eclipse movie is a composite of data from the coronagraph and extreme ultraviolet imager of the spacecraft. STEREO-B has a sister ship named STEREO-A. Both are on a mission to study the Sun. While STEREO-B lags behind Earth, STEREO-A orbits one million miles ahead ('B' for behind, 'A' for ahead). The gap is deliberate, as it allows the two spacecraft to capture offset views of the Sun. Researchers can then combine the images to produce 3D stereo movies of solar storms. The two spacecraft were launched in Oct. 2006 and reached their stations on either side of Earth in January 2007.
STEREO Space Weather and the Space Weather Beacon
NASA Technical Reports Server (NTRS)
Biesecker, D. A.; Webb, D F.; SaintCyr, O. C.
2007-01-01
The Solar Terrestrial Relations Observatory (STEREO) is first and foremost a solar and interplanetary research mission, with one of the natural applications being in the area of space weather. The obvious potential for space weather applications is so great that NOAA has worked to incorporate the real-time data into their forecast center as much as possible. A subset of the STEREO data will be continuously downlinked in a real-time broadcast mode, called the Space Weather Beacon. Within the research community there has been considerable interest in conducting space weather related research with STEREO. Some of this research is geared towards making an immediate impact while other work is still very much in the research domain. There are many areas where STEREO might contribute and we cannot predict where all the successes will come. Here we discuss how STEREO will contribute to space weather and many of the specific research projects proposed to address STEREO space weather issues. We also discuss some specific uses of the STEREO data in the NOAA Space Environment Center.
3D Visualization for Phoenix Mars Lander Science Operations
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol
2012-01-01
Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk, productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.
Sampling artifacts in perspective and stereo displays
NASA Astrophysics Data System (ADS)
Pfautz, Jonathan D.
2001-06-01
The addition of stereo cues to perspective displays is generally expected to improve the perception of depth. However, the display's pixel array samples both perspective and stereo depth cues, introducing inaccuracies and inconsistencies into the representation of an object's depth. The position, size and disparity of an object will be inaccurately presented and size and disparity will be inconsistently presented across depth. These inconsistencies can cause the left and right edges of an object to appear at different stereo depths. This paper describes how these inconsistencies result in conflicts between stereo and perspective depth information. A relative depth judgement task was used to explore these conflicts. Subjects viewed two objects and reported which appeared closer. Three conflicts resulting from inconsistencies caused by sampling were examined: (1) Perspective size and location versus stereo disparity. (2) Perspective size versus perspective location and stereo disparity. (3) Left and right edge disparity versus perspective size and location. In the first two cases, subjects achieved near-perfect accuracy when perspective and disparity cues were complementary. When size and disparity were inconsistent and thus in conflict, stereo dominated perspective. Inconsistency between the disparities of the horizontal edges of an object confused the subjects, even when complementary perspective and stereo information was provided. Since stereo was the dominant cue and was ambiguous across the object, this led to significantly reduced accuracy. Edge inconsistencies also led to more complaints about visual fatigue and discomfort.
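A loose toy model (all geometry values are ours, purely for illustration) makes the sampling argument concrete: once perspective size and screen disparity are rounded to whole pixels, neighboring depths can receive the same size but different disparities, exactly the kind of cue conflict examined in the paper.

```python
import numpy as np

# Toy model of pixel sampling in a stereo perspective display: a screen at
# V = 0.6 m with p pixels/m shows an object of width W at virtual depth Z.
# Both the perspective size and the stereo disparity are rounded to whole
# pixels, so nearby depths can share a size while disparity differs.
V, e, W, p = 0.6, 0.065, 0.05, 2000.0  # screen dist, IPD, object width, px/m

def size_px(Z):
    return int(round(p * V * W / Z))         # perspective size on screen

def disparity_px(Z):
    return int(round(p * e * (Z - V) / Z))   # screen disparity for depth Z

for Z in np.arange(0.90, 0.97, 0.01):
    print(f"Z={Z:.2f} m: size={size_px(Z):3d} px, "
          f"disparity={disparity_px(Z):3d} px")
# e.g., Z = 0.92 m and Z = 0.93 m both yield size 65 px, yet their
# disparities differ (45 vs 46 px): an inconsistency across depth.
```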
An Evaluation of the Effectiveness of Stereo Slides in Teaching Geomorphology.
ERIC Educational Resources Information Center
Giardino, John R.; Thornhill, Ashton G.
1984-01-01
Provides information about producing stereo slides and their use in the classroom. Describes an evaluation of the teaching effectiveness of stereo slides using two groups of 30 randomly selected students from introductory geomorphology. Results from a pretest/postttest measure show that stereo slides significantly improved understanding. (JM)
NASA Astrophysics Data System (ADS)
Seryotkin, Yu. V.; Bakakin, V. V.; Likhacheva, A. Yu.; Dementiev, S. N.; Rashchenko, S. V.
2017-10-01
The structural evolution of Tl-exchanged natrolite with idealized formula Tl2[Al2Si3O10]·2H2O, compressed in penetrating (water:ethanol 1:1) and non-penetrating (paraffin) media, was studied up to 4 GPa. The presence of Tl+ with non-bonded electron lone pairs (E-pairs), which can be either stereochemically active or passive, determines distinctive features of the high-pressure behavior of the Tl-form. The effective volume of the Tl+(O,H2O)n assemblages depends on the E-pair activity: single-sided coordination correlates with smaller volumes. At ambient conditions, there are two types of Tl positions, only one of them having a nearly single-sided coordination, a characteristic of the stereo-activity of the Tl+ E-pair. Upon compression in paraffin, a phase transition occurs: a 5% volume contraction of the flexible natrolite framework is accompanied by the conversion of all the Tl+ cations into the stereochemically active state with a single-sided coordination. This effect requires the reconstruction of all the extra-framework subsystems with the inversion of the cation and H2O positions. Compression in a water-containing medium leads to an increase of the H2O content up to three molecules per formula unit through the filling of partly vacant positions. This hinders a single-sided coordination of the Tl ions and preserves the configuration of their ion-molecular subsystem. It is likely that the extra-framework subsystem is responsible for the super-structure modulation.
NASA Astrophysics Data System (ADS)
Muramatsu, Chisako; Nakagawa, Toshiaki; Sawada, Akira; Hatanaka, Yuji; Yamamoto, Tetsuya; Fujita, Hiroshi
2011-09-01
Early diagnosis of glaucoma, which is the second leading cause of blindness in the world, can halt or slow the progression of the disease. We propose an automated method for analyzing the optic disc and measuring the cup-to-disc ratio (CDR) on stereo retinal fundus images to improve ophthalmologists' diagnostic efficiency and potentially reduce the variation in CDR measurement. The method was developed using 80 retinal fundus image pairs, including 25 glaucomatous and 55 nonglaucomatous eyes, obtained at our institution. A disc region was segmented using the active contour method with brightness and edge information. The segmentation of a cup region was performed using a depth map of the optic disc, which was reconstructed on the basis of the stereo disparity. The CDRs were measured and compared with those determined using the manual segmentation results of an expert ophthalmologist. The method was applied to a new database which consisted of 98 stereo image pairs, including 60 and 30 pairs with and without signs of glaucoma, respectively. Using the CDRs, an area under the receiver operating characteristic curve of 0.90 was obtained for classification of the glaucomatous and nonglaucomatous eyes. The result indicates the potential usefulness of the automated determination of CDRs for the diagnosis of glaucoma.
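As an illustration of the final measurement, a vertical cup-to-disc ratio can be computed directly from binary disc and cup masks; the sketch below is a generic computation, not the authors' pipeline.

```python
import numpy as np

# Generic vertical cup-to-disc ratio from binary segmentation masks, e.g.,
# as produced by active-contour (disc) and disparity-based (cup) steps.
def vertical_cdr(disc_mask, cup_mask):
    disc_rows = np.flatnonzero(disc_mask.any(axis=1))
    cup_rows = np.flatnonzero(cup_mask.any(axis=1))
    disc_h = disc_rows.max() - disc_rows.min() + 1
    cup_h = (cup_rows.max() - cup_rows.min() + 1) if cup_rows.size else 0
    return cup_h / disc_h

# Synthetic circular disc (radius 60 px) with a cup of radius 30 px:
yy, xx = np.mgrid[:200, :200]
disc = (yy - 100) ** 2 + (xx - 100) ** 2 <= 60 ** 2
cup = (yy - 100) ** 2 + (xx - 100) ** 2 <= 30 ** 2
print(f"vertical CDR = {vertical_cdr(disc, cup):.2f}")  # 0.50
```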
Motorcycle detection and counting using stereo camera, IR camera, and microphone array
NASA Astrophysics Data System (ADS)
Ling, Bo; Gibson, David R. P.; Middleton, Dan
2013-03-01
Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of automobiles. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera and a unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes, which often show as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through his or her 3D body recognition, a motorcycle is detected. Microphones are used to detect motorcycles, which often produce low-frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize the interference of background noise from sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.
A STEREO Survey of Magnetic Cloud Coronal Mass Ejections Observed at Earth in 2008–2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Brian E.; Wu, Chin-Chun; Howard, Russell A.
We identify coronal mass ejections (CMEs) associated with magnetic clouds (MCs) observed near Earth by the Wind spacecraft from 2008 to mid-2012, a time period when the two STEREO spacecraft were well positioned to study Earth-directed CMEs. We find 31 out of 48 Wind MCs during this period can be clearly connected with a CME that is trackable in STEREO imagery all the way from the Sun to near 1 au. For these events, we perform full 3D reconstructions of the CME structure and kinematics, assuming a flux rope (FR) morphology for the CME shape, considering the full complement of STEREO and SOHO imaging constraints. We find that the FR orientations and sizes inferred from imaging are not well correlated with MC orientations and sizes inferred from the Wind data. However, velocities within the MC region are reproduced reasonably well by the image-based reconstruction. Our kinematic measurements are used to provide simple prescriptions for predicting CME arrival times at Earth, provided for a range of distances from the Sun where CME velocity measurements might be made. Finally, we discuss the differences in the morphology and kinematics of CME FRs associated with different surface phenomena (flares, filament eruptions, or no surface activity).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Ryun-Young; Ofman, Leon; Kramar, Maxim
2013-03-20
We report white-light observations of a fast magnetosonic wave associated with a coronal mass ejection observed by the STEREO/SECCHI/COR1 inner coronagraphs on 2011 August 4. The wave front is observed in the form of a density compression passing through various coronal regions such as quiet/active corona, coronal holes, and streamers. Together with measured electron densities determined with STEREO COR1 and Extreme UltraViolet Imager (EUVI) data, we use our kinematic measurements of the wave front to calculate coronal magnetic fields and find that the measured speeds are consistent with characteristic fast magnetosonic speeds in the corona. In addition, the wave front turns out to be the upper coronal counterpart of the EIT wave observed by STEREO EUVI traveling against the solar coronal disk; moreover, stationary fronts of the EIT wave are found to be located at the footpoints of deflected streamers and boundaries of coronal holes, after the wave front in the upper solar corona passes through open magnetic field lines in the streamers. Our findings suggest that the observed EIT wave is in fact a fast magnetosonic shock/wave traveling in the inhomogeneous solar corona, as part of the fast magnetosonic wave propagating in the extended solar corona.
NASA Astrophysics Data System (ADS)
Gopalswamy, N.; Makela, P.; Yashiro, S.; Davila, J. M.
2012-08-01
It is difficult to measure the true speed of Earth-directed CMEs from a coronagraph along the Sun-Earth line because of the occulting disk. However, the expansion speed (the speed with which the CME appears to spread in the sky plane) can be measured by such a coronagraph. In order to convert the expansion speed to radial speed (which is important for space weather applications) one can use an empirical relationship between the two that assumes an average width for all CMEs. If we have the width information from quadrature observations, we can confirm the relationship between expansion and radial speeds derived by Gopalswamy et al. (2009a). The STEREO spacecraft were in quadrature with SOHO (STEREO-A ahead of Earth by 87° and STEREO-B 94° behind Earth) on 2011 February 15, when a fast Earth-directed CME occurred. The CME was observed as a halo by the Large-Angle and Spectrometric Coronagraph (LASCO) on board SOHO. The sky-plane speed was measured by SOHO/LASCO as the expansion speed, while the radial speed was measured by STEREO-A and STEREO-B. In addition, STEREO-A and STEREO-B images measured the width of the CME, which is unknown from the Earth view. From the SOHO and STEREO measurements, we confirm the relationship between the expansion speed (Vexp) and radial speed (Vrad) derived previously from geometrical considerations (Gopalswamy et al. 2009a): Vrad = 1/2 (1 + cot w) Vexp, where w is the half width of the CME. From STEREO-B images of the CME, we found that the CME had a full width of 76°, so w = 38°. This gives the relation as Vrad = 1.14 Vexp. From LASCO observations, we measured Vexp = 897 km/s, so we get the radial speed as 1023 km/s. Direct measurement of radial speed yields 945 km/s (STEREO-A) and 1058 km/s (STEREO-B). These numbers differ by only 7.6% and 3.4% (for STEREO-A and STEREO-B, respectively) from the computed value.
Stereoscopy for visual simulation of materials of complex appearance
NASA Astrophysics Data System (ADS)
da Graça, Fernando; Paljic, Alexis; Lafon-Pham, Dominique; Callet, Patrick
2014-03-01
The present work studies the role of stereoscopy in the perceived surface appearance of computer-generated complex materials. The objective is to investigate if, and how, the additional information conveyed by binocular vision affects the observer's judgment in the evaluation of flake density in an effect-paint simulation. We have set up a heuristic flake model with a Voronoi modelization of flakes. The model was implemented in our rendering engine using global illumination and ray tracing, with an off-axis frustum method for the calculation of stereo images. We conducted a user study based on a flake-density discrimination task to determine perception thresholds (just-noticeable differences, JNDs). Results show that stereoscopy slightly improves density perception. We propose an analysis methodology based on granulometry. This allows for a discussion of the results on the basis of scales of observation.
An HTML Tool for Production of Interactive Stereoscopic Compositions.
Chistyakov, Alexey; Soto, Maria Teresa; Martí, Enric; Carrabina, Jordi
2016-12-01
The benefits of stereoscopic vision in medical applications have been appreciated and thoroughly studied for more than a century. The usage of stereoscopic displays has a proven positive impact on performance in various medical tasks. At the same time, the market of 3D-enabled technologies is blooming. New high-resolution stereo cameras, TVs, projectors, monitors, and head-mounted displays become available. This equipment, completed with a corresponding application program interface (API), could be relatively easily implemented in a system. Such complexes could open new possibilities for medical applications exploiting stereoscopic depth. This work proposes a tool for the production of interactive stereoscopic graphical user interfaces, which could represent a software layer for web-based medical systems facilitating the stereoscopic effect. Furthermore, the tool's operation mode and the results of the conducted subjective and objective performance tests are presented.
Retinal isomerization in bacteriorhodopsin captured by a femtosecond x-ray laser.
Nogly, Przemyslaw; Weinert, Tobias; James, Daniel; Carbajo, Sergio; Ozerov, Dmitry; Furrer, Antonia; Gashi, Dardan; Borin, Veniamin; Skopintsev, Petr; Jaeger, Kathrin; Nass, Karol; Båth, Petra; Bosman, Robert; Koglin, Jason; Seaberg, Matthew; Lane, Thomas; Kekilli, Demet; Brünle, Steffen; Tanaka, Tomoyuki; Wu, Wenting; Milne, Christopher; White, Thomas; Barty, Anton; Weierstall, Uwe; Panneels, Valerie; Nango, Eriko; Iwata, So; Hunter, Mark; Schapiro, Igor; Schertler, Gebhard; Neutze, Richard; Standfuss, Jörg
2018-06-14
Ultrafast isomerization of retinal is the primary step in photoresponsive biological functions including vision in humans and ion-transport across bacterial membranes. We studied the sub-picosecond structural dynamics of retinal isomerization in the light-driven proton pump bacteriorhodopsin using an x-ray laser. A series of structural snapshots with near-atomic spatial and temporal resolution in the femtosecond regime show how the excited all-trans retinal samples conformational states within the protein binding pocket prior to passing through a twisted geometry and emerging in the 13-cis conformation. Our findings suggest ultrafast collective motions of aspartic acid residues and functional water molecules in the proximity of the retinal Schiff base as a key ingredient for this stereo-selective and efficient photochemical reaction. Copyright © 2018, American Association for the Advancement of Science.
Stereo 3D vision adapter using commercial DIY goods
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Ohara, Takashi
2009-10-01
A conventional display can show only one screen, and it is impossible to enlarge the screen area, for example to double it. Meanwhile, a mirror supplies the same image, but this mirror image is usually upside down. Assume that the images on an original screen and on a virtual screen in the mirror are completely different and both images can be displayed independently. It would then be possible to double the screen area. This extension method enables observers to view the virtual image plane and enlarges the screen area twofold. Although the display region is doubled, this virtual display cannot produce 3D images. In this paper, we present an extension method using a unidirectional diffusing image screen and an improvement for displaying a 3D image using orthogonal polarized image projection.
A strategic map for high-impact virtual experience design
NASA Astrophysics Data System (ADS)
Faste, Haakon; Bergamasco, Massimo
2009-02-01
We have employed methodologies of human-centered design to inspire and guide the engineering of a definitive low-cost aesthetic multimodal experience intended to stimulate cultural growth. Using a combination of design research, trend analysis and the programming of immersive virtual 3D worlds, over 250 innovative concepts have been brainstormed, prototyped, evaluated and refined. These concepts have been used to create a strategic map for the development of high-impact virtual art experiences, the most promising of which have been incorporated into a multimodal environment programmed in the online interactive 3D platform XVR. A group of test users have evaluated the experience as it has evolved, using a multimodal interface with stereo vision, 3D audio and haptic feedback. This paper discusses the process, content, results, and impact on our engineering laboratory that this research has produced.
Photogrammetric Processing Using ZY-3 Satellite Imagery
NASA Astrophysics Data System (ADS)
Kornus, W.; Magariños, A.; Pla, M.; Soler, E.; Perez, F.
2015-03-01
This paper evaluates the stereoscopic capacities of the Chinese sensor ZiYuan-3 (ZY-3) for the generation of photogrammetric products. The satellite was launched on January 9, 2012 and carries three high-resolution panchromatic cameras viewing in the forward (22º), nadir (0º) and backward (-22º) directions, and an infrared multi-spectral scanner (IRMSS), which looks slightly forward (6º). The ground sampling distance (GSD) is 2.1m for the nadir image, 3.5m for the two oblique stereo images and 5.8m for the multispectral image. The evaluated ZY-3 imagery consists of a full set of threefold stereo and a multi-spectral image covering an area of ca. 50km x 50km north-west of Barcelona, Spain. The complete photogrammetric processing chain was executed, including image orientation, generation of a digital surface model (DSM), radiometric image correction, pansharpening, orthoimage generation and digital stereo plotting. All 4 images are oriented by estimating affine transformation parameters between observed and nominal RPC (rational polynomial coefficients) image positions of 17 ground control points (GCP) and a subsequent calculation of refined RPC. From 10 independent check points, RMS errors of 2.2m, 2.0m and 2.7m in X, Y and H are obtained. Subsequently, a DSM of 5m grid spacing is generated fully automatically. A comparison with the Lidar data results in an overall DSM accuracy of approximately 3m. In moderate and flat terrain, higher accuracies in the order of 2.5m and better are achieved. In a next step, orthoimages from the high-resolution nadir image and the multispectral image are generated using the refined RPC geometry and the DSM. After radiometric corrections, a fused high-resolution colour orthoimage with 2.1m pixel size is created using an adaptive HSL method. The pansharpening process is performed after the individual geocorrection due to the different viewing angles of the two images. In a detailed analysis of the colour orthoimage, artifacts are detected covering an area of 4691ha, corresponding to less than 2% of the imaged area. Most of the artifacts are caused by clouds (4614ha). A minor part (77ha) is affected by colour patches, striping or blooming effects. For the final qualitative analysis of the usability of the ZY-3 imagery for stereo plotting purposes, stereo combinations of the nadir and an oblique image are discarded, mainly due to the different pixel sizes, which produce difficulties in stereoscopic vision and poor accuracy in positioning and measuring. With the two oblique images, a level of detail equivalent to 1:25.000 scale is achieved for the transport network, hydrography, vegetation and elements modelling the terrain such as break lines. For settlements, including buildings and other constructions, a lower level of detail is achieved, equivalent to 1:50.000 scale.
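The affine RPC refinement step described above can be illustrated with a minimal least-squares sketch (a generic reconstruction, not the authors' code; array names are assumptions):

```python
import numpy as np

def fit_affine(nominal_xy, observed_xy):
    """Least-squares 2D affine map from nominal RPC-projected image
    coordinates to observed GCP image coordinates; inputs are (n, 2) arrays."""
    n = nominal_xy.shape[0]
    A = np.hstack([nominal_xy, np.ones((n, 1))])              # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(A, observed_xy, rcond=None)  # (3, 2) matrix
    return params

def apply_affine(params, xy):
    return np.hstack([xy, np.ones((xy.shape[0], 1))]) @ params

# With 17 GCPs (as in the text), residuals at independent check points
# would then quantify the accuracy of the refined RPC.
```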
Photogrammetric Analysis of Rotor Clouds Observed during T-REX
NASA Astrophysics Data System (ADS)
Romatschke, U.; Grubišić, V.
2017-12-01
Stereo photogrammetric analysis is a rarely utilized but highly valuable tool for studying smaller, highly ephemeral clouds. In this study, we make use of data that was collected during the Terrain-induced Rotor Experiment (T-REX), which took place in Owens Valley, eastern California, in the spring of 2006. The data set consists of matched digital stereo photographs obtained at high temporal (on the order of seconds) and spatial resolution (limited by the pixel size of the cameras). Using computer vision techniques we have been able to develop algorithms for camera calibration, automatic feature matching, and ultimately reconstruction of 3D cloud scenes. Applying these techniques to images from different T-REX IOPs we capture the motion of clouds in several distinct mountain wave scenarios ranging from short lived lee wave clouds on an otherwise clear sky day to rotor clouds formed in an extreme turbulence environment with strong winds and high cloud coverage. Tracking the clouds in 3D space and time allows us to quantify phenomena such as vertical and horizontal movement of clouds, turbulent motion at the upstream edge of rotor clouds, the structure of the lifting condensation level, extreme wind shear, and the life cycle of clouds in lee waves. When placed into context with the existing literature that originated from the T-REX field campaign, our results complement and expand our understanding of the complex dynamics observed in a variety of different lee wave settings.
Tu, Junchao; Zhang, Liyan
2018-01-12
A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using the extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it costs much less training time and can provide a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. Calibration, projection and 3D reconstruction experiments are conducted to test the proposed method, and good results are obtained.
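A minimal sketch of the ELM training step described above, assuming a generic tanh hidden layer (in the paper, X would hold the GLS control signals and Y the measured beam direction vectors):

```python
import numpy as np

def elm_train(X, Y, n_hidden, rng=np.random.default_rng(0)):
    """Extreme learning machine: random fixed hidden layer, closed-form
    least-squares solution for the output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases (never trained)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output weights in closed form
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```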
Using Stereo Vision to Support the Automated Analysis of Surveillance Videos
NASA Astrophysics Data System (ADS)
Menze, M.; Muhle, D.
2012-07-01
Video surveillance systems are no longer a collection of independent cameras, manually controlled by human operators. Instead, smart sensor networks are being developed, able to fulfil certain tasks on their own and thus support security personnel through automated analyses. One well-known task is the derivation of people's positions on a given ground plane from monocular video footage. An improved accuracy for the ground position as well as a more detailed representation of single salient people can be expected from a stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the accordingly good intersection geometry is not utilised. In this paper we present a stereoscopic approach, working on overlapping views of standard pan-tilt-zoom cameras which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimations of people's position and height are retrieved. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.
Finger tracking for hand-held device interface using profile-matching stereo vision
NASA Astrophysics Data System (ADS)
Chang, Yung-Ping; Lee, Dah-Jye; Moore, Jason; Desai, Alok; Tippetts, Beau
2013-01-01
Hundreds of millions of people use hand-held devices frequently and control them by touching the screen with their fingers. If this method of operation is used by people who are driving, the probability of deaths and accidents occurring substantially increases. With a non-contact control interface, people do not need to touch the screen. As a result, people will not need to pay as much attention to their phones and thus will drive more safely than they would otherwise. This interface can be achieved with real-time stereo vision. A novel Intensity Profile Shape-Matching Algorithm is able to obtain 3-D information from a pair of stereo images in real time. While this algorithm does have a trade-off between accuracy and processing speed, its results prove the accuracy is sufficient for the practical use of recognizing human poses and tracking finger movement. By choosing an interval of disparity, an object at a certain distance range can be segmented; in other words, we detect the object by its distance to the cameras. The advantage of this profile shape-matching algorithm is that detection of correspondences relies on the shape of the profile and not on intensity values, which are subject to lighting variations. Based on the resulting 3-D information, the movement of fingers in space from a specific distance can be determined. Finger location and movement can then be analyzed for non-contact control of hand-held devices.
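A minimal sketch of the disparity-interval segmentation idea, with hypothetical disparity bounds (the paper's matching algorithm itself is not reproduced):

```python
import numpy as np

def segment_by_disparity(disparity, d_min, d_max):
    """Keep pixels whose disparity falls in [d_min, d_max]; since disparity is
    inversely proportional to depth, this selects a band of distances."""
    return (disparity >= d_min) & (disparity <= d_max)

# Hypothetical values: a hand held 30-60 cm from the cameras might map to
# disparities of roughly 40-80 pixels, depending on baseline and focal length.
mask = segment_by_disparity(np.random.rand(480, 640) * 100, 40, 80)
```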
Tran, Truyet T.; Craven, Ashley P.; Leung, Tsz-Wing; Chat, Sandy W.; Levi, Dennis M.
2016-01-01
Neurons in the early visual cortex are finely tuned to different low-level visual features, forming a multi-channel system analysing the visual image formed on the retina in a parallel manner. However, little is known about the potential ‘cross-talk’ among these channels. Here, we systematically investigated whether stereoacuity, over a large range of target spatial frequencies, can be enhanced by perceptual learning. Using narrow-band visual stimuli, we found that practice with coarse (low spatial frequency) targets substantially improves performance, and that the improvement spreads from coarse to fine (high spatial frequency) three-dimensional perception, generalizing broadly across untrained spatial frequencies and orientations. Notably, we observed an asymmetric transfer of learning across the spatial frequency spectrum. The bandwidth of transfer was broader when training was at a high spatial frequency than at a low spatial frequency. Stereoacuity training is most beneficial when trained with fine targets. This broad transfer of stereoacuity learning contrasts with the highly specific learning reported for other basic visual functions. We also revealed strategies to boost learning outcomes ‘beyond-the-plateau’. Our investigations contribute to understanding the functional properties of the network subserving stereovision. The ability to generalize may provide a key principle for restoring impaired binocular vision in clinical situations. PMID:26909178
A novel device for accurate and efficient testing for vision-threatening diabetic retinopathy.
Maa, April Y; Feuer, William J; Davis, C Quentin; Pillow, Ensa K; Brown, Tara D; Caywood, Rachel M; Chasan, Joel E; Fransen, Stephen R
2016-04-01
To evaluate the performance of the RETeval device, a handheld instrument using flicker electroretinography (ERG) and pupillography on undilated subjects with diabetes, to detect vision-threatening diabetic retinopathy (VTDR). Performance was measured using a cross-sectional, single-arm, non-interventional, multi-site study with Early Treatment Diabetic Retinopathy Study 7-standard-field, stereo, color fundus photography as the gold standard. The 468 subjects were randomized to a calibration phase (80%), whose ERG and pupillary waveforms were used to formulate an equation correlating with the presence of VTDR, and a validation phase (20%), used to independently validate that equation. The primary outcome was the prevalence-corrected area under the receiver operating characteristic (ROC) curve for the detection of VTDR. The area under the ROC curve was 0.86 for VTDR. With a sensitivity of 83%, the specificity was 78% and the negative predictive value was 99%. The average testing time was 2.3 min. With a VTDR prevalence similar to that in the U.S., the RETeval device will identify about 75% of the population as not having VTDR with 99% accuracy. The device is simple to use, does not require pupil dilation, and has a short testing time. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
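The reported population-level figures follow from standard screening arithmetic; a minimal sketch assuming a VTDR prevalence of roughly 5% (an assumption on our part; the abstract does not state the prevalence):

```python
def screening_stats(sens, spec, prevalence):
    """Fraction of the population testing negative, and the negative
    predictive value, for a test with the given sensitivity/specificity."""
    true_neg  = (1 - prevalence) * spec
    false_neg = prevalence * (1 - sens)
    neg_rate  = true_neg + false_neg
    npv = true_neg / neg_rate
    return neg_rate, npv

neg_rate, npv = screening_stats(0.83, 0.78, 0.05)  # assumed 5% prevalence
print(f"{neg_rate:.0%} screened out, NPV = {npv:.1%}")  # ~75%, ~99%
```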
Dimensional coordinate measurements: application in characterizing cervical spine motion
NASA Astrophysics Data System (ADS)
Zheng, Weilong; Li, Linan; Wang, Shibin; Wang, Zhiyong; Shi, Nianke; Xue, Yuan
2014-06-01
The cervical spine is a complicated part of the human body, and the form of its movement is diverse. The movements of the vertebral segments are three-dimensional, reflected in changes of the angle between joints and in displacements in different directions. Under normal conditions, the cervical spine can flex, extend, laterally flex and rotate. Because there is no relative motion between measuring marks fixed on one segment of a cervical vertebra, a vertebra with three marked points can be treated as a rigid body. A body's motion in space can be decomposed into translational movement and rotational movement around a base point. This study concerns the calculation of the three-dimensional coordinates of marked points attached to the human cervical spine by an optical method. These measurements then allow the calculation of motion parameters for every spine segment. For this study, we chose a three-dimensional measurement method based on binocular stereo vision. The object with marked points is placed in front of the CCD cameras. With each shot, we obtain two parallax images taken from the different cameras. According to the principle of binocular vision, three-dimensional measurements can be realized. The cameras are erected in parallel. This paper describes the layout of the experimental system and the mathematical model used to obtain the coordinates.
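A minimal sketch of the parallel-camera triangulation underlying such a system, with hypothetical calibration values (pixel coordinates are measured from the principal point):

```python
import numpy as np

def triangulate_parallel(xl, xr, y, f, baseline):
    """3D point from a rectified, parallel-axis stereo pair.
    xl, xr: x-coordinates (pixels) of the mark in the left/right image,
    y: common image row, f: focal length (pixels), baseline: camera separation."""
    d = xl - xr                      # disparity
    Z = f * baseline / d             # depth
    X = xl * Z / f
    Y = y * Z / f
    return np.array([X, Y, Z])

# Hypothetical calibration: f = 1200 px, baseline = 0.2 m -> Z = 1.0 m here.
print(triangulate_parallel(250.0, 10.0, 40.0, 1200.0, 0.2))
```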
Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors
Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis
2010-01-01
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930
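A toy sketch of a Rao-Blackwellized particle filter with a single landmark and a deliberately simplified linear-Gaussian measurement model (the paper's filter uses stereo observations and visual descriptors; all values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100                                        # number of particles
poses = np.zeros((N, 2))                       # robot (x, y) per particle
mu  = np.tile(np.array([5.0, 2.0]), (N, 1))    # landmark EKF mean per particle
cov = np.tile(np.eye(2), (N, 1, 1))            # landmark EKF covariance per particle
w = np.ones(N) / N
R = np.eye(2) * 0.05                           # measurement noise covariance

def step(u, z):
    """Propagate poses with noisy odometry u; EKF-update the landmark with a
    relative measurement z = landmark - pose + noise; reweight and resample."""
    global poses, mu, cov, w
    poses += u + rng.normal(scale=0.05, size=poses.shape)
    for i in range(N):
        innov = z - (mu[i] - poses[i])
        S = cov[i] + R                         # innovation covariance
        K = cov[i] @ np.linalg.inv(S)          # Kalman gain
        mu[i] = mu[i] + K @ innov
        cov[i] = (np.eye(2) - K) @ cov[i]
        w[i] *= np.exp(-0.5 * innov @ np.linalg.solve(S, innov)) \
                / np.sqrt(np.linalg.det(2 * np.pi * S))
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                # multinomial resampling
    poses, mu, cov = poses[idx], mu[idx], cov[idx]
    w[:] = 1.0 / N

step(np.array([0.1, 0.0]), np.array([4.9, 2.0]))
```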
Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications.
Shang, Fanhua; Cheng, James; Liu, Yuanyuan; Luo, Zhi-Quan; Lin, Zhouchen
2017-09-04
The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven to be effective priors for many applications such as background modeling, photometric stereo and image alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Together with the analytic solutions to Lp-norm minimization for two specific values of p, i.e., p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and 2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both of our methods yield more accurate solutions than original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., moving object detection, image alignment and inpainting, and show that our methods usually outperform the state-of-the-art methods.
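For reference, the Schatten-p quasi-norm and the classical nuclear-norm factorization identity that such bilinear penalties generalize (the paper's exact double nuclear and Frobenius/nuclear penalties, with their constants, are defined in the source):

```latex
% Schatten-p quasi-norm (0 < p < 1) of X with singular values \sigma_i(X):
\[
  \|X\|_{S_p} = \Big(\sum_i \sigma_i(X)^p\Big)^{1/p},
\]
% and the classical factorization identity for the nuclear norm (p = 1):
\[
  \|X\|_* \;=\; \min_{X = UV^{\top}} \|U\|_F \,\|V\|_F
  \;=\; \min_{X = UV^{\top}} \tfrac{1}{2}\big(\|U\|_F^2 + \|V\|_F^2\big).
\]
```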
The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude
Adams, Wendy J.; Elder, James H.; Graf, Erich W.; Leyland, Julian; Lugtigheid, Arthur J.; Muryy, Alexander
2016-01-01
Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data (ii) high-dynamic range spherical imagery and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude. PMID:27782103
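A fixed-scale PCA sketch of surface-normal estimation from range data (a generic method; the paper's novel adaptive scale selection is not reproduced here):

```python
import numpy as np
from scipy.spatial import cKDTree

def normals_pca(points, k=20):
    """Estimate a surface normal per point as the direction of least variance
    (smallest singular vector) of its k-nearest-neighbor neighborhood."""
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)   # centered neighborhood
        _, _, Vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = Vt[-1]                             # least-variance direction
    return normals
```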
Gong, Yuanzheng; Seibel, Eric J.
2017-01-01
Rapid development in the performance of sophisticated optical components, digital image sensors, and computer abilities, along with decreasing costs, has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact operation, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are suitable for exterior rather than internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated by a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection. PMID:28286351
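Once correspondences between two point clouds are known, the rigid alignment step can be sketched with the standard Kabsch/SVD solution (a generic building block, not necessarily the authors' registration algorithm):

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t aligning matched points
    P -> Q (Kabsch algorithm). P, Q: (n, 3) arrays of corresponding points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```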
NASA Astrophysics Data System (ADS)
Schroeder, P. C.; Luhmann, J. G.; Davis, A. J.; Russell, C. T.
2006-12-01
STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long duration, detailed observations of 1 AU magnetic field structures, plasma and suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The PLASTIC instrument takes plasma ion composition measurements completing STEREO's comprehensive in-situ perspective. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICME) and solar wind structures to CMEs and coronal holes observed at the Sun. The uniqueness of the STEREO mission requires novel data analysis tools and techniques to take advantage of the mission's full scientific potential. An interactive browser with the ability to create publication-quality plots has been developed which integrates STEREO's in-situ data with data from a variety of other missions including WIND and ACE. Also, an application program interface (API) is provided allowing users to create custom software that ties directly into STEREO's data set. The API allows for more advanced forms of data mining than currently available through most web-based data services. A variety of data access techniques and the development of cross-spacecraft data analysis tools allow the larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.
NASA Technical Reports Server (NTRS)
2007-01-01
There was a transit of the Moon across the face of the Sun - but it could not be seen from Earth. This sight was visible only from the STEREO-B spacecraft in its orbit about the Sun, trailing behind the Earth. NASA's STEREO mission consists of two spacecraft launched in October 2006 to study solar storms. The transit started at 1:56 am EST and continued for 12 hours, until 1:57 pm EST. STEREO-B is currently about 1 million miles from the Earth, 4.4 times farther away from the Moon than we are on Earth. As a result, the Moon appeared 4.4 times smaller than what we are used to. This is still, however, much larger than, say, the planet Venus appeared when it transited the Sun as seen from Earth in 2004. This alignment of STEREO-B and the Moon was not just due to luck. It was arranged with a small tweak to STEREO-B's orbit last December. The transit is quite useful to STEREO scientists for measuring the focus and the amount of scattered light in the STEREO imagers and for determining the pointing of the STEREO coronagraphs. The Sun as it appears in these images, and in each frame of the movie, is a composite of nearly simultaneous images in four different wavelengths of extreme ultraviolet light that were separated into color channels and then recombined with some level of transparency for each.
NASA Technical Reports Server (NTRS)
Edmonds, Karina
2008-01-01
This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware using the same API (application-program interface), and has the ability to simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that simply accomplishes the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, a 3D cursor, or overlays, all of which can be built using this toolkit.
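The anaglyph color-simulation rule described above reduces to a simple channel recombination; a sketch assuming RGB channel order (the toolkit itself is Java/OpenGL, so this is only an illustration of the rule):

```python
import numpy as np

def color_anaglyph(left_rgb, right_rgb):
    """Simulated color stereo in anaglyph mode: red band from the left image,
    green/blue bands from the right image (uint8 HxWx3 arrays)."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # assumes channel order R, G, B
    return out
```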
Neukum, G; Jaumann, R; Hoffmann, H; Hauber, E; Head, J W; Basilevsky, A T; Ivanov, B A; Werner, S C; van Gasselt, S; Murray, J B; McCord, T
2004-12-23
The large-area coverage at a resolution of 10-20 metres per pixel in colour and three dimensions with the High Resolution Stereo Camera Experiment on the European Space Agency Mars Express Mission has made it possible to study the time-stratigraphic relationships of volcanic and glacial structures in unprecedented detail and give insight into the geological evolution of Mars. Here we show that calderas on five major volcanoes on Mars have undergone repeated activation and resurfacing during the last 20 per cent of martian history, with phases of activity as young as two million years, suggesting that the volcanoes are potentially still active today. Glacial deposits at the base of the Olympus Mons escarpment show evidence for repeated phases of activity as recently as about four million years ago. Morphological evidence is found that snow and ice deposition on the Olympus construct at elevations of more than 7,000 metres led to episodes of glacial activity at this height. Even now, water ice protected by an insulating layer of dust may be present at high altitudes on Olympus Mons.
Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romps, David; Oktem, Rusen
2017-10-31
The three pairs of stereo camera setups aim to provide synchronized and stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
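A minimal sketch of two-view triangulation from a calibrated stereo pair, using OpenCV (the projection matrices and pixel matches are assumed to come from the stereo calibration and matching steps; this is not the handbook's processing code):

```python
import cv2
import numpy as np

def triangulate(P1, P2, pts1, pts2):
    """P1, P2: 3x4 projection matrices of the two cameras of one stereo pair;
    pts1, pts2: matched pixel coordinates as 2xN float arrays."""
    Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous points
    return (Xh[:3] / Xh[3]).T                       # Nx3 Euclidean points
```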
Development of a stereo 3-D pictorial primary flight display
NASA Technical Reports Server (NTRS)
Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille
1989-01-01
Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth perceptions which might otherwise be lacking. In addition, the third dimension could also be used as an additional dimension along which information can be encoded. Historically, stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste. In the last example, the source of the stereo images has generally been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described. The applicability of stereo 3-D displays to aerospace crew stations, to meet the anticipated needs of the 2000 to 2020 time frame, is investigated. Although the actual equipment that could be used in an aerospace vehicle is not currently available, the lab research is necessary to determine where stereo 3-D enhances the display of information and how the displays should be formatted.
Epipolar Rectification for CARTOSAT-1 Stereo Images Using SIFT and RANSAC
NASA Astrophysics Data System (ADS)
Akilan, A.; Sudheer Reddy, D.; Nagasubramanian, V.; Radhadevi, P. V.; Varadan, G.
2014-11-01
Cartosat-1 provides stereo images with a spatial resolution of 2.5 m and high geometric fidelity. The stereo cameras on the spacecraft have look angles of +26 degrees and -5 degrees, respectively, which yields effective along-track stereo. Any DSM generation algorithm can use the stereo images for accurate 3D reconstruction and measurement of the ground. Dense match points and pixel-wise matching are prerequisites in DSM generation to capture discontinuities and occlusions for accurate 3D modelling applications. Epipolar image matching reduces the computational effort from two-dimensional area searches to one-dimensional ones. Thus, epipolar rectification is preferred as a pre-processing step for accurate DSM generation. In this paper we explore a method based on SIFT and RANSAC for epipolar rectification of Cartosat-1 stereo images.
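A hedged sketch of a SIFT-plus-RANSAC rectification pipeline built from standard OpenCV components (parameter values are illustrative; the paper's exact procedure may differ):

```python
import cv2
import numpy as np

def rectify_uncalibrated(img1, img2):
    """Estimate rectifying homographies H1, H2 for a grayscale image pair."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(
        pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1],
        F, img1.shape[::-1])
    return H1, H2
```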
NASA Astrophysics Data System (ADS)
Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.
1990-10-01
Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are push-broom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.
The research of edge extraction and target recognition based on inherent feature of objects
NASA Astrophysics Data System (ADS)
Xie, Yu-chan; Lin, Yu-chi; Huang, Yin-guo
2008-03-01
Current research on computer vision often needs specific techniques for particular problems. Little use has been made of high-level aspects of computer vision, such as three-dimensional (3D) object recognition, that are appropriate for large classes of problems and situations. In particular, high-level vision often focuses mainly on the extraction of symbolic descriptions and pays little attention to the speed of processing. In order to extract and recognize targets intelligently and rapidly, in this paper we develop a new 3D target recognition method based on the inherent features of objects, taking a cuboid as the model. On the basis of an analysis of the cuboid's natural contour and gray-level distribution characteristics, an overall fuzzy evaluation technique is utilized to recognize and segment the target. The Hough transform is then used to extract and match the model's main edges, and finally the target edges are reconstructed by stereo techniques. There are three major contributions in this paper. First, the correspondences between the parameters of the cuboid model's straight edge lines in the image domain and in the transform domain are summarized; with these, the needless computations and searches in Hough transform processing can be greatly reduced and the efficiency improved. Second, since prior knowledge of the cuboid contour's geometry is already available, the intersections of the extracted component edges are taken, and the geometry of candidate edge matches is assessed based on these intersections rather than on the extracted edges themselves. The outlines are therefore enhanced and the noise is suppressed. Finally, a 3-D target recognition method is proposed. Compared with other recognition methods, this new method has a quick response time and can be achieved with high-level computer vision. The method presented here can be used widely in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in object tracking, port AGVs, and robotics. The results of simulation experiments and theoretical analysis demonstrate that the proposed method suppresses noise effectively, extracts target edges robustly, and achieves real-time performance.
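A minimal sketch of straight-edge extraction with a probabilistic Hough transform, of the kind such a method builds on (OpenCV; all thresholds are illustrative, not the paper's values):

```python
import cv2
import numpy as np

def main_edges(gray):
    """Canny edges followed by a probabilistic Hough transform to extract the
    dominant straight edges, e.g. for matching against a cuboid model."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=40, maxLineGap=5)
    return [] if lines is None else lines[:, 0]   # one (x1, y1, x2, y2) per line
```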
A cognitive approach to vision for a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Funk, Christopher; Lyons, Damian
2013-05-01
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both static and moving objects.
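A minimal sketch of the local-Gaussian comparison between the real and virtual camera images (sigma and threshold values are assumptions; inputs are grayscale images):

```python
import cv2

def error_mask(real_img, virtual_img, sigma=5, thresh=25):
    """Compare the camera input with the rendered virtual view after local
    Gaussian smoothing; large residuals mark regions the 3D model has not
    yet explained, guiding the next fixation."""
    a = cv2.GaussianBlur(real_img, (0, 0), sigma)
    b = cv2.GaussianBlur(virtual_img, (0, 0), sigma)
    diff = cv2.absdiff(a, b)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```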
Estimation of permanent noise-induced hearing loss in an urban setting.
Lewis, Ryan C; Gershon, Robyn R M; Neitzel, Richard L
2013-06-18
The potential burden of noise-induced permanent threshold shift (NIPTS) in U.S. urban settings is not well-characterized. We used ANSI S3.44-1996 to estimate NIPTS for a sample of 4585 individuals from New York City (NYC) and performed a forward stepwise logistic regression analysis to identify predictors of NIPTS >10 dB. The average individual is projected to develop a small NIPTS when averaged across 1000-4000 Hz for 1- to 20-year durations. For some individuals, NIPTS is expected to be substantial (>25 dB). At 4000 Hz, a greater number of individuals are at risk of NIPTS from MP3 players and stereos, but risk for the greatest NIPTS is for those with high occupational and episodic nonoccupational (e.g., power tool use) exposures. Employment sector and time spent listening to MP3 players and stereos and participating in episodic nonoccupational activities associated with excessive noise levels increased the odds of NIPTS >10 dB at 4000 Hz for 20-year durations. Our results indicate that the risk of NIPTS may be substantial for NYC and perhaps other urban settings. Noise exposures from "noisy" occupational and episodic nonoccupational activities and MP3 players and stereos are important risk factors and should be a priority for public health interventions.
The STEREO Mission: A New Approach to Space Weather Research
NASA Technical Reports Server (NTRS)
Kaiser, michael L.
2006-01-01
With the launch of the twin STEREO spacecraft in July 2006, a new capability will exist for both real-time space weather predictions and for advances in space weather research. Whereas previous spacecraft monitors of the sun such as ACE and SOHO have been essentially on the sun-Earth line, the STEREO spacecraft will be in 1 AU orbits around the sun on either side of Earth and will be viewing the solar activity from distinctly different vantage points. As seen from the sun, the two spacecraft will separate at a rate of 45 degrees per year, with Earth bisecting the angle. The instrument complement on the two spacecraft will consist of a package of optical instruments capable of imaging the sun in the visible and ultraviolet from essentially the surface to 1 AU and beyond, a radio burst receiver capable of tracking solar eruptive events from an altitude of 2-3 Rs to 1 AU, and a comprehensive set of fields and particles instruments capable of measuring in situ solar events such as interplanetary magnetic clouds. In addition to normal daily recorded data transmissions, each spacecraft is equipped with a real-time beacon that will provide 1 to 5 minute snapshots or averages of the data from the various instruments. This beacon data will be received by NOAA and NASA tracking stations and then relayed to the STEREO Science Center located at Goddard Space Flight Center in Maryland where the data will be processed and made available within a goal of 5 minutes of receipt on the ground. With STEREO's instrumentation and unique view geometry, we believe considerable improvement can be made in space weather prediction capability as well as improved understanding of the three-dimensional structure of solar transient events.
NASA Astrophysics Data System (ADS)
Beyer, Ross A.; Archinal, B.; Li, R.; Mattson, S.; Moratto, Z.; McEwen, A.; Oberst, J.; Robinson, M.
2009-09-01
The Lunar Reconnaissance Orbiter Camera (LROC) will obtain two types of multiple overlapping coverage to derive terrain models of the lunar surface. LROC has two Narrow Angle Cameras (NACs), working jointly to provide a wider (in the cross-track direction) field of view, as well as a Wide Angle Camera (WAC). LRO's orbit precesses, and the same target can be viewed at different solar azimuth and incidence angles providing the opportunity to acquire `photometric stereo' in addition to traditional `geometric stereo' data. Geometric stereo refers to images acquired by LROC with two observations at different times. They must have different emission angles to provide a stereo convergence angle such that the resultant images have enough parallax for a reasonable stereo solution. The lighting at the target must not be radically different. If shadows move substantially between observations, it is very difficult to correlate the images. The majority of NAC geometric stereo will be acquired with one nadir and one off-pointed image (20 degree roll). Alternatively, pairs can be obtained with two spacecraft rolls (one to the left and one to the right) providing a stereo convergence angle up to 40 degrees. Overlapping WAC images from adjacent orbits can be used to generate topography of near-global coverage at kilometer-scale effective spatial resolution. Photometric stereo refers to multiple-look observations of the same target under different lighting conditions. LROC will acquire at least three (ideally five) observations of a target. These observations should have near identical emission angles, but with varying solar azimuth and incidence angles. These types of images can be processed via various methods to derive single pixel resolution topography and surface albedo. The LROC team will produce some topographic models, but stereo data collection is focused on acquiring the highest quality data so that such models can be generated later.
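The photometric-stereo idea can be sketched with the standard Lambertian least-squares solution (a generic formulation, not the LROC team's processing):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Lambertian photometric stereo: per pixel, solve I = L @ (albedo * n).
    images: (m, h, w) stack under m lighting conditions with near-identical
    viewing geometry; light_dirs: (m, 3) unit vectors toward the sun."""
    m, h, w = images.shape
    I = images.reshape(m, -1)                           # m x (h*w) intensities
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # 3 x (h*w) scaled normals
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-9)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)
```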
NASA Astrophysics Data System (ADS)
Liewer, P. C.; Qiu, J.; Lindsey, C.
2017-10-01
Seismic maps of the Sun's far hemisphere, computed from Doppler data from the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) are now being used routinely to detect strong magnetic regions on the far side of the Sun (http://jsoc.stanford.edu/data/farside/). To test the reliability of this technique, the helioseismically inferred active region detections are compared with far-side observations of solar activity from the Solar TErrestrial RElations Observatory (STEREO), using brightness in extreme-ultraviolet light (EUV) as a proxy for magnetic fields. Two approaches are used to analyze nine months of STEREO and HMI data. In the first approach, we determine whether new large east-limb active regions are detected seismically on the far side before they appear Earth side and study how the detectability of these regions relates to their EUV intensity. We find that while there is a range of EUV intensities for which far-side regions may or may not be detected seismically, there appears to be an intensity level above which they are almost always detected and an intensity level below which they are never detected. In the second approach, we analyze concurrent extreme-ultraviolet and helioseismic far-side observations. We find that 100% (22) of the far-side seismic regions correspond to an extreme-ultraviolet plage; 95% of these either became a NOAA-designated magnetic region when reaching the east limb or were one before crossing to the far side. A low but significant correlation is found between the seismic signature strength and the EUV intensity of a far-side region.
Mastcam Stereo Analysis and Mosaics (MSAM)
NASA Astrophysics Data System (ADS)
Deen, R. G.; Maki, J. N.; Algermissen, S. S.; Abarca, H. E.; Ruoff, N. A.
2017-06-01
Describes a new PDART task that will generate stereo analysis products (XYZ, slope, etc.), terrain meshes, and mosaics (stereo, ortho, and Mast/Nav combos) for all MSL Mastcam images and deliver the results to PDS.
BRDF invariant stereo using light transport constancy.
Wang, Liang; Yang, Ruigang; Davis, James E
2007-09-01
Nearly all existing methods for stereo reconstruction assume that scene reflectance is Lambertian and make use of brightness constancy as a matching invariant. We introduce a new invariant for stereo reconstruction called light transport constancy (LTC), which allows completely arbitrary scene reflectance (bidirectional reflectance distribution functions (BRDFs)). This invariant can be used to formulate a rank constraint on multiview stereo matching when the scene is observed by several lighting configurations in which only the lighting intensity varies. In addition, we show that this multiview constraint can be used with as few as two cameras and two lighting configurations. Unlike previous methods for BRDF invariant stereo, LTC does not require precisely configured or calibrated light sources or calibration objects in the scene. Importantly, the new constraint can be used to provide BRDF invariance to any existing stereo method whenever appropriate lighting variation is available.
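A schematic statement of the rank constraint, under the stated assumption that only the source intensity varies between the m lighting configurations (our notation, not the paper's):

```latex
% If configuration k scales the source by \alpha_k, intensities at a true
% correspondence (x_L, x_R) are \alpha_k t_L and \alpha_k t_R, so
\[
  M =
  \begin{pmatrix}
    I_1(x_L) & I_1(x_R) \\
    \vdots   & \vdots   \\
    I_m(x_L) & I_m(x_R)
  \end{pmatrix}
  =
  \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_m \end{pmatrix}
  \begin{pmatrix} t_L & t_R \end{pmatrix},
  \qquad \operatorname{rank}(M) = 1 .
\]
```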
Top of Mars Rover Curiosity Remote Sensing Mast
2011-04-06
The remote sensing mast on NASA's Mars rover Curiosity holds two science instruments for studying the rover's surroundings and two stereo navigation cameras for use in driving the rover and planning rover activities.
Precision Relative Positioning for Automated Aerial Refueling from a Stereo Imaging System
2015-03-01
Thesis by Kyle P. Werner, 2Lt, USAF (AFIT-ENG-MS-15-M-048), presented to the Faculty, Department of Electrical and Computer Engineering, Graduate School of ... Approved for public release; distribution unlimited.
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2017-08-01
Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrate the effectiveness and accuracy of the proposed technique.
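A sketch of the channel separation with a simple linear crosstalk correction (the 2x2 mixing values are assumptions; the paper describes its own correction method):

```python
import numpy as np

def unmix_channels(color_img, M_inv):
    """Separate the red- and blue-path sub-images from one color frame.
    color_img: (h, w, 3) RGB array; M_inv: 2x2 inverse of a measured
    crosstalk matrix between the red and blue channels."""
    rb = color_img[..., [0, 2]].astype(float)   # red and blue channels
    unmixed = rb @ M_inv.T                       # undo the channel mixing
    red_view, blue_view = unmixed[..., 0], unmixed[..., 1]
    return red_view, blue_view

# Hypothetical crosstalk: 8% leakage each way between red and blue.
M = np.array([[1.00, 0.08],
              [0.08, 1.00]])
# red_view, blue_view = unmix_channels(frame, np.linalg.inv(M))
```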
Detection and 3d Modelling of Vehicles from Terrestrial Stereo Image Pairs
NASA Astrophysics Data System (ADS)
Coenen, M.; Rottensteiner, F.; Heipke, C.
2017-05-01
The detection and pose estimation of vehicles plays an important role for automated and autonomous moving objects, e.g. in autonomous driving environments. We tackle that problem on the basis of street-level stereo images obtained from a moving vehicle. Processing every stereo pair individually, our approach is divided into two subsequent steps: the vehicle detection and the modelling step. For the detection, we make use of the 3D stereo information and incorporate geometric assumptions on vehicle-inherent properties in a generic 3D object detection applied first. By combining our generic detection approach with a state-of-the-art vehicle detector, we are able to achieve satisfying detection results with values for completeness and correctness of more than 86%. By fitting an object-specific vehicle model into the vehicle detections, we are able to reconstruct the vehicles in 3D and to derive pose estimations as well as shape parameters for each vehicle. To deal with the intra-class variability of vehicles, we make use of a deformable 3D active shape model learned from 3D CAD vehicle data in our model fitting approach. While we achieve encouraging values of up to 67.2% for correct position estimations, we face larger problems concerning the orientation estimation. The evaluation is done using the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012).
Leisure Activity Participation of Elderly Individuals with Low Vision.
ERIC Educational Resources Information Center
Heinemann, Allen W.
1988-01-01
Studied low vision elderly clinic patients (N=63) who reported participation in six categories of leisure activities currently and at onset of vision loss. Found subjects reported significant declines in five of six activity categories. Found prior activity participation was related to current participation only for active crafts, participatory…
Binocular stereo-navigation for three-dimensional thoracoscopic lung resection.
Kanzaki, Masato; Isaka, Tamami; Kikkawa, Takuma; Sakamoto, Kei; Yoshiya, Takehito; Mitsuboshi, Shota; Oyama, Kunihiro; Murasugi, Masahide; Onuki, Takamasa
2015-05-08
This study investigated the efficacy of binocular stereo-navigation during three-dimensional (3-D) thoracoscopic sublobar resection (TSLR). From July 2001, the authors' department began to use a virtual 3-D pulmonary model on a personal computer (PC) for preoperative simulation before thoracoscopic lung resection and for intraoperative navigation during operation. From 120 1-mm thin-slice high-resolution computed tomography (HRCT) scan images of the tumor and hilum, the homemade software CTTRY allowed surgeons to mark pulmonary arteries, veins, bronchi, and tumor on the HRCT images manually. The location and thickness of pulmonary vessels and bronchi were rendered as cylinders of diverse sizes. With the resulting numerical data, a 3-D image was reconstructed by Metasequoia shareware. Subsequently, the data of the reconstructed 3-D images were converted to Autodesk data, which appeared on a stereoscopic-vision display. Surgeons wearing 3-D polarized glasses performed 3-D TSLR. The patients consisted of 5 men and 5 women, ranging in age from 65 to 84 years. The clinical diagnoses were primary lung cancer in 6 cases and a solitary metastatic lung tumor in 4 cases. Eight single segmentectomies, one bi-segmentectomy, and one bi-subsegmentectomy were performed. Hilar lymphadenectomy with mediastinal lymph node sampling was performed in the 6 primary lung cancers, whereas the four patients with metastatic lung tumors underwent resection without lymphadenectomy. The operation time and estimated blood loss ranged from 125 to 333 min and from 5 to 187 g, respectively. There were no intraoperative complications and no conversions to open thoracotomy or lobectomy. The postoperative courses of eight patients were uneventful; the other two patients had a prolonged lung air leak. The drainage duration and hospital stay ranged from 2 to 13 days and from 8 to 19 days, respectively. The tumor histology of the primary lung cancers was adenocarcinoma in 5 and squamous cell carcinoma in 1. All primary lung cancers were at stage IA. The organs of origin of the metastatic pulmonary tumors were kidney, bladder, breast, and rectum. No patients had macroscopically positive surgical margins. Binocular stereo-navigation was able to identify the bronchovascular structures accurately and was suitable for performing TSLR with a sufficient margin for small pulmonary tumors.
Terrestrial multi-view photogrammetry for landslide monitoring
NASA Astrophysics Data System (ADS)
Stumpf, A.; Malet, J.; Allemand, P.; Skupinski, G.; Pierrot-Deseilligny, M.
2013-12-01
Multi-view stereo (MVS) surface reconstruction from large photo collections is being increasingly used for geoscience applications, and a number of different software solutions and processing pipelines have been suggested. Open source libraries to perform feature point extraction, pose estimation, bundle adjustment and dense matching are available, providing high-quality results at low cost and transparency of the implemented algorithms. Within the computer vision community, benchmark datasets with toy examples and architectural scenes are frequently used to evaluate dense matching algorithms, but relatively few studies have addressed the evaluation of complete processing pipelines for complex natural landscapes such as landslides developed in high mountain terrain. In order to obtain surface displacement maps of an active landslide (Super-Sauze, Southern French Alps) from multi-temporal terrestrial photographs over a period of three years, this work targeted the evaluation of three different non-commercial processing pipelines. The tested packages include VisualSfM [1], CMVS-PMVS [2], and Apero and MicMac [URL]. The image acquisition focused on either subparts of the landslide (toe, main scarp) or targeted the reconstruction of a global model of the entire landslide. All images were processed with three different pipelines, namely VisualSfM + CMVS-PMVS, Apero + CMVS-PMVS and Apero + MicMac, and the resulting point clouds were evaluated against terrestrial and airborne LiDAR. Our results show that all multi-view stereo pipelines provide useful results to quantify surface displacement at accuracies between 1-10 cm, depending on the acquisition geometry and the object distance. For pose estimation and bundle adjustment, Apero is the more accurate and versatile tool, allowing the use of more sophisticated lens models and the direct integration of ground control points in the bundle adjustment. The dense matching with MicMac enables the reconstruction of denser point clouds, with fewer outliers, better spatial coverage and lower computational costs, whereas CMVS-PMVS requires less manual tuning and produces fewer artifacts at discontinuities and in areas with very low incidence angles. Change detection among the multi-temporal photogrammetric point clouds allowed us to measure surface displacement rates greater than 1 m.yr-1 at the landslide toe, and greater than 3 m.yr-1 in the uppermost active landslide part, indicating an important mass accumulation in the central part. Large, low-frequency rockfalls dominate the mass wasting process at the main scarp compared to erosive retrogression. The study demonstrates that MVS has great potential to replace LiDAR surveys for operational landslide monitoring, providing comparable accuracies at significantly lower logistic and material costs. However, an optimal acquisition geometry and parameterization of the processing algorithms are important factors for its successful application, and some recommendations, potential pitfalls and limitations are highlighted. [1] C. Wu, Towards Linear-time Incremental Structure from Motion, Internat. Conf. on 3D Vision, University of Washington, Seattle, USA, 2013. [2] Y. Furukawa and J. Ponce, "Accurate, Dense, and Robust Multiview Stereopsis," IEEE Transactions on Pattern Analysis and Machine Intelligence, 32, pp. 1362-1376, 2010.
2006-10-11
KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers check the clearance of the STEREO spacecraft as it is moved away from the opening. In the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton
2006-10-11
KENNEDY SPACE CENTER, FLA. - On Launch Pad 17-B at Cape Canaveral Air Force Station, the STEREO spacecraft is lifted off its transporter alongside the mobile service tower. In the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton
2006-10-11
KENNEDY SPACE CENTER, FLA. - Against a pre-dawn sky on Launch Pad 17-B at Cape Canaveral Air Force Station, the STEREO spacecraft is lifted up toward the platform on the mobile service tower. In the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton
2006-10-11
KENNEDY SPACE CENTER, FLA. - Viewed from inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers watch the progress of the STEREO spacecraft being lifted. Once in the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton
2006-10-11
KENNEDY SPACE CENTER, FLA. - On Launch Pad 17-B at Cape Canaveral Air Force Station, workers begin maneuvering the STEREO spacecraft into the mobile service tower. Once in the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton
2006-10-11
KENNEDY SPACE CENTER, FLA. - On Launch Pad 17-B at Cape Canaveral Air Force Station, workers observe the progress of the STEREO spacecraft as it glides inside the mobile service tower. After it is in the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton
2006-10-11
KENNEDY SPACE CENTER, FLA. - Against a pre-dawn sky on Launch Pad 17-B at Cape Canaveral Air Force Station, the STEREO spacecraft is lifted alongside the mobile service tower. In the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton
2006-10-11
KENNEDY SPACE CENTER, FLA. - After arriving at Launch Pad 17-B on Cape Canaveral Air Force Station, the STEREO spacecraft waits for a crane to be fitted over it and to be lifted into the mobile service tower. STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton
2006-10-11
KENNEDY SPACE CENTER, FLA. - After arriving at Launch Pad 17-B on Cape Canaveral Air Force Station, the STEREO spacecraft is fitted with a crane to lift it into the mobile service tower. STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton
Automatic detection and classification of obstacles with applications in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; Rosas-Miranda, Dario I.
2016-04-01
A hardware implementation of automatic detection and classification of objects that can represent obstacles for an autonomous mobile robot, using stereo vision algorithms, is presented. We propose and evaluate a new method to detect and classify objects for a mobile robot in outdoor conditions. The method is divided into two parts: the first is the object detection step, based on the distance from the objects to the camera and a BLOB analysis; the second is the classification step, based on visual primitives and an SVM classifier. The proposed method is executed on a GPU in order to reduce processing time. This is performed with hardware based on multi-core processors and a GPU platform, using an NVIDIA GeForce GT640 graphics card and MATLAB on a PC running Windows 10.
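The detection step can be pictured with a short, hedged sketch: threshold a disparity map so that only near-range pixels remain, then treat connected components as candidate obstacle BLOBs. This is an illustrative reconstruction in Python/OpenCV, not the authors' GPU/MATLAB implementation; the image files are hypothetical.

```python
# Illustrative sketch of distance-based detection plus BLOB analysis,
# assuming rectified grayscale stereo frames from hypothetical files.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching disparity: larger disparity == closer to the camera.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Keep only pixels closer than a disparity threshold (near obstacles).
near_mask = (disparity > 20).astype(np.uint8) * 255

# BLOB analysis: connected components, discarding tiny regions as noise.
n, labels, stats, _ = cv2.connectedComponentsWithStats(near_mask)
for i in range(1, n):  # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 500:
        print(f"obstacle candidate at ({x},{y}), size {w}x{h}")
```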
Head Pose Estimation Using Multilinear Subspace Analysis for Robot Human Awareness
NASA Technical Reports Server (NTRS)
Ivanov, Tonislav; Matthies, Larry; Vasilescu, M. Alex O.
2009-01-01
Mobile robots operating in unconstrained indoor and outdoor environments would benefit in many ways from perception of the human awareness around them. Knowledge of people's head pose and gaze directions would enable the robot to deduce which people are aware of its presence, and to predict the future motions of the people for better path planning. Making such inferences requires estimating head pose from facial images that are a combination of multiple varying factors, such as identity, appearance, head pose, and illumination. By applying multilinear algebra, the algebra of higher-order tensors, we can separate these factors and estimate head pose regardless of the subject's identity or the image conditions. Furthermore, we can automatically handle uncertainty in the size of the face and its location. We demonstrate a pipeline of on-the-move detection of pedestrians with a robot stereo vision system, segmentation of the head, and head pose estimation in cluttered urban street scenes.
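The factor-separation idea can be made concrete with a minimal higher-order SVD (HOSVD) sketch that factors a small synthetic (identity x pose x pixels) tensor into per-mode subspaces. This is a generic illustration of multilinear factor separation, not the authors' exact algorithm; the tensor dimensions are assumed.

```python
# Minimal HOSVD sketch: factor a (people x poses x pixels) tensor into
# orthonormal mode subspaces, using synthetic data for illustration.
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(0)
T = rng.standard_normal((10, 5, 256))  # 10 identities, 5 poses, 256 pixels

# One orthonormal factor matrix per mode, from the SVD of each unfolding.
factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
           for m in range(T.ndim)]

# Core tensor: project T onto the factor bases, mode by mode.
core = T
for m, U in enumerate(factors):
    core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1),
                       0, m)

# factors[1] spans the pose subspace: a new image's coefficients can be
# compared against it to estimate head pose independently of identity.
print([U.shape for U in factors], core.shape)
```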
On a Fundamental Evaluation of a UAV Equipped with a Multichannel Laser Scanner
NASA Astrophysics Data System (ADS)
Nakano, K.; Suzuki, H.; Omori, K.; Hayakawa, K.; Kurodai, M.
2018-05-01
Unmanned aerial vehicles (UAVs), which have been widely used in various fields such as archaeology, agriculture, mining, and construction, can acquire high-resolution images at the millimetre scale. It is possible to obtain realistic 3D models using high-overlap images and 3D reconstruction software based on computer vision technologies such as Structure from Motion and Multi-view Stereo. However, it remains difficult to obtain key points from surfaces with limited texture, such as new asphalt or concrete, or from areas, such as forests, where the ground may be concealed by vegetation. A promising method for conducting aerial surveys in such conditions is the use of UAVs equipped with laser scanners. We conducted a fundamental performance evaluation of the Velodyne VLP-16 multi-channel laser scanner mounted on a DJI Matrice 600 Pro UAV at a construction site. Here, we present our findings with respect to both the geometric and radiometric aspects of the acquired data.
A flexible 3D laser scanning system using a robotic arm
NASA Astrophysics Data System (ADS)
Fei, Zixuan; Zhou, Xiang; Gao, Xiaofei; Zhang, Guanliang
2017-06-01
In this paper, we present a flexible 3D scanning system based on a MEMS scanner mounted on an industrial arm with a turntable. The system has 7 degrees of freedom and is able to conduct a full-field scan from any angle, making it suitable for scanning objects with complex shapes. Existing non-contact 3D scanning systems usually use a laser scanner that projects a fixed stripe and is mounted on a Coordinate Measuring Machine (CMM) or an industrial robot; such systems cannot perform path planning without CAD models. The 3D scanning system presented in this paper can scan an object without a CAD model, and we introduce the corresponding path planning method. We also propose a practical approach to calibrating the hand-eye system based on binocular stereo vision, and analyze the errors of the hand-eye calibration.
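Hand-eye calibration of this kind is commonly solved from paired robot and camera poses. The sketch below uses OpenCV's generic solver as a stand-in for the paper's stereo-based approach; the poses are synthetic so the example is self-contained, and all numeric values are assumed.

```python
# Self-contained hand-eye calibration sketch with synthetic poses: recover
# the camera->gripper transform X from paired robot/camera observations.
import cv2
import numpy as np
from scipy.spatial.transform import Rotation

def T_of(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, np.ravel(t)
    return T

rng = np.random.default_rng(1)
# Ground-truth camera pose in the gripper frame (values assumed).
X = T_of(Rotation.from_euler("xyz", [10, -5, 20], degrees=True).as_matrix(),
         [0.05, 0.02, 0.10])
B = T_of(np.eye(3), [0.5, 0.1, 0.3])   # fixed target pose in the base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):
    G = T_of(Rotation.random(random_state=rng).as_matrix(),
             rng.uniform(-0.3, 0.3, 3))           # gripper pose in base
    C = np.linalg.inv(X) @ np.linalg.inv(G) @ B   # target pose in camera
    R_g2b.append(G[:3, :3]); t_g2b.append(G[:3, 3])
    R_t2c.append(C[:3, :3]); t_t2c.append(C[:3, 3])

# Solve the classic AX = XB problem for the camera->gripper transform.
R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print("recovered translation:", t_est.ravel())    # ~ [0.05, 0.02, 0.10]
```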
The Virtual Pelvic Floor, a tele-immersive educational environment.
Pearl, R. K.; Evenhouse, R.; Rasmussen, M.; Dech, F.; Silverstein, J. C.; Prokasy, S.; Panko, W. B.
1999-01-01
This paper describes the development of the Virtual Pelvic Floor, a new method of teaching the complex anatomy of the pelvic region utilizing virtual reality and advanced networking technology. Virtual reality technology allows improved visualization of three-dimensional structures over conventional media because it supports stereo vision, viewer-centered perspective, large angles of view, and interactivity. Two or more ImmersaDesk systems, drafting-table-format virtual reality displays, are networked together, providing an environment where teacher and students share a high-quality three-dimensional anatomical model and are able to converse, see each other, and point in three dimensions to indicate areas of interest. This project was realized by the teamwork of surgeons, medical artists and sculptors, computer scientists, and computer visualization experts. It demonstrates the future of virtual reality for surgical education and applications for the Next Generation Internet. PMID:10566378
Sabel, Bernhard A; Cárdenas-Morales, Lizbeth; Gao, Ying
2018-01-01
How to cite this article: Sabel BA, Cárdenas-Morales L, Gao Y. Vision Restoration in Glaucoma by activating Residual Vision with a Holistic, Clinical Approach: A Review. J Curr Glaucoma Pract 2018;12(1):1-9.
NASA Astrophysics Data System (ADS)
Schroeder, P. C.; Luhmann, J. G.; Davis, A. J.; Russell, C. T.
2007-05-01
STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long-duration, detailed observations of 1 AU magnetic field structures, plasma and suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The PLASTIC instrument takes plasma ion composition measurements, completing STEREO's comprehensive in-situ perspective. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment makes it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICMEs) and solar wind structures to CMEs and coronal holes observed at the Sun. The uniqueness of the STEREO mission requires novel data analysis tools and techniques to take advantage of the mission's full scientific potential. An interactive browser with the ability to create publication-quality plots has been developed which integrates STEREO's in-situ data with data from a variety of other missions, including WIND and ACE. Static summary plots and a key-parameter-type data set with a related online browser provide alternative data access. Finally, an application program interface (API) is provided, allowing users to create custom software that ties directly into STEREO's data set. The API allows for more advanced forms of data mining than currently available through most web-based data services. A variety of data access techniques and the development of cross-spacecraft data analysis tools allow the larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.
Forest aboveground biomass mapping using spaceborne stereo imagery acquired by Chinese ZY-3
NASA Astrophysics Data System (ADS)
Sun, G.; Ni, W.; Zhang, Z.; Xiong, C.
2015-12-01
Besides LiDAR data, another valuable type of data which is directly sensitive to forest vertical structure and more suitable for regional mapping of forest biomass is stereo imagery, i.e., photogrammetry. Photogrammetry is the traditional technique for deriving terrain elevation. The elevation of the top of a tree canopy can be directly measured from stereo imagery, but winter images are required to get the elevation of the ground surface, because stereo images are acquired by optical sensors which cannot penetrate dense forest canopies in leaf-on condition. Several spaceborne stereoscopic systems with higher spatial resolutions have been launched in the past several years. For example, the Chinese satellite Zi Yuan 3 (ZY-3), specifically designed for the collection of stereo imagery with a resolution of 3.6 m for the forward and backward views and 2.1 m for the nadir view, was launched on January 9, 2012. Our previous studies have demonstrated that spaceborne stereo imagery acquired in summer performs well in describing forest structure, while the ground surface elevation can be extracted from spaceborne stereo imagery acquired in winter. This study mainly focused on assessing the mapping of forest biomass through the combination of spaceborne stereo imagery acquired in summer with that acquired in winter. The test site of this study is located in the Daxing Anling Mountains area, as shown in Fig. 1. The Daxing Anling site is on the southern border of the boreal forest, belonging to the frigid-temperate zone coniferous forest vegetation. The dominant tree species is Dahurian larch (Larix gmelinii). 10 scenes of ZY-3 stereo images are used in this study: 5 scenes were acquired on March 14, 2012, while the other 5 scenes were acquired on September 7, 2012. Their spatial coverage is shown in Fig. 2-a. Fig. 2-b is the mosaic of nadir images acquired on 09/07/2012, while Fig. 2-c is the corresponding digital surface model (DSM) derived from the stereo images acquired on 09/07/2012. Fig. 2-d is the difference between the DSM derived from the stereo imagery acquired on 09/07/2012 and the digital elevation model (DEM) from the stereo imagery acquired on 03/14/2012. The detailed analysis will be given in the final report.
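The core quantity in this approach is a canopy height model: the summer DSM (canopy top) minus the winter DEM (ground), which can then feed a height-to-biomass allometry. A minimal sketch follows, using synthetic rasters as stand-ins for the co-registered ZY-3 products; the allometric coefficients are placeholders, not values from the study.

```python
# Canopy height model from two co-registered elevation rasters; synthetic
# arrays stand in for the real summer DSM and winter DEM (both in metres).
import numpy as np

rng = np.random.default_rng(0)
dem_winter = 400 + rng.normal(0, 1, (512, 512))           # ground surface
dsm_summer = dem_winter + rng.uniform(0, 20, (512, 512))  # plus canopy

chm = np.clip(dsm_summer - dem_winter, 0, None)   # canopy height, metres

# Biomass via a power-law allometry; the coefficients below are assumed
# placeholders purely for illustration, not site-calibrated values.
a, b = 0.5, 1.6
biomass = a * chm ** b                             # illustrative Mg/ha
print(f"mean canopy height {chm.mean():.1f} m")
```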
NASA Technical Reports Server (NTRS)
Dunham, David W.; Guzman, Jose J.; Sharer, Peter J.; Friessen, Henry D.
2007-01-01
STEREO (Solar TErrestrial RElations Observatory) is the third mission in the Solar Terrestrial Probes program (STP) of the National Aeronautics and Space Administration (NASA). STEREO is the first mission to utilize phasing loops and multiple lunar flybys to alter the trajectories of more than one satellite. This paper describes the launch computation methodology, the launch constraints, and the resulting nine launch windows that were prepared for STEREO. More details are provided for the window in late October 2006 that was actually used.
2009-04-13
Michael Kaiser, project scientist, Solar Terrestrial Relations Observatory (STEREO) at Goddard Space Flight Center, left, makes a point during a Science Update on the STEREO mission at NASA Headquarters in Washington, Tuesday, April 14, 2009, as Angelo Vourlidas, project scientist, Sun Earth Connection Coronal and Heliospheric Investigation, at the Naval Research Laboratory, Toni Galvin, principal investigator, Plasma and Superthermal Ion Composition instrument at the University of New Hampshire, and Madhulika Guhathakurta, STEREO program scientist, right, look on. Photo Credit: (NASA/Paul E. Alers)
2006-06-16
KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., technicians check that the STEREO spacecraft "B" is secure on the stand. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket on July 22. Photo credit: NASA/George Shelton
Deblocking of mobile stereo video
NASA Astrophysics Data System (ADS)
Azzari, Lucio; Gotchev, Atanas; Egiazarian, Karen
2012-02-01
Most candidate methods for compression of mobile stereo video apply block-transform-based compression based on the H.264 standard, with quantization of transform coefficients driven by a quantization parameter (QP). The compression ratio and the resulting bit rate are directly determined by the QP level, and high compression is achieved at the price of visually noticeable blocking artifacts. Previous studies on the perceived quality of mobile stereo video have revealed that blocking artifacts are the most annoying and most influential in the acceptance/rejection of mobile stereo video and can even completely cancel the 3D effect and the corresponding quality added value. In this work, we address the problem of deblocking of mobile stereo video. We modify a powerful non-local transform-domain collaborative filtering method originally developed for denoising of images and video. The method groups similar block patches residing in the spatial and temporal vicinity of a reference block and filters them collaboratively in a suitable transform domain. We study the most suitable way of finding similar patches in both channels of stereo video and suggest a hybrid four-dimensional transform to process the collected synchronized (stereo) volumes of grouped blocks. The results benefit from the additional correlation available between the left and right channels of the stereo video. Furthermore, additional sharpening is applied through embedded alpha-rooting in the transform domain, which improves the visual appearance of the deblocked frames.
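The alpha-rooting step mentioned at the end can be sketched generically: keep each transform coefficient's sign but rescale its magnitude by a power alpha < 1, which boosts weaker (typically high-frequency) content relative to the dominant term and thus sharpens. A minimal 2D DCT version follows; it is an illustration of the general technique, not the paper's embedded implementation.

```python
# Generic alpha-rooting sharpening in the 2D DCT domain; alpha < 1 sharpens.
import numpy as np
from scipy.fft import dctn, idctn

def alpha_rooting(img, alpha=0.92):
    X = dctn(img.astype(np.float64), norm="ortho")
    mag = np.abs(X)
    ref = mag.max()                      # dominant (DC) magnitude
    # Rescale magnitudes: |X|' = |X| * (|X|/ref)^(alpha-1), keeping signs;
    # the small epsilon avoids 0^negative for zero coefficients.
    scaled = X * ((mag + 1e-12) / ref) ** (alpha - 1.0)
    out = idctn(scaled, norm="ortho")
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.tile(np.linspace(0, 255, 64), (64, 1))   # synthetic test frame
sharp = alpha_rooting(img)
```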
NASA Astrophysics Data System (ADS)
Altug, Erdinc
Our work proposes a vision-based stabilization and output tracking control method for a model helicopter, as part of our effort to produce a rotorcraft-based autonomous Unmanned Aerial Vehicle (UAV). Due to the desired maneuvering ability, a four-rotor helicopter has been chosen as the testbed. In previous research on flying vehicles, vision has usually been used as a secondary sensor. Unlike that work, our goal is to use visual feedback as the main sensor, responsible not only for detecting where the ground objects are but also for helicopter localization. A novel two-camera method has been introduced for estimating the full six-degrees-of-freedom (DOF) pose of the helicopter. This two-camera system consists of a pan-tilt ground camera and an onboard camera. The pose estimation algorithm is compared through simulation to other methods, such as the four-point and stereo methods, and is shown to be less sensitive to feature detection errors. Helicopters are highly unstable flying vehicles; although this is good for agility, it makes control harder. To build an autonomous helicopter, two methods of control are studied: one using a series of mode-based, feedback-linearizing controllers and the other using a back-stepping control law. Various simulations with 2D and 3D models demonstrate the implementation of these controllers. We also show global convergence of the 3D quadrotor controller even with large calibration errors or in the presence of large errors on the image plane. Finally, we present initial flight experiments in which the proposed pose estimation algorithm and non-linear control techniques were implemented on a remote-controlled helicopter. The helicopter was restricted by a tether to vertical and yaw motions and limited x and y translations.
Local Surface Reconstruction from MER images using Stereo Workstation
NASA Astrophysics Data System (ADS)
Shin, Dongjoe; Muller, Jan-Peter
2010-05-01
The authors present a semi-automatic workflow that reconstructs the 3D shape of the martian surface from local stereo images delivered by PanCam or NavCam on systems such as the NASA Mars Exploration Rover (MER) mission and, in the future, the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation, followed by tiepoint refinement, stereo-matching using region growing, and Levenberg-Marquardt Algorithm (LMA)-based bundle adjustment. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphics hardware independence, the stereo application has been implemented using JPL's JADIS graphics library, which is written in JAVA, and the remaining processing blocks used in the reconstruction workflow have also been developed as a JAVA package to increase code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, it is often necessary to employ an optional validity check and/or quality-enhancing process. To meet this requirement, the workflow includes a tiepoint refinement process based on the Adaptive Least Square Correlation (ALSC) matching algorithm, so that the initial tiepoints can be refined to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from the accuracy of reconstruction, the other criterion for assessing the quality of reconstruction is its density (or completeness), which is not addressed by the refinement process. Thus, we re-implemented a stereo region growing process, which is a core matching algorithm within the UCL-HRSC reconstruction workflow. This algorithm's performance is reasonable even for close-range imagery, so long as the stereo pair does not have too large a baseline displacement. For post-processing, a Bundle Adjustment (BA) is used to optimise the initial calibration parameters, which bootstrap the reconstruction results. Among the many options for the non-linear optimisation, the LMA has been adopted due to its stability, so that the BA searches for the best calibration parameters whilst iteratively minimising the re-projection errors of the initial reconstruction points. For the evaluation of the proposed method, its result is compared with the reconstruction from a disparity map provided by JPL using their operational processing system. Visual and quantitative comparisons will be presented, as well as updated camera parameters. As part of future work, we will investigate a method for speeding up the stereo region growing process and look into the possibility of extending the use of the stereo workstation to orbital image processing. Such an interactive stereo workstation can also be used to digitize point and line features, as well as to assess the accuracy of stereo-processed results produced by other stereo matching algorithms available from within the consortium and elsewhere. It can also provide "ground truth", when suitably refined, for stereo matching algorithms, as well as provide visual cues as to why these matching algorithms sometimes fail, to mitigate this in the future.
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 218814 "PRoVisG".
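The LMA-based bundle adjustment step in this workflow can be pictured with a small sketch: minimize the reprojection error of known 3D points over the camera pose with a Levenberg-Marquardt least-squares solver. This is a generic single-camera illustration with synthetic data, not the ProVisG code; the focal length and noise level are assumed.

```python
# Generic LM pose refinement: recover a camera pose by minimising the
# reprojection error of synthetic 3D points against noisy 2D tiepoints.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], (50, 3))  # synthetic structure
f = 1000.0                                            # assumed focal, px

def project(params, pts):
    rvec, tvec = params[:3], params[3:]
    cam = Rotation.from_rotvec(rvec).apply(pts) + tvec
    return f * cam[:, :2] / cam[:, 2:3]               # pinhole projection

true_pose = np.array([0.02, -0.03, 0.01, 0.1, -0.05, 0.2])
obs2d = project(true_pose, pts3d) + rng.normal(0, 0.5, (50, 2))

def residuals(params):
    return (project(params, pts3d) - obs2d).ravel()

sol = least_squares(residuals, np.zeros(6), method="lm")  # Levenberg-Marquardt
print("recovered pose:", sol.x.round(3))
print("RMS reprojection error:", np.sqrt(np.mean(sol.fun ** 2)), "px")
```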
NASA Astrophysics Data System (ADS)
Chi, Yuxi; Yu, Liping; Pan, Bing
2018-05-01
A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.
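The heart of any DIC system of this kind is subset correlation: locate a speckle subset from one view inside the other view and read off the disparity. The toy sketch below demonstrates that step with a synthetic speckle pattern and normalized cross-correlation; real stereo-DIC performs calibrated, subpixel matching between the two mirror views, which is not reproduced here.

```python
# Toy subset-matching step of DIC on a synthetic speckle pattern with a
# known uniform 12 px shift standing in for stereo disparity.
import cv2
import numpy as np

rng = np.random.default_rng(0)
left = (rng.random((480, 640)) > 0.5).astype(np.uint8) * 255
left = cv2.GaussianBlur(left, (5, 5), 0)   # smooth speckle texture
right = np.roll(left, 12, axis=1)          # second view: 12 px shift

y, x, s = 200, 300, 31                     # subset position/size (assumed)
subset = left[y:y + s, x:x + s]

# Normalized cross-correlation of the subset against the other view.
score = cv2.matchTemplate(right, subset, cv2.TM_CCOEFF_NORMED)
_, conf, _, (mx, my) = cv2.minMaxLoc(score)
print(f"disparity {mx - x} px (expected 12), confidence {conf:.2f}")
```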
Automatic Detection and Reproduction of Natural Head Position in Stereo-Photogrammetry.
Hsung, Tai-Chiu; Lo, John; Li, Tik-Shun; Cheung, Lim-Kwong
2015-01-01
The aim of this study was to develop an automatic orientation calibration and reproduction method for recording the natural head position (NHP) in stereo-photogrammetry (SP). A board was used as the physical reference carrier for true verticals and NHP alignment mirror orientation. Orientation axes were detected and saved from the digital mesh model of the board. They were used for correcting the pitch, roll and yaw angles of the subsequent captures of patients' facial surfaces, which were obtained without any markings or sensors attached to the patient. We tested the proposed method on two commercial active (3dMD) and passive (DI3D) SP devices. The reliability of the pitch, roll and yaw for the board placement was within ±0.039904°, ±0.081623°, and ±0.062320°, with standard deviations of 0.020234°, 0.045645° and 0.027211°, respectively. Orientation-calibrated stereo-photogrammetry is the most accurate method (angulation deviation within ±0.1°) reported for complete NHP recording, with clinically insignificant error.
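Correcting a captured mesh's pitch, roll and yaw amounts to applying the inverse of the detected board orientation to every vertex. A minimal sketch with SciPy rotations follows; it is generic, not the 3dMD/DI3D vendor pipeline, and the angle values and mesh are assumed for illustration.

```python
# Re-orient a face mesh into the natural head position, assuming the
# board's detected orientation is given as Euler angles in degrees.
import numpy as np
from scipy.spatial.transform import Rotation

vertices = np.random.default_rng(0).normal(size=(1000, 3))  # stand-in mesh
pitch, roll, yaw = 1.2, -0.4, 0.7    # hypothetical detected board angles

# Rotation the capture applied; undo it with the inverse.
capture_rot = Rotation.from_euler("xyz", [pitch, roll, yaw], degrees=True)
nhp_vertices = capture_rot.inv().apply(vertices)
```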
NASA Technical Reports Server (NTRS)
Cattell, Cynthia; Breneman, A.; Goetz, K.; Kellogg, P.; Kersten, K.; Wygant, J.; Wilson, L. B., III; Looper, Mark D.; Blake, J. Bernard; Roth, I.
2012-01-01
One of the critical problems for understanding the dynamics of Earth's radiation belts is determining the physical processes that energize and scatter relativistic electrons. We review measurements from the Wind/Waves and STEREO S/Waves waveform capture instruments of large amplitude whistler-mode waves. These observations have provided strong evidence that large amplitude (100s mV/m) whistler-mode waves are common during magnetically active periods. The large amplitude whistlers have characteristics that are different from typical chorus. They are usually nondispersive and obliquely propagating, with a large longitudinal electric field and significant parallel electric field. We will also review comparisons of STEREO and Wind wave observations with SAMPEX observations of electron microbursts. Simulations show that the waves can result in energization by many MeV and/or scattering by large angles during a single wave packet encounter due to coherent, nonlinear processes including trapping. The experimental observations combined with simulations suggest that quasilinear theoretical models of electron energization and scattering via small-amplitude waves, with timescales of hours to days, may be inadequate for understanding radiation belt dynamics.
Perceptual Learning Improves Stereoacuity in Amblyopia
Xi, Jie; Jia, Wu-Li; Feng, Li-Xia; Lu, Zhong-Lin; Huang, Chang-Bing
2014-01-01
Purpose. Amblyopia is a developmental disorder that results in both monocular and binocular deficits. Although traditional treatment in clinical practice (i.e., refractive correction, or occlusion by patching and penalization of the fellow eye) is effective in restoring monocular visual acuity, there is little information on how binocular function, especially stereopsis, responds to traditional amblyopia treatment. We aim to evaluate the effects of perceptual learning on stereopsis in observers with amblyopia in the current study. Methods. Eleven observers (21.1 ± 5.1 years, six females) with anisometropic or ametropic amblyopia were trained to judge depth in 10 to 13 sessions. Red–green glasses were used to present three different texture anaglyphs with different disparities but a fixed exposure duration. Stereoacuity was assessed with the Fly Stereo Acuity Test and visual acuity was assessed with the Chinese Tumbling E Chart before and after training. Results. Averaged across observers, training significantly reduced disparity threshold from 776.7″ to 490.4″ (P < 0.01) and improved stereoacuity from 200.3″ to 81.6″ (P < 0.01). Interestingly, visual acuity also significantly improved from 0.44 to 0.35 logMAR (approximately 0.9 lines, P < 0.05) in the amblyopic eye after training. Moreover, the learning effects in two of the three retested observers were largely retained over a 5-month period. Conclusions. Perceptual learning is effective in improving stereo vision in observers with amblyopia. These results, together with previous evidence, suggest that structured monocular and binocular training might be necessary to fully recover degraded visual functions in amblyopia. PMID:24508791
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2016-09-01
In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor has become increasingly popular in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g., visible light and heat signature) for target identification. However, the traditional computer vision approach of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational-methods optimization algorithm to map the optical flow fields computed from the different-wavelength images. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different-wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variation, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against ground truth.
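The key observation is that, while RGB and IR pixel intensities are not comparable, their motion fields are. A hedged sketch of the first step, computing dense optical flow independently in each modality, follows; synthetic frames with inverted contrast stand in for the two sensors, and the paper's variational alignment of the two fields is not reproduced.

```python
# Compute dense optical flow separately in the "RGB" and "IR" streams; the
# two flow fields, unlike raw intensities, describe the same scene motion.
import cv2
import numpy as np

rng = np.random.default_rng(0)
base = (rng.random((240, 320)) * 255).astype(np.uint8)
prev_rgb, next_rgb = base, np.roll(base, 3, axis=1)          # 3 px motion
prev_ir, next_ir = 255 - base, np.roll(255 - base, 3, axis=1)  # inverted IR

def dense_flow(prev, nxt):
    # Farneback dense flow: pyr_scale, levels, winsize, iterations,
    # poly_n, poly_sigma, flags.
    return cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

flow_rgb = dense_flow(prev_rgb, next_rgb)
flow_ir = dense_flow(prev_ir, next_ir)
# Despite opposite intensities, both fields report ~3 px of x-motion.
print(flow_rgb[..., 0].mean(), flow_ir[..., 0].mean())
```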
Nighttime Foreground Pedestrian Detection Based on Three-Dimensional Voxel Surface Model.
Li, Jing; Zhang, Fangbing; Wei, Lisong; Yang, Tao; Lu, Zhaoyang
2017-10-16
Pedestrian detection is among the most frequently used preprocessing tasks in many surveillance application fields, from low-level people counting to high-level scene understanding. Even though many approaches perform well in the daytime with sufficient illumination, pedestrian detection at night is still a critical and challenging problem for video surveillance systems. To respond to this need, in this paper, we provide an affordable solution with a near-infrared stereo network camera, as well as a novel three-dimensional foreground pedestrian detection model. Specifically, instead of using an expensive thermal camera, we build a near-infrared stereo vision system with two calibrated network cameras and near-infrared lamps. The core of the system is a novel voxel surface model, which is able to estimate the dynamic changes of three-dimensional geometric information of the surveillance scene and to segment and locate foreground pedestrians in real time. A free update policy for unknown points is designed for model updating, and the extracted shadow of the pedestrian is adopted to remove foreground false alarms. To evaluate the performance of the proposed model, the system is deployed in several nighttime surveillance scenes. Experimental results demonstrate that our method is capable of nighttime pedestrian segmentation and detection in real time under heavy occlusion. In addition, the qualitative and quantitative comparison results show that our work outperforms classical background subtraction approaches and a recent RGB-D method, as well as achieving comparable performance with the state-of-the-art deep learning pedestrian detection method, even at a much lower hardware cost.
Multi-view line-scan inspection system using planar mirrors
NASA Astrophysics Data System (ADS)
Holländer, Branislav; Štolc, Svorad; Huber-Mörk, Reinhold
2013-04-01
We demonstrate the design, setup, and results for a line-scan stereo image acquisition system using a single area-scan sensor, a single lens, and two planar mirrors attached to the acquisition device. The acquired object moves relative to the acquisition device and is observed under three different angles at the same time. Depending on the specific configuration, it is possible to observe the object under a straight view (i.e., looking along the optical axis) and two skewed views. The relative motion between the object and the acquisition device automatically fulfills the epipolar constraint in stereo vision. The choice of lines to be extracted from the CMOS sensor depends on various factors, such as the number, position and size of the mirrors, the optical and sensor configuration, or other application-specific parameters like the desired depth resolution. The acquisition setup presented in this paper is suitable for the inspection of printed matter, small parts, or security features such as optically variable devices and holograms. The image processing pipeline applied to the extracted sensor lines is explained in detail. The effective depth resolution achieved by the presented system, assembled from only off-the-shelf components, is approximately equal to the spatial resolution and can be smoothly controlled by changing the positions and angles of the mirrors. The actual performance of the device is demonstrated on a 3D-printed ground-truth object as well as on two real-world examples: (i) the EUR-100 banknote, a high-quality printed matter, and (ii) the hologram on the EUR-50 banknote, an optically variable device.
2009-04-13
Michael Kaiser, project scientist, Solar Terrestrial Relations Observatory (STEREO) at Goddard Space Flight Center, left, makes a comment during a Science Update on the STEREO mission at NASA Headquarters in Washington, Tuesday, April 14, 2009, as Angelo Vourlidas, project scientist, Sun Earth Connection Coronal and Heliospheric Investigation, at the Naval Research Laboratory, second from left, Toni Galvin, principal investigator, Plasma and Superthermal Ion Composition instrument at the University of New Hampshire and Madhulika Guhathakurta, STEREO program scientist, right, look on. Photo Credit: (NASA/Paul E. Alers)
2009-04-13
Angelo Vourlidas, project scientist, Sun Earth Connection Coronal and Heliospheric Investigation, at the Naval Research Laboratory, second from left, makes a comment during a Science Update on the STEREO mission at NASA Headquarters in Washington, Tuesday, April 14, 2009, as Michael Kaiser, project scientist, Solar Terrestrial Relations Observatory (STEREO) at Goddard Space Flight Center, left, Toni Galvin, principal investigator, Plasma and Superthermal Ion Composition instrument at the University of New Hampshire and Madhulika Guhathakurta, STEREO program scientist, right, look on. Photo Credit: (NASA/Paul E. Alers)
2006-06-16
KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., technicians check the STEREO spacecraft "B" as it is lifted off a tilt table. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket on July 22. Photo credit: NASA/George Shelton
2006-06-16
KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., the STEREO spacecraft "B" is being moved to another stand nearby for testing. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket on July 22. Photo credit: NASA/George Shelton
2006-06-16
KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., technicians check the STEREO spacecraft "B" as it is lowered toward a stand on the floor. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket on July 22. Photo credit: NASA/George Shelton
Lorenzo, Julia; Montaña, Ángel M
2016-09-01
Molecular shape similarity and field similarity have been used to interpret, in a qualitative way, the structure-activity relationships in a selected series of platinum(IV) complexes with anticancer activity. MM and QM calculations have been used to estimate the electron density, electrostatic potential maps, partial charges, dipole moments and other parameters in order to correlate the stereo-electronic properties with the differential biological activity of the complexes. Extended Electron Distribution (XED) field similarity has also been evaluated for the free 1,4-diamino carrier ligands, in a fragment-based drug design approach, comparing the Connolly solvent-excluded surface, hydrophobicity field surface, Van der Waals field surface, nucleophilicity field surface, electrophilicity field surface and the extended electron-distribution maxima field points. Consistency has been found between the stereo-electronic properties of the studied series of platinum(IV) complexes and/or the evaluated free ligands and their in vitro anticancer activity.
On-screen-display (OSD) menu detection for proper stereo content reproduction for 3D TV
NASA Astrophysics Data System (ADS)
Tolstaya, Ekaterina V.; Bucha, Victor V.; Rychagov, Michael N.
2011-03-01
Modern consumer 3D TV sets are able to show video content in two different modes: 2D and 3D. In 3D mode, the stereo pair comes from an external device such as a Blu-ray player, satellite receiver, etc. The stereo pair is split into left and right images that are shown one after another. The viewer sees a different image for the left and right eyes using shutter glasses properly synchronized with the 3D TV. In addition, some devices that provide the TV with stereo content are able to display additional information by overlaying a picture on the video content: an On-Screen-Display (OSD) menu. Some OSDs are not 3D compatible and lead to incorrect 3D reproduction. In this case, the TV set must recognize the type of OSD, i.e., whether it is 3D compatible, and visualize it correctly by either switching off stereo mode or continuing the demonstration of stereo content. We propose a new, stable method for the detection of 3D-incompatible OSD menus on stereo content. A conventional OSD is a rectangular area with letters and pictograms. OSD menus can have different transparency levels and colors. To be 3D compatible, an OSD must be overlaid separately on both images of a stereo pair. The main problem in detecting an OSD is to distinguish whether a color difference is due to OSD presence or due to stereo parallax. We applied special techniques to find a reliable image difference and additionally used the cue that an OSD usually has distinctive geometrical features: straight parallel lines. The developed algorithm was tested on our video sequence database, with several types of OSD with different colors and transparency levels overlaid on video content. Detection quality exceeded 99% of true answers.
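A rough sketch of the two cues described above: the per-pixel left/right difference flags candidate regions, and a Hough transform checks for the long straight lines typical of rectangular OSD menus. This is illustrative only (the paper's parallax handling is more involved); the synthetic frames, thresholds and line counts below are assumed.

```python
# Illustrative OSD-cue detection: left/right difference plus straight-line
# evidence, on synthetic frames where an OSD is drawn in one view only.
import cv2
import numpy as np

scene = np.tile(np.linspace(0, 255, 640).astype(np.uint8), (360, 1))
left = scene.copy()
right = np.roll(scene, 6, axis=1)                      # parallax stand-in
cv2.rectangle(left, (200, 120), (440, 240), 255, -1)   # one-view OSD

# Strong left/right differences: parallax plus any non-compatible OSD.
diff = cv2.absdiff(left, right)
mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)[1]
mask[:, :10] = 0   # ignore the wrap-around border of the synthetic shift

# OSD menus are rectangular: look for long straight lines as confirmation.
edges = cv2.Canny(mask, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=120, maxLineGap=5)
print("3D-incompatible OSD suspected:",
      lines is not None and len(lines) >= 4)
```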
Preparing WIND for the STEREO Mission
NASA Astrophysics Data System (ADS)
Schroeder, P.; Ogilvie, K.; Szabo, A.; Lin, R.; Luhmann, J.
2006-05-01
The upcoming STEREO mission's IMPACT and PLASTIC investigations will provide the first opportunity for long duration, detailed observations of 1 AU magnetic field structures, plasma ions and electrons, suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment will make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICME) and solar wind structures to CMEs and coronal holes observed at the Sun. To fully exploit these unique data sets, tight integration with similarly equipped missions at L1 will be essential, particularly WIND and ACE. The STEREO mission is building novel data analysis tools to take advantage of the mission's scientific potential. These tools will require reliable access and a well-documented interface to the L1 data sets. Such an interface already exists for ACE through the ACE Science Center. We plan to provide a similar service for the WIND mission that will supplement existing CDAWeb services. Building on tools also being developed for STEREO, we will create a SOAP application program interface (API) which will allow both our STEREO/WIND/ACE interactive browser and third-party software to access WIND data as a seamless and integral part of the STEREO mission. The API will also allow for more advanced forms of data mining than currently available through other data web services. Access will be provided to WIND-specific data analysis software as well. The development of cross-spacecraft data analysis tools will allow a larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.
Field Exploration Science for a Return to the Moon
NASA Astrophysics Data System (ADS)
Schmitt, H. H.; Helper, M. A.; Muehlberger, W.; Snoke, A. W.
2006-12-01
Apollo field exploration science, and the subsequent analysis and interpretation of its findings and collected samples, underpin our current understanding of the origin and history of the Moon. That understanding, in turn, continues to provide new and important insights into the early histories of the Earth and other bodies in the solar system, particularly during the period in which life formed and began to evolve on Earth and possibly on Mars. Those early explorations also disclosed significant and potentially commercially viable lunar resources that might help satisfy future demand for both terrestrial energy alternatives and space consumables. Lunar sortie missions as part of the Vision for Space Exploration provide an opportunity to continue and expand the human geological, geochemical and geophysical exploration of the Moon. Specific objectives of future field exploration science include: (1) testing of the consensus "giant impact" hypothesis for the origin of the Moon by further investigation of materials that may augment understanding of the chondritic geochemistry of the lower lunar mantle; (2) testing of the consensus impact "cataclysm" hypothesis by obtaining absolute ages for large lunar basins with relative ages older than the 3.8-3.9 Ga mascon basins dated by Apollo 15 and 17; (3) calibration of the end of large impacts in the inner solar system; (4) global delineation of the internal structure of the Moon; (5) global sampling and field investigations that extend the data necessary to remotely correlate major lunar geological and geochemical units; (6) definition of the depositional history of polar volatiles - cometary, solar wind, or otherwise; (7) determination of the recoverable in situ concentrations and distribution of potential volatile resources; and (8) acquisition of information and samples related to relatively less site-specific aspects of lunar geological processes. Planning for renewed field exploration of the Moon depends largely on the selection, training and use of sortie crews; the selection of landing sites; and the adopted operational approach to sortie extravehicular activity (EVA). The equipment necessary for successful exploration consists of that required for sampling, sample documentation, communications, mobility, and position knowledge. Other types of active geophysical, geochemical and petrographic equipment, if available, could clearly enhance the scientific and operational return of extended exploration over that possible during the Apollo missions. Equipment to increase the efficiency of exploration should include the following helmet-mounted systems: (1) a voice-activated or automatic, electronic, stereo photo-documentation camera that is photometrically and geometrically fully calibrated; (2) an automatic position and elevation determination system; and (3) a laser-ranging device aligned with the stereo camera axis. Heads-up displays and controls on the helmet, activated and selected by voice, should be available for the control and use of this equipment.
Augmented reality glass-free three-dimensional display with the stereo camera
NASA Astrophysics Data System (ADS)
Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display, based on a stereo camera for presenting parallax content from different angles with a lenticular lens array, is proposed. Compared with the previous implementation of AR techniques based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and a real scene with 32 virtual viewpoints. Accordingly, viewers can obtain abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on a stereo camera can realize AR glass-free 3D display, and that both the virtual objects and the real scene have realistic and pronounced stereo performance.