Sample records for stereo object tracking

  1. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  2. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking

    PubMed Central

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-01-01

This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can switch between hundreds of different views per second. By accelerating video capture, computation, and actuation at millisecond granularity for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. A single active vision system can thus act as virtual left and right pan-tilt cameras that simultaneously shoot a pair of stereo images of the same object, observed at arbitrary viewpoints, by switching the direction of the system's mirrors frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views per second; it functions as a catadioptric active stereo system whose virtual left and right pan-tilt tracking cameras each capture 8-bit color 512×512 images at 250 fps to mechanically track a fast-moving object with sufficient parallax for accurate 3D measurement. Several tracking experiments on objects moving in 3D space demonstrate the performance of our monocular stereo tracking system. PMID:28792483
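
    A minimal sketch of the time-division idea described above, assuming frames alternate between two mirror-selected views (the frame counts and image size are illustrative, not taken from the paper):

```python
import numpy as np

def demultiplex_views(frames, n_views=2):
    """Split a time-division-multiplexed frame sequence into per-view streams.

    Assumes the mirror-drive system cycles through `n_views` gaze directions
    frame by frame, so frame i belongs to virtual camera i % n_views.
    """
    streams = [[] for _ in range(n_views)]
    for i, frame in enumerate(frames):
        streams[i % n_views].append(frame)
    return [np.stack(s) for s in streams]

# Example: 500 switched views per second split into two 250 fps virtual cameras.
frames = [np.zeros((512, 512, 3), dtype=np.uint8) for _ in range(500)]
left, right = demultiplex_views(frames)
print(left.shape, right.shape)  # (250, 512, 512, 3) each
```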

  3. Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.

    PubMed

    Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena

    2014-11-01

A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system is endowed with hardware- and software-synchronous image acquisition with timestamp embedding in the captured images, brain-surface-oriented feature detection, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed by tracking a customized object in different positions, orientations, and linear and angular speeds. The system detects the position and orientation of an immobile object with a maximum error of 0.5 mm and 1.6° over the entire depth of field, and tracks a moving object at up to 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domains. The amplitude of motion of the center of mass of the feature cloud was below 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis, related to the sympathovagal balance, breathing, and blood pressure at 0.03-0.05, 0.2, and 1 Hz, respectively. The stereo vision system presented is a precise and robust means of measuring brain shift and pulsatility, with accuracy superior to that of other reported systems.
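
    The frequency-domain step described above amounts to locating spectral peaks in the motion trace. A hedged NumPy sketch, with sampling rate, duration, and amplitudes invented for the example rather than taken from the study:

```python
import numpy as np

def dominant_frequencies(displacement, fs, n_peaks=3):
    """Return the strongest spectral peaks of a 1-D motion trace.

    displacement: centre-of-mass motion (mm), uniformly sampled at fs Hz.
    """
    x = displacement - np.mean(displacement)          # remove DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    order = np.argsort(spectrum)[::-1]                # strongest bins first
    return freqs[order[:n_peaks]]

# Synthetic trace mixing components near the three reported peaks
# (~0.04, 0.2, and 1 Hz); the recovered peaks are approximate bin centres.
fs = 25.0
t = np.arange(0, 120, 1 / fs)
trace = (0.3 * np.sin(2 * np.pi * 0.04 * t)
         + 0.2 * np.sin(2 * np.pi * 0.2 * t)
         + 0.4 * np.sin(2 * np.pi * 1.0 * t))
print(dominant_frequencies(trace, fs))
```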

  4. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

Attention is given to the robot stereo vision system, which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge-injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.

  5. Robust multiperson tracking from a mobile platform.

    PubMed

    Ess, Andreas; Leibe, Bastian; Schindler, Konrad; van Gool, Luc

    2009-10-01

    In this paper, we address the problem of multiperson tracking in busy pedestrian zones using a stereo rig mounted on a mobile platform. The complexity of the problem calls for an integrated solution that extracts as much visual information as possible and combines it through cognitive feedback cycles. We propose such an approach, which jointly estimates camera position, stereo depth, object detection, and tracking. The interplay between those components is represented by a graphical model. Since the model has to incorporate object-object interactions and temporal links to past frames, direct inference is intractable. We, therefore, propose a two-stage procedure: for each frame, we first solve a simplified version of the model (disregarding interactions and temporal continuity) to estimate the scene geometry and an overcomplete set of object detections. Conditioned on these results, we then address object interactions, tracking, and prediction in a second step. The approach is experimentally evaluated on several long and difficult video sequences from busy inner-city locations. Our results show that the proposed integration makes it possible to deliver robust tracking performance in scenes of realistic complexity.

  6. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; Macleod, Todd; Gagliano, Larry

    2015-01-01

On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew, and poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to the NASA Office of the Chief Technologist's roadmap. For missions flying in, assembled in, or staging from LEO, knowledge of the physical threat to vehicle and crew is needed in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in situ) optical tracking of variously sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and on military targeting cameras. Using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  7. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2016-01-01

On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew, and poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to the NASA Office of the Chief Technologist's roadmap. For missions flying in, assembled in, or staging from LEO, knowledge of the physical threat to vehicle and crew is needed in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in situ) optical tracking of variously sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and on military targeting cameras. Using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  8. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160×120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.
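
    The abstract does not spell out the egomotion solver; a standard building block for this step is a least-squares rigid alignment of triangulated 3-D points between frames (the Kabsch/SVD method), sketched below under that assumption. Points that fit the recovered motion poorly are candidates for independently moving objects:

```python
import numpy as np

def egomotion_from_points(P_prev, P_curr):
    """Least-squares rigid motion (R, t) aligning 3-D points from the
    previous stereo frame to the current one (Kabsch/SVD). If static
    scene points dominate, moving objects show up as large residuals."""
    cp, cc = P_prev.mean(axis=0), P_curr.mean(axis=0)
    H = (P_prev - cp).T @ (P_curr - cc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cc - R @ cp
    return R, t

# Synthetic check: recover a known 90-degree yaw plus translation.
rng = np.random.default_rng(0)
P = rng.standard_normal((50, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, 0.0, -0.2])
R, t = egomotion_from_points(P, Q)
print(np.allclose(R, R_true), np.round(t, 3))
```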

  9. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160×120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  10. Real-time Detection of Moving Objects from Moving Vehicles Using Dense Stereo and Optical Flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160×120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  11. Development of collision avoidance system for useful UAV applications using image sensors with laser transmitter

    NASA Astrophysics Data System (ADS)

    Cheong, M. K.; Bahiki, M. R.; Azrad, S.

    2016-10-01

The main goal of this study is to demonstrate an approach to collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high-definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high-contrast tracking spot for depth calculation using common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and the control algorithm was designed to manipulate the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of an OptiTrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithms. In the results, the UAV was able to hover with fairly good accuracy in both static and dynamic short-range collision avoidance. Collision avoidance performance was better with obstacles of dull surfaces than with shiny surfaces. The minimum collision avoidance distance achievable was 0.4 m. The approach is suitable for short-range collision avoidance.
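
    The depth calculation described is ordinary rectified-stereo triangulation, Z = fB/d. A minimal sketch with made-up focal length, baseline, and pixel coordinates:

```python
def depth_from_disparity(f_px, baseline_m, x_left_px, x_right_px):
    """Classic rectified-stereo triangulation: Z = f * B / d.

    f_px        focal length in pixels
    baseline_m  distance between the left and right cameras (m)
    x_*_px      column of the tracked laser spot in each rectified image
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("spot must appear further left in the left image")
    return f_px * baseline_m / disparity

# Hypothetical numbers: 1000 px focal length, 12 cm baseline, 30 px disparity.
print(depth_from_disparity(1000.0, 0.12, 400.0, 370.0))  # -> 4.0 m
```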

  12. Congruence analysis of point clouds from unstable stereo image sequences

    NASA Astrophysics Data System (ADS)

    Jepping, C.; Bethmann, F.; Luhmann, T.

    2014-06-01

This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings, where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focuses on the development and investigation of methods for detecting corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough to reliably handle occlusions and other disturbances. The second objective is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.

  13. Specialization of Perceptual Processes.

    DTIC Science & Technology

    1994-09-01

population rose and fell, furniture was rearranged, a small mountain range was built in part of the lab (really), carpets were shampooed, and office lighting...common task is the tracking of moving objects. Coombs [22] implemented a system for fixating and tracking objects using a stereo eye/head system...be a person (person?). Finally, a motion unit is used to detect foot gestures. A pair of nod-of-the-head detectors were implemented and tested, but

  14. MISR RICO Products

    Atmospheric Science Data Center

    2016-11-25

    ... microphysics of the transition to a mature rainshaft, organization of trade wind clouds, water budget of trade wind cumulus, and the ... (MISR) mission objectives involve providing accurate information on cloud cover, cloud-track winds, stereo-derived cloud-top ...

  15. Efficiency of extracting stereo-driven object motions

    PubMed Central

    Jain, Anshul; Zaidi, Qasim

    2013-01-01

    Most living things and many nonliving things deform as they move, requiring observers to separate object motions from object deformations. When the object is partially occluded, the task becomes more difficult because it is not possible to use two-dimensional (2-D) contour correlations (Cohen, Jain, & Zaidi, 2010). That leaves dynamic depth matching across the unoccluded views as the main possibility. We examined the role of stereo cues in extracting motion of partially occluded and deforming three-dimensional (3-D) objects, simulated by disk-shaped random-dot stereograms set at randomly assigned depths and placed uniformly around a circle. The stereo-disparities of the disks were temporally oscillated to simulate clockwise or counterclockwise rotation of the global shape. To dynamically deform the global shape, random disparity perturbation was added to each disk's depth on each stimulus frame. At low perturbation, observers reported rotation directions consistent with the global shape, even against local motion cues, but performance deteriorated at high perturbation. Using 3-D global shape correlations, we formulated an optimal Bayesian discriminator for rotation direction. Based on rotation discrimination thresholds, human observers were 75% as efficient as the optimal model, demonstrating that global shapes derived from stereo cues facilitate inferences of object motions. To complement reports of stereo and motion integration in extrastriate cortex, our results suggest the possibilities that disparity selectivity and feature tracking are linked, or that global motion selective neurons can be driven purely from disparity cues. PMID:23325345

  16. Object Tracking Vision System for Mapping the UCN τ Apparatus Volume

    NASA Astrophysics Data System (ADS)

    Lumb, Rowan; UCNtau Collaboration

    2016-09-01

The UCN τ collaboration has an immediate goal to measure the lifetime of the free neutron to within 0.1%, i.e. about 1 s. The UCN τ apparatus is a magneto-gravitational "bottle" system. It holds low-energy, or ultracold, neutrons in the apparatus under the constraint of gravity, and keeps these low-energy neutrons from interacting with the bottle via a strong 1 T surface magnetic field created by a bowl-shaped array of permanent magnets. The apparatus is wrapped with energized coils to supply a magnetic field throughout the "bottle" volume to prevent depolarization of the neutrons. An object-tracking stereo-vision system is presented that precisely tracks a Hall probe and allows mapping of the magnetic field throughout the volume of the UCN τ bottle. The stereo-vision system utilizes two cameras and open-source OpenCV software to track an object's 3D position in space in real time. The desired resolution is +/-1 mm along each axis. The vision system is part of an even larger system to map the magnetic field of the UCN τ apparatus and expose any possible systematic effects due to field cancellation or low-field points, which could allow neutrons to depolarize and possibly escape from the apparatus undetected.
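
    Since the record names OpenCV, a minimal sketch of two-camera 3D localization with cv2.triangulatePoints follows; the projection matrices here are placeholders standing in for the system's actual calibration:

```python
import numpy as np
import cv2

# Projection matrices P = K [R | t] for the two calibrated cameras
# (placeholder values; in practice these come from stereo calibration).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # left cam at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # 10 cm baseline

def triangulate(pt_left, pt_right):
    """Return the 3-D position of a tracked marker from one pixel pair."""
    x1 = np.array(pt_left, dtype=float).reshape(2, 1)
    x2 = np.array(pt_right, dtype=float).reshape(2, 1)
    X = cv2.triangulatePoints(P1, P2, x1, x2)   # 4x1 homogeneous coordinates
    return (X[:3] / X[3]).ravel()

print(triangulate((350.0, 240.0), (330.0, 240.0)))  # ~ [0.15, 0.0, 4.0] m
```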

  17. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study.

    PubMed

    Barsingerhorn, A D; Boonstra, F N; Goossens, H H L M

    2017-02-01

Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye model to study how these optical properties affect the accuracy of different stereo eye-tracking methods. We found that pupil size, gaze direction, and head position all influence the reconstruction of gaze, with resulting errors spanning ±1.0 degrees even in the best case. This shows that stereo eye-tracking may be an option if reliable calibration is not possible, but the applied eye model should account for the actual optics of the cornea.

  18. Dynamic Trajectory Extraction from Stereo Vision Using Fuzzy Clustering

    NASA Astrophysics Data System (ADS)

    Onishi, Masaki; Yoda, Ikushi

In recent years, many human-tracking methods have been proposed for analyzing human dynamic trajectories. These are general technologies applicable to various fields, such as customer purchase analysis in a shopping environment and safety control at a (railroad) crossing. In this paper, we present a new approach to tracking human positions from stereo images. We use a framework of two-step clustering, combining the k-means method and fuzzy clustering, to detect human regions. In the initial clustering, the k-means method quickly forms intermediate clusters from object features extracted by stereo vision. In the final clustering, the fuzzy c-means method groups these intermediate clusters into human regions based on their attributes. By expressing ambiguity through fuzzy clustering, our proposed method clusters correctly even when many people are close to each other. The validity of our technique was evaluated in an experiment extracting the trajectories of doctors and nurses in a hospital emergency room.
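
    A compact sketch of the two-step clustering described above, with both k-means and fuzzy c-means written out in NumPy; the cluster counts and data are illustrative, not the paper's:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Fast first pass: plain k-means producing many small middle clusters."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

def fuzzy_cmeans(points, c, m=2.0, iters=50, seed=1):
    """Second pass: fuzzy c-means on the middle-cluster centers; returns
    final centers and the soft membership matrix u (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ points) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(points[:, None] - centers, axis=-1) + 1e-9
        u = 1.0 / dist ** (2.0 / (m - 1.0))        # standard FCM update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Two synthetic groups of 2-D feature points standing in for people.
rng = np.random.default_rng(2)
blob = lambda c: c + 0.1 * rng.standard_normal((40, 2))
pts = np.vstack([blob(np.array([0.0, 0.0])), blob(np.array([1.0, 1.0]))])
middle = kmeans(pts, 8)
centers, u = fuzzy_cmeans(middle, 2)
print(np.round(centers, 2))   # roughly (0, 0) and (1, 1)
```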

  19. Stereo vision tracking of multiple objects in complex indoor environments.

    PubMed

    Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro

    2010-01-01

This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information for each object in the robot's environment; it then classifies building elements (ceiling, walls, columns, and so on) against the rest of the items in the robot's surroundings. All objects around the robot, both dynamic and static, are considered obstacles except the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used to obtain a multimodal representation of the speed and position of the detected obstacles. Performance of the final system has been tested against state-of-the-art proposals, and the test results validate the authors' proposal. The designed algorithms and procedures provide a solution for applications where similar multimodal data structures are found.

  20. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    PubMed Central

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector, a circle detection algorithm is used to detect the desired target, and the SAD algorithm is used to track the moving target and to search for the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the proposed system can successfully track and measure the fifth-order polynomial and sinusoidal trajectories of the end-effector of the three-axial pneumatic parallel mechanism robot arm. PMID:22319408
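
    After rectification, the SAD search described above is a 1-D scan along the same image row. A minimal sketch; the patch size and disparity range are arbitrary choices:

```python
import numpy as np

def sad_match_along_epipolar(left, right, x, y, patch=7, max_disp=64):
    """Search the same row of the rectified right image for the patch
    centred at (x, y) in the left image, minimising the sum of absolute
    differences (SAD); returns the disparity of the best match."""
    r = patch // 2
    template = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
    best_d, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        xr = x - d                      # candidate column in the right image
        if xr - r < 0:
            break
        candidate = right[y - r:y + r + 1, xr - r:xr + r + 1].astype(np.int32)
        cost = np.abs(template - candidate).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

rng = np.random.default_rng(0)
left = rng.integers(0, 255, (100, 100)).astype(np.uint8)
right = np.roll(left, -5, axis=1)       # scene shifted 5 px: disparity = 5
print(sad_match_along_epipolar(left, right, x=60, y=50))  # -> 5
```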

  1. Integrity Determination for Image Rendering Vision Navigation

    DTIC Science & Technology

    2016-03-01

    identifying an object within a scene, tracking a SIFT feature between frames or matching images and/or features for stereo vision applications. This... object level, either in 2-D or 3-D, versus individual features. There is a breadth of information, largely from the machine vision community...matching or image rendering image correspondence approach is based upon using either 2-D or 3-D object models or templates to perform object detection or

  2. Comparison of different "along the track" high resolution satellite stereo-pair for DSM extraction

    NASA Astrophysics Data System (ADS)

    Nikolakopoulos, Konstantinos G.

    2013-10-01

The possibility of creating DEMs from stereo pairs is based on the Pythagorean theorem and on the principles of photogrammetry that have been applied to aerial photograph stereo pairs for the last seventy years. The application of these principles to digital satellite stereo data was inherent in the first satellite missions. In past decades, satellite stereo-pairs were acquired across-track on different days (SPOT, ERS, etc.). More recently, same-date along-track stereo-data acquisition seems to prevail (Terra ASTER, SPOT5 HRS, Cartosat, ALOS PRISM), as it reduces radiometric image variations (refractive effects, sun illumination, temporal changes) and thus increases the correlation success rate in image matching. Two of the newest satellite sensors with stereo collection capability are Cartosat and ALOS PRISM. Both acquire stereo pairs along the track at 2.5 m spatial resolution, covering areas of 30 × 30 km. In this study we compare two different along-track satellite stereo-pairs for DSM creation. The first DSM is created from a Cartosat stereo pair and the second from an ALOS PRISM triplet. The study area is situated in the Chalkidiki Peninsula, Greece. Both DEMs were created using the same ground control points, collected with a differential GPS. After an initial check for random or systematic errors, a statistical analysis was done. Points of certified elevation were used to estimate the accuracy of the two DSMs. The elevation difference between the DEMs was calculated, and the 2D RMSE, correlation, and percentile values were also computed; the results are presented.
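
    The evaluation described (elevation differences, RMSE, correlation, percentile) is straightforward to reproduce. A hedged NumPy sketch, with synthetic terrain standing in for the real DSMs:

```python
import numpy as np

def dsm_comparison_stats(dsm, reference):
    """Elevation differences between a DSM and reference heights on the
    same grid, with the summary statistics used in such comparisons."""
    diff = (dsm - reference).ravel()
    rmse = np.sqrt(np.mean(diff ** 2))
    corr = np.corrcoef(dsm.ravel(), reference.ravel())[0, 1]
    p95 = np.percentile(np.abs(diff), 95)     # 95th-percentile absolute error
    return {"mean": diff.mean(), "rmse": rmse, "corr": corr, "p95": p95}

# Synthetic sloped reference terrain plus a biased, noisy DSM.
y, x = np.mgrid[0:50, 0:50]
ref = 0.5 * x + 0.2 * y
dsm = ref + np.random.default_rng(0).normal(5.0, 2.0, ref.shape)
print(dsm_comparison_stats(dsm, ref))   # mean ~5 m reveals a systematic bias
```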

  3. A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2008-01-01

An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is an exhaustive tree search using greedy algorithms to reduce search times. However, such algorithms are not optimal, because incorrect decisions cascade onto adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by minimizing the search space through prior limiting assumptions about valid tracks and by a strategy that avoids high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function that includes both track smoothness and particle-image utilization terms. In this paper we describe our model and present our experimental results. We compare our results with a nonoptimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.
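
    The objective function is only named in the abstract; below is a sketch of what its two terms could look like, with the smoothness term taken as the summed second difference of a four-frame track. This is an assumption for illustration, not the authors' exact formulation:

```python
import numpy as np

def track_smoothness_cost(track):
    """Penalise direction/speed changes along one candidate 4-frame track.

    track: (4, 2) array of image positions in consecutive frames. The cost
    is the summed magnitude of the discrete second difference (acceleration),
    so straight, constant-speed tracks score best.
    """
    track = np.asarray(track, dtype=float)
    accel = np.diff(track, n=2, axis=0)        # (2, 2) second differences
    return np.linalg.norm(accel, axis=1).sum()

def total_cost(tracks, n_observations, alpha=1.0, beta=1.0):
    """Two-term objective: summed smoothness plus a penalty for every
    particle image left unused by any track (utilization term)."""
    used = sum(len(np.asarray(t)) for t in tracks)
    return (alpha * sum(track_smoothness_cost(t) for t in tracks)
            + beta * (n_observations - used))

straight = [(0, 0), (1, 1), (2, 2), (3, 3)]
kinked = [(0, 0), (1, 1), (2, 3), (3, 2)]
print(track_smoothness_cost(straight), track_smoothness_cost(kinked))  # 0.0 4.0
```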

  4. Effective declutter of complex flight displays using stereoptic 3-D cueing

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Williams, Steven P.; Nold, Dean E.

    1994-01-01

    The application of stereo technology to new, integrated pictorial display formats has been effective in situational awareness enhancements, and stereo has been postulated to be effective for the declutter of complex informational displays. This paper reports a full-factorial workstation experiment performed to verify the potential benefits of stereo cueing for the declutter function in a simulated tracking task. The experimental symbology was designed similar to that of a conventional flight director, although the format was an intentionally confused presentation that resulted in a very cluttered dynamic display. The subject's task was to use a hand controller to keep a tracking symbol, an 'X', on top of a target symbol, another X, which was being randomly driven. In the basic tracking task, both the target symbol and the tracking symbol were presented as red X's. The presence of color coding was used to provide some declutter, thus making the task more reasonable to perform. For this condition, the target symbol was coded red, and the tracking symbol was coded blue. Noise conditions, or additional clutter, were provided by the inclusion of randomly moving, differently colored X symbols. Stereo depth, which was hypothesized to declutter the display, was utilized by placing any noise in a plane in front of the display monitor, the tracking symbol at screen depth, and the target symbol behind the screen. The results from analyzing the performances of eight subjects revealed that the stereo presentation effectively offsets the cluttering effects of both the noise and the absence of color coding. The potential of stereo cueing to declutter complex informational displays has therefore been verified; this ability to declutter is an additional benefit from the application of stereoptic cueing to pictorial flight displays.

  5. Three-camera stereo vision for intelligent transportation systems

    NASA Astrophysics Data System (ADS)

    Bergendahl, Jason; Masaki, Ichiro; Horn, Berthold K. P.

    1997-02-01

A major obstacle in the application of stereo vision to intelligent transportation systems is high computational cost. In this paper, a PC-based three-camera stereo vision system constructed with off-the-shelf components is described. The system serves as a tool for developing and testing robust algorithms which approach real-time performance. We present an edge-based, subpixel stereo algorithm which is adapted to permit accurate distance measurements to objects in the field of view using a compact camera assembly. Once computed, the 3D scene information may be directly applied to a number of in-vehicle applications, such as adaptive cruise control, obstacle detection, and lane tracking. Moreover, since the largest computational cost is incurred in generating the 3D scene information, multiple applications that leverage this information can be implemented in a single system with minimal added cost. On-road applications, such as vehicle counting and incident detection, are also possible. Preliminary in-vehicle road trial results are presented.
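
    Edge-based subpixel stereo commonly refines an integer disparity by parabola fitting over the matching costs; a small sketch of that standard refinement (not necessarily the authors' exact variant):

```python
def subpixel_disparity(costs, d_best):
    """Refine an integer disparity by fitting a parabola through the
    matching cost at d_best and its two neighbours; returns a float
    disparity with sub-pixel resolution."""
    c0, c1, c2 = costs[d_best - 1], costs[d_best], costs[d_best + 1]
    denom = c0 - 2 * c1 + c2
    if denom == 0:
        return float(d_best)          # flat cost valley: keep integer value
    return d_best + 0.5 * (c0 - c2) / denom

# Quadratic cost valley with true minimum at d = 4.3.
costs = [(d - 4.3) ** 2 for d in range(10)]
print(subpixel_disparity(costs, 4))   # -> 4.3
```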

  6. Railway clearance intrusion detection method with binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Zhou, Xingfang; Guo, Baoqing; Wei, Wei

    2018-03-01

In railway construction and operation, objects intruding into the railway clearance greatly threaten the safety of railway operations, so real-time intrusion detection is of great importance. To address the depth insensitivity and shadow interference of single-image methods, an intrusion detection method using binocular stereo vision is proposed that reconstructs the 3D scene to locate objects and judge clearance intrusion. The binocular cameras are calibrated with Zhang Zhengyou's method. To improve 3D reconstruction speed, a suspicious region is first determined by background differencing on a single camera's image sequence; image rectification, stereo matching, and 3D reconstruction are executed only when a suspicious region exists. A transformation matrix from the Camera Coordinate System (CCS) to the Track Coordinate System (TCS) is computed using the gauge constant and used to transfer the 3D point clouds into the TCS, where they are used to calculate object position and intrusion. Experiments in a railway scene show a position precision better than 10 mm. The method is effective for clearance intrusion detection and satisfies the requirements of railway applications.
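
    A minimal sketch of two steps named above: the single-camera background difference that gates the stereo pipeline, and the homogeneous CCS-to-TCS transform. The threshold and matrices are placeholders, not values from the paper:

```python
import numpy as np
import cv2

def suspicious_region(frame_gray, background_gray, thresh=30):
    """Background difference on a single camera; returns the bounding box
    of changed pixels, or None, so the stereo pipeline runs only on demand."""
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()

def camera_to_track(points_ccs, T_ccs_to_tcs):
    """Apply the 4x4 homogeneous transform (calibrated once, e.g. using the
    gauge constant) that maps camera-frame points into the track frame."""
    pts = np.hstack([points_ccs, np.ones((len(points_ccs), 1))])
    return (pts @ T_ccs_to_tcs.T)[:, :3]

bg = np.zeros((120, 160), np.uint8)
frame = bg.copy()
frame[40:60, 70:90] = 200                 # a synthetic intruding object
print(suspicious_region(frame, bg))       # -> (70, 40, 89, 59)
```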

  7. Person and gesture tracking with smart stereo cameras

    NASA Astrophysics Data System (ADS)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting, and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person-tracking system must not only perform visual analysis robustly, but also be small, cheap, and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to provide the low power, small footprint, and low cost required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.

  8. Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.

    PubMed

    Rumei Zhang; Hao Liu; Jianda Han

    2017-07-01

Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape, with integral cumulative error. The proposed fusion is anticipated to compensate for their shortcomings and improve tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphological operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into a distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67±0.65 mm, 0.41±0.25 mm, and 0.72±0.43 mm for x, y, and z, respectively. Results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.

  9. Accuracy aspects of stereo side-looking radar. [analysis of its visual perception and binocular vision

    NASA Technical Reports Server (NTRS)

    Leberl, F. W.

    1979-01-01

    The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.

  10. Stereo and IMU-Assisted Visual Odometry for Small Robots

    NASA Technical Reports Server (NTRS)

    2012-01-01

This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320×240) resolution, or 8 fps at VGA (Video Graphics Array, 640×480) resolution, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
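
    A hedged sketch of a feature-tracking front end of the kind described, using OpenCV's corner detector and pyramidal Lucas-Kanade flow (the actual flight software is DSP/ARM code, not shown here):

```python
import numpy as np
import cv2

def track_features(prev_gray, next_gray, max_corners=200):
    """Detect up to `max_corners` corners in the previous frame and track
    them into the next frame with pyramidal Lucas-Kanade, as a minimal
    visual-odometry front end; returns matched point pairs."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)
    good = status.ravel() == 1           # keep only successfully tracked points
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
```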

  11. CME Research and Space Weather Support for the SECCHI Experiments on the STEREO Mission

    DTIC Science & Technology

    2014-01-14

    Corbett, ed., Cambridge Univ. Press (2010) Kahler, S.W. and D. F. Webb, "Tracking Nonradial Motions and Azimuthal Expansions of Interplanetary CME...Imaging and In-situ Data from LASCO, STEREO and SMEI", Bull. AAS, 41(2), p. 855, 2009. Kahler S. and D. Webb, "Tracking Nonradial Motions and

  12. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2017-01-01

Any exploration vehicle assembled in, or spacecraft placed in, LEO or GTO must pass through this debris cloud and survive. Large-cross-section, low-thrust vehicles will spend more time spiraling out through the cloud and will suffer more impacts. Better knowledge of small debris will improve survival odds; the current estimated density of debris at various orbital altitudes, with notation of recent collisions and resulting spikes, underscores the problem. Orbital Debris Tracking and Characterization has now been added to the NASA Office of the Chief Technologist's Technology Development Roadmap in Technology Area 5 (TA5.7) [Orbital Debris Tracking and Characterization] and is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crews, owing to the risk of orbital debris damage to the ISS and Exploration vehicles. The Problem: Traditional orbital trackers looking for small, dim orbital derelicts and debris typically stare at the stars and let any light reflected off the debris integrate in the imager for seconds, thus creating a streak across the image. The Solution: The Small Tracker will see stars and other celestial objects rise through its field of view (FOV) at the rotational rate of its orbit, but the glint off orbital objects will move through the FOV at different rates and directions. Debris on (or close to) a head-on collision course will stay in the FOV at 14 km/s. The Small Tracker can track at 60 frames per second, allowing up to 30 fixes before a near-miss pass. A stereo pair of Small Trackers can provide range data within 5-7 km for better orbit measurements.

  13. A customized vision system for tracking humans wearing reflective safety clothing from industrial vehicles and machinery.

    PubMed

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J

    2014-09-26

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions.

  14. A Customized Vision System for Tracking Humans Wearing Reflective Safety Clothing from Industrial Vehicles and Machinery

    PubMed Central

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  15. Simplified stereo-optical ultrasound plane calibration

    NASA Astrophysics Data System (ADS)

    Hoßbach, Martin; Noll, Matthias; Wesarg, Stefan

    2013-03-01

Image-guided therapy is a natural concept and commonly used in medicine. In anesthesia, a common task is the injection of an anesthetic close to a nerve under freehand ultrasound guidance. Several guidance systems exist that use electromagnetic tracking of the ultrasound probe as well as the needle, providing the physician with a precise projection of the needle into the ultrasound image. This, however, requires additional expensive devices. We suggest using optical tracking with miniature cameras attached to a 2D ultrasound probe to achieve higher acceptance among physicians. The purpose of this paper is to present an intuitive method to calibrate freehand ultrasound needle guidance systems employing a rigid stereo camera system. State-of-the-art methods are based on a complex series of error-prone coordinate system transformations, which makes them susceptible to error accumulation. By reducing the calibration to a single procedure, we provide a calibration method that is equivalent, yet not prone to error accumulation. It requires a linear calibration object and was validated on three datasets utilizing different calibration objects: a 6 mm metal bar and a 1.25 mm biopsy needle were used for the experiments. Compared to existing calibration methods for freehand ultrasound needle guidance systems, we achieve higher accuracy while reducing the overall calibration complexity.

  16. Recovering stereo vision by squashing virtual bugs in a virtual reality environment.

    PubMed

    Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M

    2016-06-19

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity-the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task-a 'bug squashing' game-in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information.This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Author(s).

  17. Recovering stereo vision by squashing virtual bugs in a virtual reality environment

    PubMed Central

    Vedamurthy, Indu; Knill, David C.; Huang, Samuel J.; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne

    2016-01-01

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity—the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task—a ‘bug squashing’ game—in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269607

  18. Virtual integral holography

    NASA Astrophysics Data System (ADS)

    Venolia, Dan S.; Williams, Lance

    1990-08-01

    A range of stereoscopic display technologies exist which are no more intrusive, to the user, than a pair of spectacles. Combining such a display system with sensors for the position and orientation of the user's point-of-view results in a greatly enhanced depiction of three-dimensional data. As the point of view changes, the stereo display channels are updated in real time. The face of a monitor or display screen becomes a window on a three-dimensional scene. Motion parallax naturally conveys the placement and relative depth of objects in the field of view. Most of the advantages of "head-mounted display" technology are achieved with a less cumbersome system. To derive the full benefits of stereo combined with motion parallax, both stereo channels must be updated in real time. This may limit the size and complexity of data bases which can be viewed on processors of modest resources, and restrict the use of additional three-dimensional cues, such as texture mapping, depth cueing, and hidden surface elimination. Effective use of "full 3D" may still be undertaken in a non-interactive mode. Integral composite holograms have often been advanced as a powerful 3D visualization tool. Such a hologram is typically produced from a film recording of an object on a turntable, or a computer animation of an object rotating about one axis. The individual frames of film are multiplexed, in a composite hologram, in such a way as to be indexed by viewing angle. The composite may be produced as a cylinder transparency, which provides a stereo view of the object as if enclosed within the cylinder, which can be viewed from any angle. No vertical parallax is usually provided (this would require increasing the dimensionality of the multiplexing scheme), but the three dimensional image is highly resolved and easy to view and interpret. Even a modest processor can duplicate the effect of such a precomputed display, provided sufficient memory and bus bandwidth. This paper describes the components of a stereo display system with user point-of-view tracking for interactive 3D, and a digital realization of integral composite display which we term virtual integral holography. The primary drawbacks of holographic display - film processing turnaround time, and the difficulties of displaying scenes in full color -are obviated, and motion parallax cues provide easy 3D interpretation even for users who cannot see in stereo.

  19. Sampling artifacts in perspective and stereo displays

    NASA Astrophysics Data System (ADS)

    Pfautz, Jonathan D.

    2001-06-01

    The addition of stereo cues to perspective displays is generally expected to improve the perception of depth. However, the display's pixel array samples both perspective and stereo depth cues, introducing inaccuracies and inconsistencies into the representation of an object's depth. The position, size and disparity of an object will be inaccurately presented and size and disparity will be inconsistently presented across depth. These inconsistencies can cause the left and right edges of an object to appear at different stereo depths. This paper describes how these inconsistencies result in conflicts between stereo and perspective depth information. A relative depth judgement task was used to explore these conflicts. Subjects viewed two objects and reported which appeared closer. Three conflicts resulting from inconsistencies caused by sampling were examined: (1) Perspective size and location versus stereo disparity. (2) Perspective size versus perspective location and stereo disparity. (3) Left and right edge disparity versus perspective size and location. In the first two cases, subjects achieved near-perfect accuracy when perspective and disparity cues were complementary. When size and disparity were inconsistent and thus in conflict, stereo dominated perspective. Inconsistency between the disparities of the horizontal edges of an object confused the subjects, even when complementary perspective and stereo information was provided. Since stereo was the dominant cue and was ambiguous across the object, this led to significantly reduced accuracy. Edge inconsistencies also led to more complaints about visual fatigue and discomfort.
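
    A numerical illustration of the sampling effect described above: projecting an object's two edges for each eye and rounding to the pixel grid can leave the edges with different disparities even though their true disparity is identical. All numbers here are invented for the example, not taken from the paper:

```python
import numpy as np

def rounded_px(x, z, eye_x, screen_z=1.0, px_per_unit=40.0):
    """Project a point at lateral position x, depth z (from the eyes) onto
    the screen plane for one eye, then round to the pixel grid."""
    x_screen = eye_x + (x - eye_x) * screen_z / z
    return np.round(x_screen * px_per_unit)

# Two edges of one object at depth z = 3; eyes at x = -0.03 and +0.03.
# The true (unrounded) disparity is -1.6 px for BOTH edges, but pixel
# rounding leaves one edge at -2 px and the other at -1 px of disparity,
# i.e. the two edges of the same object land at different stereo depths.
for edge_x in (0.825, 1.755):
    d = rounded_px(edge_x, 3.0, -0.03) - rounded_px(edge_x, 3.0, +0.03)
    print(f"edge at x={edge_x}: rounded disparity = {d:+.0f} px")
```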

  20. Wind-Sculpted Vicinity After Opportunity's Sol 1797 Drive (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

Left-eye and right-eye views of a color stereo pair for PIA11820 (figures removed for brevity; see original site).

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 111 meters (364 feet) on the 1,797th Martian day, or sol, of Opportunity's surface mission (Feb. 12, 2009). North is at the center; south at both ends.

    This view is the right-eye member of a stereo pair presented as a cylindrical-perspective projection with geometric seam correction.

    Tracks from the drive recede northward across dark-toned sand ripples in the Meridiani Planum region of Mars. Patches of lighter-toned bedrock are visible on the left and right sides of the image. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).


  1. Accuracy analysis for DSM and orthoimages derived from SPOT HRS stereo data using direct georeferencing

    NASA Astrophysics Data System (ADS)

    Reinartz, Peter; Müller, Rupert; Lehner, Manfred; Schroeder, Manfred

During the HRS (High Resolution Stereo) Scientific Assessment Program, the French space agency CNES delivered data sets from the HRS camera system with high-precision ancillary data. Two test data sets from this program were evaluated: one located in Germany, the other in Spain. The first goal was to derive orthoimages and digital surface models (DSM) from the along-track stereo data by applying the rigorous model with direct georeferencing and without ground control points (GCPs). For the derivation of DSM, the stereo processing software developed at DLR for the MOMS-2P three-line stereo camera was used. As a first step, the interior and exterior orientations of the camera, delivered as ancillary data from the positioning and attitude systems, were extracted. Dense image matching, using nearly all pixels as kernel centers, provided the parallaxes. The quality of the stereo tie points was controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection leads to points in object space, which are subsequently interpolated to a DSM on a regular grid. DEM filtering methods were also applied, and evaluations were carried out differentiating between accuracies in forest and other areas. Additionally, orthoimages were generated from the images of the two stereo looking directions. The orthoimage and DSM accuracy was determined using GCPs and available reference DEMs of superior accuracy (DEMs derived from laser data and/or classical airborne photogrammetry). As expected, the results obtained without GCPs showed a bias on the order of 5-20 m relative to the reference data for all three coordinates. Image matching showed that the two independently derived orthoimages exhibit a very consistent shift behavior. In a second step, a few GCPs (3-4) were used to calculate boresight alignment angles, introduced into the direct georeferencing process of each image independently. This method improved the absolute accuracy of the resulting orthoimages and DSM significantly.
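
    The forward intersection step has a standard closed form: the least-squares meeting point of the two viewing rays. A minimal NumPy sketch with made-up ray geometry:

```python
import numpy as np

def forward_intersection(c1, d1, c2, d2):
    """Least-squares intersection of two viewing rays (camera centre c,
    direction d): the midpoint of the shortest segment between them."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for ray parameters s, t minimising |(c1 + s d1) - (c2 + t d2)|^2
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    s, t = np.linalg.solve(A, b)
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Two rays from synthetic camera centres toward ground point (1, 1, 10).
c1, c2 = np.zeros(3), np.array([1.0, 0.0, 0.0])
X = np.array([1.0, 1.0, 10.0])
print(forward_intersection(c1, X - c1, c2, X - c2))   # ~ [1. 1. 10.]
```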

  2. Epipolar Rectification for CARTOSAT-1 Stereo Images Using SIFT and RANSAC

    NASA Astrophysics Data System (ADS)

    Akilan, A.; Sudheer Reddy, D.; Nagasubramanian, V.; Radhadevi, P. V.; Varadan, G.

    2014-11-01

    Cartosat-1 provides stereo images with a spatial resolution of 2.5 m and high geometric fidelity. The stereo cameras on the spacecraft have look angles of +26 degrees and -5 degrees, which yields effective along-track stereo. Any DSM generation algorithm can use the stereo images for accurate 3D reconstruction and measurement of the ground. Dense match points and pixel-wise matching are prerequisites in DSM generation for capturing discontinuities and occlusions in accurate 3D modelling applications. Epipolar image matching reduces the computational effort from a two-dimensional area search to a one-dimensional search, so epipolar rectification is preferred as a pre-processing step for accurate DSM generation. In this paper we explore a method based on SIFT and RANSAC for epipolar rectification of Cartosat-1 stereo images.
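
    A minimal sketch of this kind of pipeline using standard OpenCV building blocks follows; it is an assumption-laden illustration (file names are hypothetical, thresholds are generic defaults), not the authors' implementation.

    ```python
    # Hedged sketch of SIFT + RANSAC epipolar rectification with OpenCV;
    # file names and thresholds are assumptions.
    import cv2
    import numpy as np

    left = cv2.imread("cartosat_fore.tif", cv2.IMREAD_GRAYSCALE)   # hypothetical
    right = cv2.imread("cartosat_aft.tif", cv2.IMREAD_GRAYSCALE)   # hypothetical

    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(left, None)
    k2, d2 = sift.detectAndCompute(right, None)

    # Lowe's ratio test keeps only distinctive SIFT matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [p[0] for p in matcher.knnMatch(d1, d2, k=2)
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])

    # RANSAC rejects outlier matches while estimating the fundamental matrix
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    in1 = pts1[inlier_mask.ravel() == 1]
    in2 = pts2[inlier_mask.ravel() == 1]

    # Homographies that map epipolar lines to corresponding horizontal scanlines
    h, w = left.shape
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(in1, in2, F, (w, h))
    rect_left = cv2.warpPerspective(left, H1, (w, h))
    rect_right = cv2.warpPerspective(right, H2, (w, h))
    ```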

  3. Viewing The Entire Sun With STEREO And SDO

    NASA Astrophysics Data System (ADS)

    Thompson, William T.; Gurman, J. B.; Kucera, T. A.; Howard, R. A.; Vourlidas, A.; Wuelser, J.; Pesnell, D.

    2011-05-01

    On 6 February 2011, the two Solar Terrestrial Relations Observatory (STEREO) spacecraft were at 180 degrees separation. This allowed the first-ever simultaneous view of the entire Sun. Combining the STEREO data with corresponding images from the Solar Dynamics Observatory (SDO) allows this full-Sun view to continue for the next eight years. We show how the data from the three viewpoints are combined into a single heliographic map. Processing of the STEREO beacon telemetry allows these full-Sun views to be created in near-real-time, allowing tracking of solar activity even on the far side of the Sun. This is a valuable space-weather tool, not only for anticipating activity before it rotates onto the Earth-view, but also for deep space missions in other parts of the solar system. Scientific use of the data includes the ability to continuously track the entire lifecycle of active regions, filaments, coronal holes, and other solar features. There is also a significant public outreach component to this activity. The STEREO Science Center produces products from the three viewpoints used in iPhone/iPad and Android applications, as well as time sequences for spherical projection systems used in museums, such as Science-on-a-Sphere and Magic Planet.

  4. Efficient hybrid monocular-stereo approach to on-board video-based traffic sign detection and tracking

    NASA Astrophysics Data System (ADS)

    Marinas, Javier; Salgado, Luis; Arróspide, Jon; Camplani, Massimo

    2012-01-01

    In this paper we propose an innovative method for the automatic detection and tracking of road traffic signs using an onboard stereo camera. It involves a combination of monocular and stereo analysis strategies to increase the reliability of the detections such that it can boost the performance of any traffic sign recognition scheme. First, an adaptive color- and appearance-based detection is applied at the single-camera level to generate a set of traffic sign hypotheses. In turn, stereo information allows for sparse 3D reconstruction of potential traffic signs through a SURF-based matching strategy. In particular, the plane that best fits the cloud of 3D points traced back from feature matches is estimated using a RANSAC-based approach to improve robustness to outliers. Temporal consistency of the 3D information is ensured through a Kalman-based tracking stage. This also allows for the generation of a predicted 3D traffic sign model, which is in turn used to enhance the previously mentioned color-based detector through a feedback loop, thus improving detection accuracy. The proposed solution has been tested with real sequences under several illumination conditions and in both urban areas and highways, achieving very high detection rates in challenging environments, including rapid motion and significant perspective distortion.
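
    The RANSAC plane-fitting step can be illustrated with a short self-contained sketch; the iteration count and inlier tolerance below are assumptions, not the paper's settings.

    ```python
    # Minimal RANSAC plane fit to a cloud of 3D points, illustrating the
    # sign-plane estimation step; iteration count and tolerance are assumptions.
    import numpy as np

    def ransac_plane(points, n_iters=200, tol=0.05, seed=0):
        """Return (normal, d) of the plane n.x + d = 0 with the most inliers."""
        rng = np.random.default_rng(seed)
        best_count, best_model = 0, None
        for _ in range(n_iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:                      # degenerate (collinear) sample
                continue
            normal = normal / norm
            d = -normal @ p0
            count = int(np.sum(np.abs(points @ normal + d) < tol))
            if count > best_count:
                best_count, best_model = count, (normal, d)
        return best_model

    # Toy usage: noisy points near the plane z = 1 plus a few outliers
    pts = np.random.default_rng(1).normal(0, 0.02, (200, 3)) + [0, 0, 1.0]
    pts[:20] += np.random.default_rng(2).normal(0, 1.0, (20, 3))
    normal, d = ransac_plane(pts)
    ```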

  5. A helmet mounted display to adapt the telerobotic environment to human vision

    NASA Technical Reports Server (NTRS)

    Tharp, Gregory; Liu, Andrew; Yamashita, Hitomi; Stark, Lawrence

    1990-01-01

    A Helmet Mounted Display system has been developed. It provides the capability to display stereo images with the viewpoint tied to the subject's head orientation. This type of display might be useful in a telerobotic environment, provided the correct operating parameters are known. The effects of update frequency were tested using a 3D tracking task. The effects of blur were tested using both tracking and pick-and-place tasks. For both, researchers found that operator performance can be degraded if the correct parameters are not used. Researchers are also using the display to explore the use of head movements as part of gaze as subjects search their visual field for target objects.

  6. The contribution of stereo vision to the control of braking.

    PubMed

    Tijtgat, Pieter; Mazyn, Liesbeth; De Laey, Christophe; Lenoir, Matthieu

    2008-03-01

    In this study the contribution of stereo vision to the control of braking in front of a stationary target vehicle was investigated. Participants with normal (StereoN) and weak (StereoW) stereo vision drove a go-cart along a linear track towards a stationary vehicle. They could start braking from a distance of 4, 7, or 10 m from the vehicle. Deceleration patterns were measured by means of a laser. A lack of stereo vision was associated with an earlier onset of braking, but the duration of the braking manoeuvre was similar. During the deceleration, the time of peak deceleration occurred earlier in drivers with weak stereo vision. Stopping distance was greater in those lacking stereo vision. A lack of stereo vision was associated with more prudent braking behaviour, in which the driver allowed a larger safety margin. This compensation might be caused either by an unconscious adaptation of the human perceptuo-motor system, or by a systematic underestimation of the remaining distance due to the lack of stereo vision. In general, a lack of stereo vision did not seem to increase the risk of rear-end collisions.

  7. Sensing and perception research for space telerobotics at JPL

    NASA Technical Reports Server (NTRS)

    Gennery, Donald B.; Litwin, Todd; Wilcox, Brian; Bon, Bruce

    1987-01-01

    PIFEX is a pipelined-image processor that can perform elaborate computations whose exact nature is not fixed in the hardware, and that can handle multiple images. A wire-wrapped prototype PIFEX module has been produced and debugged, using a version of the convolver composed of three custom VLSI chips (plus the line buffers). A printed-circuit layout is being designed for use with a single-chip convolver, leading to production of a PIFEX with about 120 modules. A high-level language for programming PIFEX has been designed, and a compiler will be written for it. The camera calibration software has been completed and tested. Two more terms in the camera model, for lens distortion, probably will be added later. The acquisition and tracking system has been designed and most of it has been coded in Pascal for the MicroVAX-II. The feature tracker, motion stereo module, and stereo matcher have executed successfully. The model matcher is still under development, and coding has begun on the tracking initializer. The object tracker was running on a different computer from the VAX, and preliminary runs on real images have been performed there. Once all modules are working, optimization and integration will begin. Finally, when a sufficiently large PIFEX is available, appropriate parts of acquisition and tracking, including much of the feature tracker, will be programmed into PIFEX, thus increasing the speed and robustness of the system.

  8. A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming

    2018-06-01

    This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras, one called the tracking camera and the other the working camera. The tracking camera is used to track the positions of the MBVS, and the working camera is used for the 3D reconstruction task. The MBVS has several advantages over a single moving camera or multi-camera networks. First, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Second, the mobility of the system guarantees appropriate baselines, supplying more robust point correspondences. Additionally, using one camera avoids a drawback of multi-camera networks: variability in camera parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.

  9. TOPSAT: Global space topographic mission

    NASA Technical Reports Server (NTRS)

    Vetrella, Sergio

    1993-01-01

    Viewgraphs on TOPSAT Global Space Topographic Mission are presented. Topics covered include: polar region applications; terrestrial ecosystem applications; stereo electro-optical sensors; space-based stereoscopic missions; optical stereo approach; radar interferometry; along track interferometry; TOPSAT-VISTA system approach; ISARA system approach; topographic mapping laser altimeter; and role of multi-beam laser altimeter.

  10. STEREO SECCHI Observations of Space Debris: Are They Associated with S/WAVES Dust Detections?

    NASA Astrophysics Data System (ADS)

    St. Cyr, O. C.; Howard, R. A.; Wang, D.; Thompson, W. T.; Harrison, R. A.; Kaiser, M. L.

    2007-12-01

    White-light coronagraphs are optimized to reject stray light in order to accomplish their primary science objective: the observation of coronal mass ejections (CMEs) and the corona. Because they were designed to detect these faint signals while pointing at the Sun, many space-based coronagraphs in the past (Skylab, SMM, SOHO) have detected "debris" apparently associated with the vehicle. These appear to be sunlit particles very near the front of the telescope aperture (~meters). In at least one case, these earlier debris sightings were interpreted as deteriorating insulation from the thermal blankets on the spacecraft (St. Cyr and Warner, 1991ASPC...17..126S); for the earlier Skylab observations, the sightings were believed to be associated with water droplets (Eddy, "A New Sun: The Solar Results from Skylab", NASA SP-402, p. 119, 1979). The STEREO SECCHI suite of white-light coronagraphs represents the most recent instantiations of these specialized instruments, and for the first time we are able to track CMEs from their initiation at the Sun out to 1 AU. Since observations commenced, the SECCHI white-light telescopes have been sporadically detecting debris particles. Most of the detections are individual or small numbers of bright objects in the field, which therefore do not affect the primary science goals of the mission. But on several occasions in the eight months of observation there have been "swarms" of these bright objects which completely obscure the field of view of one or more instruments for a brief period of time. Here we report on the intriguing possibility that the SECCHI debris sightings represent particles of thermal insulation, ejected from the spacecraft by interplanetary dust impacts. Because of the large field of view and high duty cycle of the Heliospheric Imagers on STEREO, we may be able to demonstrate that some of these have also been detected by STEREO S/WAVES as sporadic plasma emissions.

  11. Opportunity's Surroundings on Sol 1798 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figures removed for brevity, see original site] Left-eye and right-eye views of a color stereo pair for PIA11850

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo 180-degree view of the rover's surroundings during the 1,798th Martian day, or sol, of Opportunity's surface mission (Feb. 13, 2009). North is on top.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 111 meters (364 feet) southward on the preceding sol. Tracks from that drive recede northward in this view. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  12. Development of 3D online contact measurement system for intelligent manufacturing based on stereo vision

    NASA Astrophysics Data System (ADS)

    Li, Peng; Chong, Wenyan; Ma, Yongjun

    2017-10-01

    In order to avoid the shortcomings of low efficiency and restricted measuring range that exist in traditional 3D online contact measurement of workpiece size, the development of a novel 3D contact measurement system is introduced, designed for intelligent manufacturing and based on stereo vision. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six tracking markers, a touch probe, and the associated electronics. In the process of contact measurement, the handy probe is located by the stereo vision system via the tracking markers, and the 3D coordinates of a point on the workpiece are measured by calculating the tip position of the touch probe. With the flexibility of the handy probe, the orientation, range, and density of the 3D contact measurement can be adapted to different needs. Applications of the developed contact measurement system to high-precision measurement and rapid surface digitization are experimentally demonstrated.
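
    The tip-calculation step can be sketched as a rigid-body fit: estimate the rotation and translation that map the probe's reference marker layout onto the markers observed by the stereo rig, then push the pre-calibrated tip offset through that transform. The code below is a generic Kabsch-based illustration with hypothetical names, not the authors' software.

    ```python
    # Generic Kabsch-based sketch of probe-tip localization (hypothetical names,
    # not the authors' software).
    import numpy as np

    def rigid_transform(ref, obs):
        """Least-squares R, t such that obs ≈ R @ ref + t (Kabsch algorithm)."""
        ref_c, obs_c = ref.mean(axis=0), obs.mean(axis=0)
        H = (ref - ref_c).T @ (obs - obs_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, obs_c - R @ ref_c

    def probe_tip(ref_markers, obs_markers, tip_offset):
        """Map the pre-calibrated tip offset through the fitted transform."""
        R, t = rigid_transform(ref_markers, obs_markers)
        return R @ tip_offset + t
    ```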

  13. A novel craniotomy simulation system for evaluation of stereo-pair reconstruction fidelity and tracking

    NASA Astrophysics Data System (ADS)

    Yang, Xiaochen; Clements, Logan W.; Conley, Rebekah H.; Thompson, Reid C.; Dawant, Benoit M.; Miga, Michael I.

    2016-03-01

    Brain shift compensation using computer modeling strategies is an important research area in the field of image-guided neurosurgery (IGNS). One important source of available sparse data during surgery to drive these frameworks is deformation tracking of the visible cortical surface. Possible methods to measure intra-operative cortical displacement include laser range scanners (LRS), which typically complicate the clinical workflow, and reconstruction of cortical surfaces from stereo pairs acquired with the operating microscopes. In this work, we propose and demonstrate a craniotomy simulation device that permits simulating realistic cortical displacements, designed to measure and validate proposed intra-operative cortical shift measurement systems. The device permits 3D deformations of a mock cortical surface, which consists of a membrane made of Dragon Skin® high-performance silicone rubber on which vascular patterns are drawn. We then use this device to validate our stereo pair-based surface reconstruction system by comparing landmark positions and displacements measured with our system to those positions and displacements as measured by a stylus tracked by a commercial optical system. Our results show a 1 mm average difference in localization error and a 1.2 mm average difference in displacement measurement. These results suggest that our stereo-pair technique is accurate enough for estimating intra-operative displacements in near real-time without affecting the surgical workflow.

  14. KSC-06pd2272

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers check the clearance of the STEREO spacecraft as it is moved away from the opening. In the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  15. KSC-06pd2266

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - On Launch Pad 17-B at Cape Canaveral Air Force Station, the STEREO spacecraft is lifted off its transporter alongside the mobile service tower. In the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  16. KSC-06pd2268

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Against a pre-dawn sky on Launch Pad 17-B at Cape Canaveral Air Force Station, the STEREO spacecraft is lifted up toward the platform on the mobile service tower. In the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  17. KSC-06pd2269

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Viewed from inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers watch the progress of the STEREO spacecraft being lifted. Once in the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  18. KSC-06pd2270

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - On Launch Pad 17-B at Cape Canaveral Air Force Station, workers begin maneuvering the STEREO spacecraft into the mobile service tower. Once in the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  19. KSC-06pd2271

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - On Launch Pad 17-B at Cape Canaveral Air Force Station, workers observe the progress of the STEREO spacecraft as it glides inside the mobile service tower. After it is in the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  20. KSC-06pd2267

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Against a pre-dawn sky on Launch Pad 17-B at Cape Canaveral Air Force Station, the STEREO spacecraft is lifted alongside the mobile service tower. In the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  1. KSC-06pd2264

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - After arriving at Launch Pad 17-B on Cape Canaveral Air Force Station, the STEREO spacecraft waits for a crane to be fitted over it and be lifted into the mobile service tower. STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  2. KSC-06pd2265

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - After arriving at Launch Pad 17-B on Cape Canaveral Air Force Station, the STEREO spacecraft is fitted with a crane to lift it into the mobile service tower. STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  3. Orbit Determination and Navigation of the Solar Terrestrial Relations Observatory (STEREO)

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael A.; Robertson, Mika; Ottenstein, Neil; Nicholson, Ann; Nicholson, Mark; Ward, Douglas T.; Cosgrove, Jennifer; German, Darla; Hendry, Stephen; Shaw, James

    2007-01-01

    This paper provides an overview of the upgrades required for navigation of NASA's twin heliocentric science missions, Solar TErrestrial RElations Observatory (STEREO) Ahead and Behind. The orbit determination of the STEREO spacecraft was provided by the NASA Goddard Space Flight Center's (GSFC) Flight Dynamics Facility (FDF) in support of the mission operations activities performed by the Johns Hopkins University Applied Physics Laboratory (APL). The changes to FDF's orbit determination software included modeling upgrades as well as modifications required to process the Deep Space Network X-band tracking data used for STEREO. Orbit results as well as comparisons to independently computed solutions are also included. The successful orbit determination support aided in maneuvering the STEREO spacecraft, launched on October 26, 2006 (00:52 Z), to target the lunar gravity assists required to place the spacecraft into their final heliocentric drift-away orbits, where they are providing stereo imaging of the Sun.

  5. Robust multiperson detection and tracking for mobile service and social robots.

    PubMed

    Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou

    2012-10-01

    This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
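
    The core E-step/M-step idea can be sketched in a few lines: weight each detection by its likelihood under the current position estimate, then move to the weighted mean. The toy below uses a purely spatial Gaussian likelihood; the paper's likelihood also incorporates local appearance similarity.

    ```python
    # Toy E-step/M-step mean-shift update: weight each detection by a spatial
    # Gaussian likelihood around the current estimate (E-step), then move to
    # the weighted mean (M-step). The paper's likelihood also uses appearance.
    import numpy as np

    def mean_shift_step(pos, detections, sigma=20.0):
        offsets = detections - pos                                 # (N, 2) pixels
        w = np.exp(-0.5 * np.sum(offsets**2, axis=1) / sigma**2)   # E-step
        if w.sum() < 1e-12:
            return pos                                             # no nearby evidence
        return detections.T @ w / w.sum()                          # M-step

    pos = np.array([100.0, 80.0])
    detections = np.array([[110.0, 85.0], [300.0, 40.0]])  # nearby + distractor
    for _ in range(10):                                    # iterate to convergence
        pos = mean_shift_step(pos, detections)
    ```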

  6. Finger tracking for hand-held device interface using profile-matching stereo vision

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Ping; Lee, Dah-Jye; Moore, Jason; Desai, Alok; Tippetts, Beau

    2013-01-01

    Hundreds of millions of people use hand-held devices frequently and control them by touching the screen with their fingers. When this method of operation is used by people who are driving, the probability of deaths and accidents substantially increases. With a non-contact control interface, people do not need to touch the screen; as a result, they need not pay as much attention to their phones and can drive more safely than they would otherwise. This interface can be achieved with real-time stereo vision. A novel Intensity Profile Shape-Matching Algorithm is able to obtain 3D information from a pair of stereo images in real time. While this algorithm does involve a trade-off between accuracy and processing speed, its results show that the accuracy is sufficient for the practical use of recognizing human poses and tracking finger movement. By choosing an interval of disparity, an object at a certain distance range can be segmented; in other words, we detect the object by its distance to the cameras. The advantage of this profile shape-matching algorithm is that detection of correspondences relies on the shape of the profile and not on intensity values, which are subject to lighting variations. Based on the resulting 3D information, the movement of fingers in space at a specific distance can be determined. Finger location and movement can then be analyzed for non-contact control of hand-held devices.
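
    Segmenting an object by its disparity interval is straightforward once a disparity map exists; the hedged sketch below uses OpenCV block matching as a stand-in for the paper's profile shape-matching algorithm, with an assumed disparity band.

    ```python
    # Disparity-interval segmentation sketch: pixels whose disparity falls in
    # [d_lo, d_hi] lie in the chosen distance band. Block matching here is a
    # stand-in for the paper's algorithm; inputs and band are assumptions.
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical inputs
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = bm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px

    d_lo, d_hi = 20.0, 32.0     # disparity band for the hand's distance (assumed)
    mask = ((disp >= d_lo) & (disp <= d_hi)).astype(np.uint8) * 255
    hand = cv2.bitwise_and(left, left, mask=mask)           # keep only that band
    ```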

  7. Simultaneous glacier surface elevation and flow velocity mapping from cross-track pushbroom satellite Imagery

    NASA Astrophysics Data System (ADS)

    Noh, M. J.; Howat, I. M.

    2017-12-01

    Glaciers and ice sheets are changing rapidly. Digital Elevation Models (DEMs) and Velocity Maps (VMs) obtained from repeat satellite imagery provide critical measurements of changes in glacier dynamics and mass balance over large, remote areas. DEMs created from stereopairs obtained during the same satellite pass through sensor re-pointing (i.e. "in-track stereo") have been most commonly used. In-track stereo has the advantage of minimizing the time separation and, thus, surface motion between image acquisitions, so that the ice surface can be assumed motionless when collocating pixels between image pairs. Since the DEM extraction process assumes that all motion between collocated pixels is due to parallax or sensor model error, significant ice motion results in DEM quality loss or failure. In-track stereo, however, puts a greater demand on satellite tasking resources and, therefore, is much less abundant than single-scan imagery. Thus, if ice surface motion can be mitigated, the ability to extract surface elevation measurements from pairs of repeat single-scan "cross-track" imagery would greatly increase the extent and temporal resolution of ice surface change measurements. Additionally, the ice motion measured during the DEM extraction process would itself provide a useful velocity measurement. We develop a novel algorithm for generating high-quality DEMs and VMs from cross-track image pairs without any prior information, using the Surface Extraction from TIN-based Searchspace Minimization (SETSM) algorithm and its sensor model bias correction capabilities. Using a test suite of repeat, single-scan imagery from WorldView and QuickBird sensors collected over fast-moving outlet glaciers, we develop a method by which RPC biases between images are first calculated and removed over ice-free surfaces. Subpixel displacements over the ice are then constrained and used to correct the parallax estimate. Initial tests yield DEM results with the same quality as in-track stereo for cases where snowfall has not occurred between the two images and the images have similar ground sample distances. The resulting velocity map also closely matches independent measurements.

  8. EVA Robotic Assistant Project: Platform Attitude Prediction

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin M.

    2003-01-01

    The Robotic Systems Technology Branch is currently working on the development of an EVA Robotic Assistant under the sponsorship of the Surface Systems Thrust of the NASA Cross Enterprise Technology Development Program (CETDP). This will be a mobile robot that can follow a field geologist during planetary surface exploration, carry his tools and the samples that he collects, and provide video coverage of his activity. Prior experiments have shown that for such a robot to be useful it must be able to follow the geologist at walking speed over any terrain of interest. Geologically interesting terrain tends to be rough rather than smooth. The commercial mobile robot that was recently purchased as an initial testbed for the EVA Robotic Assistant Project, an ATRV Jr., is capable of faster than walking speed outside, but it has no suspension. Its wheels with inflated rubber tires are attached to axles that are connected directly to the robot body. Any angular motion of the robot produced by driving over rough terrain will directly affect the pointing of the on-board stereo cameras. The resulting image motion is expected to make tracking of the geologist more difficult. This will either require the tracker to search a larger part of the image to find the target from frame to frame or to search mechanically in pan and tilt whenever the image motion is large enough to put the target outside the image in the next frame. This project consists of the design and implementation of a Kalman filter that combines the output of the angular rate sensors and linear accelerometers on the robot to estimate the motion of the robot base. The motion of the stereo camera pair mounted on the robot that results from this motion as the robot drives over rough terrain is then straightforward to compute. The estimates may then be used, for example, to command the robot's on-board pan-tilt unit to compensate for the camera motion induced by the base movement. This has been accomplished in two ways: first, a standalone head stabilizer has been implemented, and second, the estimates have been used to influence the search algorithm of the stereo tracking algorithm. Studies of the image motion of a tracked object indicate that the image motion of objects is suppressed while the robot crosses rough terrain. This work expands the range of speed and surface roughness over which the robot should be able to track and follow a field geologist and accept arm gesture commands from the geologist.
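
    The project pairs rate gyros and accelerometers in a Kalman filter; as a simplified stand-in for that fusion, the sketch below implements a complementary filter for pitch, with the gain and axis conventions assumed.

    ```python
    # Simplified stand-in for the gyro/accelerometer fusion (the project itself
    # uses a Kalman filter): a complementary filter trusts the integrated gyro
    # at high frequency and the accelerometer's gravity direction at low
    # frequency. Axis conventions and the gain are assumptions.
    import numpy as np

    def fuse_pitch(pitch, gyro_rate, accel, dt, alpha=0.98):
        """One filter step; accel = (ax, ay, az) in body axes, z up at rest."""
        pitch_gyro = pitch + gyro_rate * dt              # propagate with the gyro
        pitch_accel = np.arctan2(-accel[0], accel[2])    # gravity-referenced pitch
        return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel
    ```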

  9. Stereo imaging velocimetry for microgravity applications

    NASA Technical Reports Server (NTRS)

    Miller, Brian B.; Meyer, Maryjo B.; Bethea, Mark D.

    1994-01-01

    Stereo imaging velocimetry is the quantitative measurement of three-dimensional flow fields using two sensors recording data from different vantage points. The system described in this paper, under development at NASA Lewis Research Center in Cleveland, Ohio, uses two CCD cameras placed perpendicular to one another, laser disk recorders, an image processing substation, and a 586-based computer to record data at standard NTSC video rates (30 Hertz) and reduce it offline. The flow itself is marked with seed particles, hence the fluid must be transparent. The velocimeter tracks the motion of the particles, and from these we deduce a multipoint (500 or more), quantitative map of the flow. Conceptually, the software portion of the velocimeter can be divided into distinct modules. These modules are: camera calibration, particle finding (image segmentation) and centroid location, particle overlap decomposition, particle tracking, and stereo matching. We discuss our approach to each module, and give our currently achieved speed and accuracy for each where available.
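
    The stereo-matching and triangulation step can be illustrated with OpenCV's forward intersection; the projection matrices and matched centroid coordinates below are illustrative stand-ins, arranged to mimic the perpendicular camera geometry described.

    ```python
    # Illustrative forward intersection of a matched particle centroid using
    # OpenCV; the projection matrices mimic the perpendicular camera layout
    # and the coordinates are made-up normalized image points.
    import cv2
    import numpy as np

    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # camera 1 at the origin
    R2 = np.array([[0., 0., -1.],                   # camera 2 rotated 90 degrees
                   [0., 1.,  0.],
                   [1., 0.,  0.]])
    t2 = np.array([[1.0], [0.0], [1.0]])            # placed to view the test volume
    P2 = np.hstack([R2, t2])

    c1 = np.array([[0.12], [0.03]])                 # centroid in view 1
    c2 = np.array([[-0.07], [0.02]])                # matched centroid in view 2

    X_h = cv2.triangulatePoints(P1, P2, c1, c2)     # homogeneous 3D point
    X = (X_h[:3] / X_h[3]).ravel()                  # Euclidean coordinates
    ```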

  10. Determining Aerosol Plume Height from Two GEO Imagers: Lessons from MISR and GOES

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.

    2012-01-01

    Aerosol plume height is a key parameter for determining the impacts of particulate matter generated by biomass burning, wind-blown dust, and volcanic eruptions. Retrieving cloud-top height from stereo imagery from two GOES (Geostationary Operational Environmental Satellites) has been demonstrated since the 1970s, and the principle should work for aerosol plumes if they are optically thick. The stereo technique has also been used by MISR (Multiangle Imaging SpectroRadiometer) since 2000, which has nine look angles along track to provide aerosol height measurements. Knowing the height of volcanic aerosol layers is as important as tracking the ash plume flow for aviation safety. Lack of knowledge about ash plume height during the 2010 Eyjafjallajökull eruption resulted in the largest air-traffic shutdown in Europe since World War II. We will discuss potential applications of Asian GEO satellites to make stereo measurements of dust and volcano plumes.
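
    The underlying stereo-height geometry is compact enough to state directly: a feature at height h appears displaced on the ground by an amount that grows with the two view zenith angles. The sketch below (numbers illustrative, flat-Earth approximation) inverts that relation.

    ```python
    # Flat-Earth stereo-height geometry for two GEO views: a feature at height
    # h is displaced along the ground by h*tan(theta) for each view zenith
    # angle, so with the feature between the sub-satellite points the
    # displacements add. Numbers are illustrative only.
    import math

    def plume_height_km(dx_km, theta1_deg, theta2_deg):
        """Height from the apparent ground displacement dx between two views."""
        return dx_km / (math.tan(math.radians(theta1_deg)) +
                        math.tan(math.radians(theta2_deg)))

    print(plume_height_km(8.0, 45.0, 30.0))   # ~5.1 km for an 8 km displacement
    ```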

  11. Imaging Techniques for Dense 3D reconstruction of Swimming Aquatic Life using Multi-view Stereo

    NASA Astrophysics Data System (ADS)

    Daily, David; Kiser, Jillian; McQueen, Sarah

    2016-11-01

    Understanding the movement characteristics of how various species of fish swim is an important step to uncovering how they propel themselves through the water. Previous methods have focused on profile capture methods or sparse 3D manual feature point tracking. This research uses an array of 30 cameras to automatically track hundreds of points on a fish as they swim in 3D using multi-view stereo. Blacktip sharks, sting rays, puffer fish, turtles and more were imaged in collaboration with the National Aquarium in Baltimore, Maryland using the multi-view stereo technique. The processes for data collection, camera synchronization, feature point extraction, 3D reconstruction, 3D alignment, biological considerations, and lessons learned will be presented. Preliminary results of the 3D reconstructions will be shown and future research into mathematically characterizing various bio-locomotive maneuvers will be discussed.

  12. Person detection, tracking and following using stereo camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofeng; Zhang, Lilian; Wang, Duo; Hu, Xiaoping

    2018-04-01

    Person detection, tracking, and following is a key enabling technology for mobile robots in many human-robot interaction applications. In this article, we present a system composed of visual human detection, video tracking, and following. The detection is based on YOLO (You Only Look Once), which applies a single convolutional neural network (CNN) to the full image and can thus predict bounding boxes and class probabilities directly in one evaluation. The bounding box then provides the initial person position in the image to initialize and train the KCF (Kernelized Correlation Filter), a video tracker based on a discriminative classifier. Finally, a stereo 3D sparse reconstruction algorithm not only determines the position of the person in the scene but also elegantly solves the scale-ambiguity problem of the video tracker. Extensive experiments are conducted to demonstrate the effectiveness and robustness of our human detection and tracking system.
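
    A hedged sketch of the detection-to-tracker handoff follows: a detector supplies the initial bounding box, OpenCV's KCF tracker refines it frame to frame, and detection is rerun on tracker failure. The detector is a stub standing in for YOLO; the video path is hypothetical.

    ```python
    # Detection-to-tracker handoff sketch: a (stubbed) detector initializes
    # OpenCV's KCF tracker; detection is rerun whenever tracking fails.
    import cv2

    def detect_person(frame):
        # Stand-in for YOLO inference; returns one (x, y, w, h) box.
        return (100, 80, 60, 120)

    cap = cv2.VideoCapture("walkway.mp4")       # hypothetical clip
    ok, frame = cap.read()
    tracker = cv2.TrackerKCF_create()           # cv2.legacy.* on some OpenCV builds
    tracker.init(frame, detect_person(frame))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, box = tracker.update(frame)      # KCF refines the box per frame
        if not found:                           # tracker lost the target
            tracker = cv2.TrackerKCF_create()
            tracker.init(frame, detect_person(frame))
    ```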

  13. KSC-06pd2263

    NASA Image and Video Library

    2006-10-10

    KENNEDY SPACE CENTER, FLA. - With a convoy of escorts, the STEREO spacecraft is transported to Launch Pad 17-B on Cape Canaveral Air Force Station. At the pad the spacecraft will be lifted into the mobile service tower. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  14. Dynamic Shape Capture of Free-Swimming Aquatic Life using Multi-view Stereo

    NASA Astrophysics Data System (ADS)

    Daily, David

    2017-11-01

    The reconstruction and tracking of swimming fish has in the past been restricted to flumes, small volumes, or sparse point tracking in large tanks. The purpose of this research is to use an array of cameras to automatically track 50-100 points on the surface of a fish using the multi-view stereo computer vision technique. The method is non-invasive, allowing the fish to swim freely in a large volume and to perform more advanced maneuvers such as rolling, darting, stopping, and reversing, which have not been studied. The techniques for obtaining and processing the 3D kinematics and maneuvers of tuna, sharks, stingrays, and other species will be presented and compared, in collaboration with the National Aquarium and the Naval Undersea Warfare Center.

  15. The Kinect as an interventional tracking system

    NASA Astrophysics Data System (ADS)

    Wang, Xiang L.; Stolka, Philipp J.; Boctor, Emad; Hager, Gregory; Choti, Michael

    2012-02-01

    This work explores the suitability of low-cost sensors for "serious" medical applications, such as tracking of interventional tools in the OR, for simulation, and for education. Although such tracking - i.e. the acquisition of pose data e.g. for ultrasound probes, tissue manipulation tools, needles, but also tissue, bone etc. - is well established, it relies mostly on external devices such as optical or electromagnetic trackers, both of which mandate the use of special markers or sensors attached to each single entity whose pose is to be recorded, and also require their calibration to the tracked entity, i.e. the determination of the geometric relationship between the marker's and the object's intrinsic coordinate frames. The Microsoft Kinect sensor is a recently introduced device for full-body tracking in the gaming market, but it was quickly hacked - due to its wide range of tightly integrated sensors (RGB camera, IR depth and greyscale camera, microphones, accelerometers, and basic actuation) - and used beyond this area. As its field of view and its accuracy are within reasonable usability limits, we describe a medical needle-tracking system for interventional applications based on the Kinect sensor, standard biopsy needles, and no necessary attachments, thus saving both cost and time. Its twin cameras are used as a stereo pair to detect needle-shaped objects, reconstruct their pose in four degrees of freedom, and provide information about the most likely candidate.

  16. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  17. Stereo depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Vonsydow, Marika

    1988-01-01

    In teleoperation, a typical application of stereo vision is to view a work space located a short distance (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors were measured on the order of 2 cm. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing, and the results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration giving high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions but cause greater depth distortions. Thus, with larger intercamera distances, operators will make greater depth errors (because of the greater distortions) but will be more certain that they are not errors (because of the higher resolution).
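
    Under the simpler parallel-camera approximation (the paper analyzes converged cameras), the resolution side of this trade-off reduces to one formula: the depth error from a fixed disparity error shrinks as the intercamera baseline grows. A small sketch with illustrative numbers:

    ```python
    # Parallel-camera approximation of stereo depth resolution (the paper
    # treats converged cameras): with baseline b, focal length f in pixels,
    # and depth z, a disparity error dd maps to roughly z**2 * dd / (f * b)
    # of depth error. Numbers are illustrative.
    def depth_error_m(z_m, baseline_m, focal_px, disparity_err_px=1.0):
        return z_m**2 * disparity_err_px / (focal_px * baseline_m)

    for b in (0.06, 0.12, 0.24):
        err_cm = depth_error_m(1.4, b, 800.0) * 100.0
        print(f"baseline {b:.2f} m -> ~{err_cm:.1f} cm depth error at 1.4 m")
    ```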

  18. Stereo vision and strabismus

    PubMed Central

    Read, J C A

    2015-01-01

    Binocular stereopsis, or stereo vision, is the ability to derive information about how far away objects are, based solely on the relative positions of the object in the two eyes. It depends on both sensory and motor abilities. In this review, I briefly outline some of the neuronal mechanisms supporting stereo vision, and discuss how these are disrupted in strabismus. I explain, in some detail, current methods of assessing stereo vision and their pros and cons. Finally, I review the evidence supporting the clinical importance of such measurements. PMID:25475234

  19. Autonomous Hazard Checks Leave Patterned Rover Tracks on Mars Stereo

    NASA Image and Video Library

    2011-05-18

    A dance-step pattern is visible in the wheel tracks near the left edge of this scene recorded by NASA's Mars Exploration Rover Opportunity on April 1, 2011. 3D glasses are necessary to view this image.

  20. Attenuating Stereo Pixel-Locking via Affine Window Adaptation

    NASA Technical Reports Server (NTRS)

    Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.

    2006-01-01

    For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking,' which produces artificially peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it can correct much larger initial disparity errors than previous approaches, and it is more general, as it applies not only to the ground plane.
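
    For reference, the parabola-fit refinement that produces pixel-locking can be written in a few lines; this is the standard three-point vertex formula the paper improves on, not its proposed Lucas-Kanade-style method.

    ```python
    # Standard three-point parabola refinement that causes pixel-locking
    # (the baseline the paper improves on, not its proposed method).
    def subpixel_disparity(cost, d):
        """cost: 1D matching-cost array; d: integer disparity minimizing cost."""
        c_m, c_0, c_p = cost[d - 1], cost[d], cost[d + 1]
        denom = c_m - 2.0 * c_0 + c_p
        if denom <= 0.0:                 # flat or degenerate cost curve
            return float(d)
        return d + 0.5 * (c_m - c_p) / denom   # vertex of the fitted parabola
    ```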

  1. STEREO as a "Planetary Hazards" Mission

    NASA Technical Reports Server (NTRS)

    Guhathakurta, M.; Thompson, B. J.

    2014-01-01

    NASA's twin STEREO probes, launched in 2006, have advanced the art and science of space weather forecasting more than any other spacecraft or solar observatory. By surrounding the Sun, they provide previously-impossible early warnings of threats approaching Earth as they develop on the solar far side. They have also revealed the 3D shape and inner structure of CMEs-massive solar storms that can trigger geomagnetic storms when they collide with Earth. This improves the ability of forecasters to anticipate the timing and severity of such events. Moreover, the unique capability of STEREO to track CMEs in three dimensions allows forecasters to make predictions for other planets, giving rise to the possibility of interplanetary space weather forecasting too. STEREO is one of those rare missions for which "planetary hazards" refers to more than one world. The STEREO probes also hold promise for the study of comets and potentially hazardous asteroids.

  2. KSC-06pd2261

    NASA Image and Video Library

    2006-10-10

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., the STEREO spacecraft is being moved out of the high bay. A truck will transport the spacecraft to Launch Pad 17-B on Cape Canaveral Air Force Station where it will be lifted into the mobile service tower. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  3. KSC-06pd2261a

    NASA Image and Video Library

    2006-10-10

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., the transporter carrying the STEREO spacecraft is secured to the truck that will transport it to Launch Pad 17-B on Cape Canaveral Air Force Station. At the pad, the spacecraft will be lifted into the mobile service tower. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  4. KSC-06pd2262

    NASA Image and Video Library

    2006-10-10

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., the transporter carrying the STEREO spacecraft is attached to the truck for transportation to Launch Pad 17-B on Cape Canaveral Air Force Station. At the pad the spacecraft will be lifted into the mobile service tower. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  5. KSC-06pd2277

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers help guide the upper segment of the transportation canister away from the STEREO spacecraft. STEREO is being prepared for launch, scheduled for Oct. 25. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft that will launch in a piggyback mode, separating after reaching the appropriate orbit. The STEREO mission is the first to take measurements of the sun and solar wind in 3-dimension. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Jim Grossmann

  6. Role of stereoscopic imaging in the astronomical study of nearby stars and planetary systems

    NASA Astrophysics Data System (ADS)

    Mark, David S.; Waste, Corby

    1997-05-01

    The development of stereoscopic imaging as a 3D spatial mapping tool for planetary science is now beginning to find greater usefulness in the study of stellar atmospheres and planetary systems in general. For the first time, telescopes and accompanying spectrometers have demonstrated the capacity to depict the gyrating motion of nearby stars so precisely as to derive the existence of closely orbiting Jovian-type planets, which are gravitationally influencing the motion of the parent star. Also for the first time, remote space borne telescopes, unhindered by atmospheric effects, are recording and tracking the rotational characteristics of our nearby star, the sun, so accurately as to reveal and identify in great detail the heightened turbulence of the sun's corona. In order to perform new forms of stereo imaging and 3D reconstruction with such large scale objects as stars and planets, within solar systems, a set of geometrical parameters must be observed, and are illustrated here. The behavior of nearby stars can be studied over time using an astrometric approach, making use of the earth's orbital path as a semi-yearly stereo base for the viewing telescope. As is often the case in this method, the resulting stereo angle becomes too narrow to afford a beneficial stereo view, given the star's distance and the general level of detected noise in the signal. With the advent, though, of new earth based and space borne interferometers, operating within various wavelengths including IR, the capability of detecting and assembling the full 3-dimensional axes of motion of nearby gyrating stars can be achieved. In addition, the coupling of large interferometers with combined data sets can provide large stereo bases and low signal noise to produce converging 3-dimensional stereo views of nearby planetary systems. Several groups of new astronomical stereo imaging data sets are presented, including 3D views of the sun taken by the Solar and Heliospheric Observatory, coincident stereo views of the planet Jupiter during impact of comet Shoemaker-Levy 9, taken by the Galileo spacecraft and the Hubble Space Telescope, as well as views of nearby stars. Spatial ambiguities arising in singular 2-dimensional viewpoints are shown to be resolvable in twin perspective, 3-dimensional stereo views. Stereo imaging of this nature, therefore, occupies a complementary role in astronomical observing, provided the proper fields of view correspond with the path of the orbital geometry of the observing telescope.

  7. Real and virtual explorations of the environment and interactive tracking of movable objects for the blind on the basis of tactile-acoustical maps and 3D environment models.

    PubMed

    Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas

    2008-01-01

    PURPOSE: This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments, and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning sensor (GPS); the user then establishes an exact position at a specific landmark, such as a door. This location initialises indoor navigation, based on an inertial sensor, a step-recognition algorithm, and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face-detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on the touch screen of a small portable tablet PC. The names of rooms, building features and hazards, and modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.

  8. People Detection by a Mobile Robot Using Stereo Vision in Dynamic Indoor Environments

    NASA Astrophysics Data System (ADS)

    Méndez-Polanco, José Alberto; Muñoz-Meléndez, Angélica; Morales, Eduardo F.

    People detection and tracking is a key issue for social robot design and effective human-robot interaction. This paper addresses the problem of detecting people with a mobile robot using a stereo camera. People detection using mobile robots is a difficult task because in real-world scenarios it is common to find unpredictable motion of people, dynamic environments, and different degrees of human body occlusion. Additionally, we cannot expect people to cooperate with the robot to perform its task. In our people-detection method, first, an object segmentation method that uses the distance information provided by a stereo camera is used to separate people from the background. The segmentation method proposed in this work takes into account human body proportions to segment people and provides a first estimate of people's locations. After segmentation, an adaptive contour people model based on people's distance to the robot is used to calculate a probability of detecting people. Finally, people are detected by merging the probabilities of the contour people model and by evaluating evidence over time with a Bayesian scheme. We present experiments on detection of standing and sitting people, as well as people in frontal and side view, with a mobile robot in real-world scenarios.
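    A minimal sketch of the Bayesian evidence-merging step described above (Python; the per-frame likelihood values are hypothetical stand-ins for the contour-model scores, not the authors' numbers):

      import numpy as np

      def bayes_update(prior, likelihood):
          """One recursive Bayes step for the event 'a person is present'.

          prior      -- P(person) before this frame's evidence
          likelihood -- (P(evidence | person), P(evidence | no person))
          """
          l_pos, l_neg = likelihood
          num = l_pos * prior
          return num / (num + l_neg * (1.0 - prior))

      # Hypothetical per-frame contour-model scores for one candidate region.
      frame_scores = [(0.7, 0.3), (0.8, 0.25), (0.6, 0.4), (0.85, 0.2)]

      p = 0.5  # uninformative prior before any evidence
      for lik in frame_scores:
          p = bayes_update(p, lik)
          print(f"P(person) = {p:.3f}")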

  9. Webcams for Bird Detection and Monitoring: A Demonstration Study

    PubMed Central

    Verstraeten, Willem W.; Vermeulen, Bart; Stuckens, Jan; Lhermitte, Stefaan; Van der Zande, Dimitry; Van Ranst, Marc; Coppin, Pol

    2010-01-01

    Better insights into bird migration can be a tool for assessing the spread of avian-borne infections or ecological/climatological issues reflected in deviating migration patterns. This paper evaluates whether low-budget permanent cameras such as webcams can offer a valuable contribution to the reporting of migratory birds. An experimental design was set up to study the detection capability using objects of different size, color, and velocity. The results of the experiment revealed the minimum size, maximum velocity, and contrast of the objects required for detection by a standard webcam. Furthermore, a modular processing scheme was proposed to track and follow migratory birds in webcam recordings. Techniques such as motion detection by background subtraction, stereo vision, and lens-distortion correction were combined to form the foundation of the bird tracking algorithm. Additional research to integrate webcam networks, however, is needed, and future research should strengthen the potential of the processing scheme by exploring and testing alternatives for each individual module or processing step. PMID:22319308
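    The motion-detection-by-background-subtraction module lends itself to a compact illustration. The following sketch (Python/NumPy; synthetic frames, and the learning rate and threshold are chosen for illustration only) maintains a running-average background and flags pixels that deviate from it:

      import numpy as np

      def update_background(bg, frame, alpha=0.05):
          """Running-average background model; alpha is the learning rate."""
          return (1.0 - alpha) * bg + alpha * frame

      def motion_mask(bg, frame, thresh=25.0):
          """Pixels whose deviation from the background exceeds thresh."""
          return np.abs(frame - bg) > thresh

      # Synthetic stand-in for grayscale webcam frames (float, 0-255).
      rng = np.random.default_rng(0)
      bg = rng.uniform(90, 110, size=(120, 160))
      frames = [bg + rng.normal(0, 2, bg.shape) for _ in range(10)]
      frames[5][40:50, 60:80] += 80.0   # a bird-sized bright object passes by

      model = frames[0].copy()
      for i, f in enumerate(frames[1:], start=1):
          mask = motion_mask(model, f)   # detect before updating the model
          model = update_background(model, f)
          print(i, int(mask.sum()), "moving pixels")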

  10. Webcams for bird detection and monitoring: a demonstration study.

    PubMed

    Verstraeten, Willem W; Vermeulen, Bart; Stuckens, Jan; Lhermitte, Stefaan; Van der Zande, Dimitry; Van Ranst, Marc; Coppin, Pol

    2010-01-01

    Better insights into bird migration can be a tool for assessing the spread of avian-borne infections or ecological/climatological issues reflected in deviating migration patterns. This paper evaluates whether low-budget permanent cameras such as webcams can offer a valuable contribution to the reporting of migratory birds. An experimental design was set up to study the detection capability using objects of different size, color, and velocity. The results of the experiment revealed the minimum size, maximum velocity, and contrast of the objects required for detection by a standard webcam. Furthermore, a modular processing scheme was proposed to track and follow migratory birds in webcam recordings. Techniques such as motion detection by background subtraction, stereo vision, and lens-distortion correction were combined to form the foundation of the bird tracking algorithm. Additional research to integrate webcam networks, however, is needed, and future research should strengthen the potential of the processing scheme by exploring and testing alternatives for each individual module or processing step.

  11. Terrain Model Registration for Single Cycle Instrument Placement

    NASA Technical Reports Server (NTRS)

    Deans, Matthew; Kunz, Clay; Sargent, Randy; Pedersen, Liam

    2003-01-01

    This paper presents an efficient and robust method for registration of terrain models created using stereo vision on a planetary rover. Our approach projects two surface models into a virtual depth map, rendering the models as they would be seen from a single range sensor. Correspondence is established based on which points project to the same location in the virtual range sensor. A robust norm of the deviations in observed depth is used as the objective function, and the algorithm searches for the rigid transformation which minimizes the norm. An initial coarse search is done using rover pose information from odometry and orientation sensing. A fine search is done using Levenberg-Marquardt. Our method enables a planetary rover to keep track of designated science targets as it moves, and to hand off targets from one set of stereo cameras to another. These capabilities are essential for the rover to autonomously approach a science target and place an instrument in contact in a single command cycle.
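    The following sketch illustrates the general idea of registering two terrain models through a virtual depth map with a robust norm (Python/SciPy; the terrain is synthetic, and a Huber-loss trust-region solver stands in for the paper's Levenberg-Marquardt fine search):

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation
      from scipy.interpolate import LinearNDInterpolator

      def rigid(params, pts):
          """Apply a rigid transform (3 rotation-vector + 3 translation params)."""
          return Rotation.from_rotvec(params[:3]).apply(pts) + params[3:]

      # Smooth synthetic terrain sampled as two overlapping stereo "models".
      rng = np.random.default_rng(1)
      def terrain(xy):
          return 0.5 * np.sin(xy[:, 0]) + 0.3 * np.cos(1.7 * xy[:, 1])

      xy_a = rng.uniform(0, 6, (800, 2))
      model_a = np.column_stack([xy_a, terrain(xy_a)])
      xy_b = rng.uniform(0, 6, (800, 2))
      pts_b_world = np.column_stack([xy_b, terrain(xy_b)])

      # Model B is the same surface expressed in a shifted/rotated rover frame.
      true = np.r_[Rotation.from_euler("z", 3, degrees=True).as_rotvec(),
                   0.2, -0.1, 0.05]
      model_b = Rotation.from_rotvec(true[:3]).inv().apply(pts_b_world - true[3:])

      # Virtual overhead depth map of model A as a continuous lookup.
      depth_a = LinearNDInterpolator(model_a[:, :2], model_a[:, 2])

      def residuals(params):
          moved = rigid(params, model_b)
          r = moved[:, 2] - depth_a(moved[:, :2])
          return np.nan_to_num(r)       # points outside the overlap contribute 0

      init = true + rng.normal(0, 0.05, 6)  # stand-in for the odometry coarse guess
      fit = least_squares(residuals, init, loss="huber", f_scale=0.05)
      print("recovered params:", np.round(fit.x, 3))
      print("true params     :", np.round(true, 3))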

  12. Multiview photometric stereo.

    PubMed

    Hernández Esteban, Carlos; Vogiatzis, George; Cipolla, Roberto

    2008-03-01

    This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialise a multi-view photometric stereo scheme to obtain a closed surface reconstruction. There are two main contributions in this paper: first, we describe a robust technique to estimate light directions and intensities, and second, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and hence allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. A quantitative evaluation of the algorithm on synthetic data is presented together with complete reconstructions of challenging real objects. Finally, we show experimentally how, even in the case of highly textured objects, this technique can greatly improve on correspondence-based multi-view stereo results.
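    Once light directions and intensities are known, the per-pixel photometric stereo step reduces to a small linear solve. A minimal single-view Lambertian sketch (Python/NumPy; the light directions, normal, and noise level are illustrative, and the paper's multi-view formulation is more elaborate):

      import numpy as np

      # Classic Lambertian photometric stereo: with k >= 3 known distant light
      # directions L (k x 3) and per-pixel intensities I (k,), solve I = L @ g,
      # where g = albedo * normal.
      rng = np.random.default_rng(2)
      L = np.array([[0.0, 0.0, 1.0],
                    [0.6, 0.0, 0.8],
                    [0.0, 0.6, 0.8],
                    [-0.5, -0.5, 0.7]])
      L /= np.linalg.norm(L, axis=1, keepdims=True)

      true_n = np.array([0.2, -0.3, 0.93])
      true_n /= np.linalg.norm(true_n)
      albedo = 0.8
      I = albedo * np.clip(L @ true_n, 0, None) + rng.normal(0, 1e-3, len(L))

      g, *_ = np.linalg.lstsq(L, I, rcond=None)   # least-squares solve for g
      print("albedo ~", np.linalg.norm(g))
      print("normal ~", g / np.linalg.norm(g))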

  13. Reconstructing the flight kinematics of swarming and mating in wild mosquitoes

    PubMed Central

    Butail, Sachit; Manoukis, Nicholas; Diallo, Moussa; Ribeiro, José M.; Lehmann, Tovi; Paley, Derek A.

    2012-01-01

    We describe a novel tracking system for reconstructing three-dimensional tracks of individual mosquitoes in wild swarms and present the results of validating the system by filming swarms and mating events of the malaria mosquito Anopheles gambiae in Mali. The tracking system is designed to address noisy, low frame-rate (25 frames per second) video streams from a stereo camera system. Because flying A. gambiae move at 1–4 m s⁻¹, they appear as faded streaks in the images or sometimes do not appear at all. We provide an adaptive algorithm to search for missing streaks and a likelihood function that uses streak endpoints to extract velocity information. A modified multi-hypothesis tracker probabilistically addresses occlusions and a particle filter estimates the trajectories. The output of the tracking algorithm is a set of track segments with an average length of 0.6–1 s. The segments are verified and combined under human supervision to create individual tracks up to the duration of the video (90 s). We evaluate tracking performance using an established metric for multi-target tracking and validate the accuracy using independent stereo measurements of a single swarm. Three-dimensional reconstructions of A. gambiae swarming and mating events are presented. PMID:22628212
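    The trajectory-estimation stage can be illustrated with a generic constant-velocity particle filter (Python/NumPy; the state model, noise levels, and frame rate are illustrative assumptions, not the authors' tuned tracker):

      import numpy as np

      # Constant-velocity particle filter: each observation is a noisy 3-D
      # position; particles carry (position, velocity).
      rng = np.random.default_rng(3)
      dt, n = 1 / 25.0, 2000                  # 25 fps video, 2000 particles

      # Simulate a mosquito moving at ~2 m/s with noisy observations.
      truth = np.array([0.0, 0.0, 1.5, 2.0, 0.5, -0.3])   # x,y,z, vx,vy,vz
      obs = []
      for _ in range(50):
          truth[:3] += truth[3:] * dt
          obs.append(truth[:3] + rng.normal(0, 0.02, 3))

      particles = np.zeros((n, 6))
      particles[:, :3] = obs[0] + rng.normal(0, 0.05, (n, 3))
      particles[:, 3:] = rng.normal(0, 2.0, (n, 3))       # broad velocity prior

      for z in obs[1:]:
          particles[:, :3] += particles[:, 3:] * dt       # predict
          particles[:, 3:] += rng.normal(0, 0.5, (n, 3))  # process noise
          d2 = ((particles[:, :3] - z) ** 2).sum(axis=1)  # weight by likelihood
          w = np.exp(-d2 / (2 * 0.02 ** 2))
          w /= w.sum()
          idx = rng.choice(n, n, p=w)                     # resample
          particles = particles[idx]

      print("estimated state:", particles.mean(axis=0).round(2))
      print("true state     :", truth.round(2))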

  14. An automated, open-source (NASA Ames Stereo Pipeline) workflow for mass production of high-resolution DEMs from commercial stereo satellite imagery: Application to mountain glaciers in the contiguous US

    NASA Astrophysics Data System (ADS)

    Shean, D. E.; Arendt, A. A.; Whorton, E.; Riedel, J. L.; O'Neel, S.; Fountain, A. G.; Joughin, I. R.

    2016-12-01

    We adapted the open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline an automated processing workflow for 0.5 m GSD DigitalGlobe WorldView-1/2/3 and GeoEye-1 along-track and cross-track stereo image data. Output DEM products are posted at 2, 8, and 32 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of 0.1-0.5 m for overlapping, co-registered DEMs (n=14,17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We have leveraged these resources to produce dense time series and regional mosaics for the Earth's ice sheets. We are now processing and analyzing all available 2008-2016 commercial stereo DEMs over glaciers and perennial snowfields in the contiguous US. We are using these records to study long-term, interannual, and seasonal volume change and glacier mass balance. This analysis will provide a new assessment of regional climate change, and will offer basin-scale analyses of snowpack evolution and snow/ice melt runoff for water resource applications.

  15. 3D terrain reconstruction using Chang’E-3 PCAM images

    NASA Astrophysics Data System (ADS)

    Chen, Wangli; Zeng, Xingguo; Zhang, Hongbo

    2017-10-01

    In order to improve understanding of the topography of the Chang’E-3 landing site, 3D terrain models are reconstructed using PCAM images. PCAM (the panoramic camera) is a stereo camera system with a 27 cm baseline on board the Yutu rover. It obtained panoramic images at four detection sites and can achieve a resolution of 1.48 mm/pixel at 10 m, so the PCAM images reveal fine details of the detection region. In the method, SIFT is employed for feature description and feature matching. In addition to collinearity equations, the measured baseline of the stereo system is also used in bundle adjustment to solve the orientation parameters of all images. Pair-wise depth-map computation is then applied for dense surface reconstruction. Finally, a DTM of the detection region is generated. The DTM covers an area with a radius of about 20 m, centered at the location of the camera. As a consequence of its design, each individual wheel of the Yutu rover leaves three tracks on the lunar surface, and the width between the first and third track is 15 cm; these tracks are clear and distinguishable in the images. We therefore chose the second detection site, where the wheel tracks are most clearly recognizable, to evaluate the accuracy of the DTM. We measured the width of the wheel tracks every 1.5 m from the center of the detection region and obtained 13 measurements, avoiding areas where the wheel tracks are ambiguous. Results show that the mean wheel-track width is 0.155 m with a standard deviation of 0.007 m. Generally, the closer to the center, the more accurate the width measurement. This is because image deformation increases with distance from the camera, which degrades DTM quality in far areas. In our work, images of the four detection sites are adjusted independently, which means there are no tie points between different sites, so deviations may exist between the locations of the same object measured from the DTMs of adjacent detection sites.
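    The quoted baseline and image resolution allow a back-of-envelope estimate of the stereo depth sensitivity (Python; this assumes an ideal pinhole model, so the numbers are indicative only):

      import numpy as np

      # Back-of-envelope stereo precision for PCAM (assumed pinhole model).
      Z = 10.0                 # range of interest, m (from the abstract)
      gsd = 1.48e-3            # stated image resolution at 10 m, m/pixel
      B = 0.27                 # stereo baseline, m (from the abstract)

      f_px = Z / gsd           # implied focal length in pixels: ~6760 px
      disparity = f_px * B / Z # ~180 px at 10 m

      # Depth sensitivity to a 1-pixel disparity error: dZ = Z^2 / (f*B) * dd
      dZ = Z**2 / (f_px * B)
      print(f"focal length ~ {f_px:.0f} px")
      print(f"disparity    ~ {disparity:.0f} px at {Z} m")
      print(f"depth error  ~ {dZ*100:.1f} cm per pixel of disparity error")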

  16. Stereo Viewing Modulates Three-Dimensional Shape Processing During Object Recognition: A High-Density ERP Study

    PubMed Central

    2017-01-01

    The role of stereo disparity in the recognition of 3-dimensional (3D) object shape remains an unresolved issue for theoretical models of the human visual system. We examined this issue using high-density (128 channel) recordings of event-related potentials (ERPs). A recognition memory task was used in which observers were trained to recognize a subset of complex, multipart, 3D novel objects under conditions of either (bi-) monocular or stereo viewing. In a subsequent test phase they discriminated previously trained targets from untrained distractor objects that shared either local parts, 3D spatial configuration, or neither dimension, across both previously seen and novel viewpoints. The behavioral data showed a stereo advantage for target recognition at untrained viewpoints. ERPs showed early differential amplitude modulations to shape similarity defined by local part structure and global 3D spatial configuration. This occurred initially during an N1 component around 145–190 ms poststimulus onset, and then subsequently during an N2/P3 component around 260–385 ms poststimulus onset. For mono viewing, amplitude modulation during the N1 was greatest between targets and distractors with different local parts for trained views only. For stereo viewing, amplitude modulation during the N2/P3 was greatest between targets and distractors with different global 3D spatial configurations and generalized across trained and untrained views. The results show that image classification is modulated by stereo information about the local part structure and global 3D spatial configuration of object shape. The findings challenge current theoretical models that do not attribute functional significance to stereo input during the computation of 3D object shape. PMID:29022728

  17. The spacecraft control laboratory experiment optical attitude measurement system

    NASA Technical Reports Server (NTRS)

    Welch, Sharon S.; Montgomery, Raymond C.; Barsky, Michael F.

    1991-01-01

    A stereo camera tracking system was developed to provide a near real-time measure of the position and attitude of the Spacecraft COntrol Laboratory Experiment (SCOLE). The SCOLE is a mockup of a shuttle-like vehicle with an attached flexible mast and (simulated) antenna, and was designed to provide a laboratory environment for the verification and testing of control laws for large flexible spacecraft. Actuators and sensors located on the shuttle and antenna sense the states of the spacecraft and allow the position and attitude to be controlled. The stereo camera tracking system consists of two position-sensitive detector cameras which sense the locations of small infrared LEDs attached to the surface of the shuttle. Information on shuttle position and attitude is provided in six degrees of freedom. The design of this optical system, its calibration, and the tracking algorithm are described. The performance of the system is evaluated for yaw motion only.

  18. Augmented reality glass-free three-dimensional display with the stereo camera

    NASA Astrophysics Data System (ADS)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display based on a stereo camera, used to present parallax content from different angles with a lenticular lens array, is proposed. Compared with previous implementations of AR techniques based on two-dimensional (2D) panel displays with only one viewpoint, the proposed method realizes glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can obtain abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on a stereo camera can realize AR glass-free 3D display, and both the virtual objects and the real scene exhibit realistic and obvious stereo performance.

  19. Binocular Vision-Based Position and Pose of Hand Detection and Tracking in Space

    NASA Astrophysics Data System (ADS)

    Jun, Chen; Wenjun, Hou; Qing, Sheng

    After studying image segmentation, the CamShift target-tracking algorithm, and stereo vision models of space, an improved algorithm based on frame differencing and a new spatial point-positioning model are proposed, and a binocular visual motion-tracking system is constructed to verify the improved algorithm and the new model. The problem of detecting and tracking the spatial position and pose of the hand is thereby addressed.
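    A minimal sketch of the frame-differencing idea (Python/NumPy; the frames are synthetic, and plain three-frame differencing is used as a generic stand-in for the paper's improved algorithm):

      import numpy as np

      def three_frame_diff(f0, f1, f2, thresh=20):
          """Classic three-frame differencing: a pixel is 'moving' only if it
          differs from both the previous and the next frame."""
          d1 = np.abs(f1 - f0) > thresh
          d2 = np.abs(f2 - f1) > thresh
          return d1 & d2

      def centroid(mask):
          ys, xs = np.nonzero(mask)
          return (xs.mean(), ys.mean()) if len(xs) else None

      # Synthetic grayscale frames with a hand-sized blob moving right; the
      # displacement exceeds the blob width, so the middle-frame position
      # is isolated by the intersection of the two difference masks.
      rng = np.random.default_rng(4)
      frames = []
      for t in range(3):
          f = rng.normal(100, 2, (120, 160))
          f[50:70, 40 + 30 * t: 60 + 30 * t] += 80
          frames.append(f)

      mask = three_frame_diff(*frames)
      print("moving-object centroid:", centroid(mask))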

  20. Target tracking and surveillance by fusing stereo and RFID information

    NASA Astrophysics Data System (ADS)

    Raza, Rana H.; Stockman, George C.

    2012-06-01

    Ensuring security in high-risk areas such as an airport is an important but complex problem. Effectively tracking personnel, containers, and machines is a crucial task. Moreover, security and safety require understanding the interaction of persons and objects. Computer vision (CV) has been a classic tool; however, variable lighting, imaging conditions, and random occlusions present difficulties for real-time surveillance, resulting in erroneous object detection and trajectories. Determining object ID via CV at any instant in a crowded area is computationally prohibitive, yet the trajectories of personnel and objects should be known in real time. Radio Frequency Identification (RFID) can be used to reliably identify target objects and can even locate targets at coarse spatial resolution, while CV provides fuzzy features for target ID at finer resolution. Our research demonstrates the benefits obtained when most objects are "cooperative" by being RFID tagged. Fusion provides a method to simplify the correspondence problem in 3D space. A surveillance system can query for unique object ID as well as tag ID information, such as target height, texture, shape, and color, which can greatly enhance scene analysis. We extend geometry-based tracking so that intermittent information on ID and location can be used in determining a set of trajectories of N targets over T time steps. We show that partial target information obtained through RFID can reduce computation time (by 99.9% in some cases) and also increase the likelihood of producing correct trajectories. We conclude that real-time decision-making should be possible if the surveillance system can integrate information effectively between the sensor level and the activity-understanding level.
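    One way coarse RFID fixes can simplify the correspondence problem is by gating the assignment between visual detections and tag identities, as in this sketch (Python/SciPy; the gate radius and positions are invented, and the Hungarian solver is a generic stand-in for the paper's geometry-based tracker):

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      # Coarse RFID fixes gate the detection-to-ID correspondence: pairs
      # farther apart than the RFID's spatial resolution are excluded, and
      # the rest are resolved as a small assignment problem.
      rng = np.random.default_rng(5)
      tags = rng.uniform(0, 50, (6, 2))                # coarse RFID positions
      dets = tags + rng.normal(0, 1.0, tags.shape)     # fine CV detections
      dets = dets[rng.permutation(len(dets))]

      gate = 5.0                                       # assumed RFID resolution, m
      cost = np.linalg.norm(tags[:, None] - dets[None], axis=2)
      cost[cost > gate] = 1e6                          # prune implausible pairs

      rows, cols = linear_sum_assignment(cost)
      for r, c in zip(rows, cols):
          print(f"tag {r} -> detection {c} (residual {cost[r, c]:.2f} m)")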

  1. Stereo-Optic High Definition Imaging: A New Technology to Understand Bird and Bat Avoidance of Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Evan; Goodale, Wing; Burns, Steve

    There is a critical need to develop monitoring tools to track aerofauna (birds and bats) in three dimensions around wind turbines. New monitoring systems will reduce permitting uncertainty by increasing the understanding of how birds and bats are interacting with wind turbines, which will improve the accuracy of impact predictions. Biodiversity Research Institute (BRI), The University of Maine Orono School of Computing and Information Science (UMaine SCIS), HiDef Aerial Surveying Limited (HiDef), and SunEdison, Inc. (formerly First Wind) responded to this need by using stereo-optic cameras with near-infrared (nIR) technology to investigate new methods for documenting aerofauna behavior around wind turbines. The stereo-optic camera system used two synchronized high-definition video cameras with fisheye lenses and processing software that detected moving objects, which could be identified in post-processing. The stereo-optic imaging system offered the ability to extract 3-D position information from pairs of images captured from different viewpoints. Fisheye lenses allowed for a greater field of view, but required more complex image rectification to contend with fisheye distortion. The ability to obtain 3-D positions provided crucial data on the trajectory (speed and direction) of a target, which, when the technology is fully developed, will provide data on how animals are responding to and interacting with wind turbines. This project was focused on testing the performance of the camera system, improving video review processing time, advancing the 3-D tracking technology, and moving the system from Technology Readiness Level 4 to 5. To achieve these objectives, we determined the size and distance at which aerofauna (particularly eagles) could be detected and identified, created efficient data management systems, improved the video post-processing viewer, and attempted refinement of 3-D modeling with respect to fisheye lenses. The 29-megapixel camera system successfully captured 16,173 five-minute video segments in the field. During nighttime field trials using nIR, we found that bat-sized objects could not be detected more than 60 m from the camera system. This led to a decision to focus research efforts exclusively on daytime monitoring and to redirect resources towards improving the video post-processing viewer. We redesigned the bird event post-processing viewer, which substantially decreased the review time necessary to detect and identify flying objects. During daytime field trials, we determined that eagles could be detected up to 500 m away using the fisheye wide-angle lenses, and eagle-sized targets could be identified to species within 350 m of the camera system. We used distance-sampling survey methods to describe the probability of detecting and identifying eagles and other aerofauna as a function of distance from the system. The previously developed 3-D algorithm for object isolation and tracking was tested, but the image rectification (flattening) required to obtain accurate distance measurements with fisheye lenses was determined to be insufficient for distant eagles. We used MATLAB and OpenCV to improve fisheye lens rectification towards the center of the image, but accurate measurements towards the image corners could not be achieved. We believe that changing the fisheye lens to a rectilinear lens would greatly improve position estimation, but doing so would result in a decrease in viewing angle and depth of field. Finally, we generated simplified shape profiles of birds to look for similarities between unknown animals and known species. With further development, this method could provide a mechanism for filtering large numbers of shapes to reduce data storage and processing. These advancements further refined the camera system and brought this new technology closer to market. Once commercialized, the stereo-optic camera system technology could be used to: a) research how different species interact with wind turbines in order to refine collision risk models and inform mitigation solutions; and b) monitor aerofauna interactions with terrestrial and offshore wind farms, replacing costly human observers and allowing for long-term monitoring in the offshore environment. The camera system will provide developers and regulators with data on the risk that wind turbines present to aerofauna, which will reduce uncertainty in the environmental permitting process.

  2. Pushbroom Stereo for High-Speed Navigation in Cluttered Environments

    DTIC Science & Technology

    2014-09-01

    inertial measurement sensors such as Achtelik et al.'s implementation of PTAM (parallel tracking and mapping) [15] with a barometric altimeter, stable flights...in indoor and outdoor environments are possible [1]. With a full vision-aided inertial navigation system (VINS), Li et al. have shown remarkable...avoidance on small UAVs. Stereo systems suffer from a similar speed issue, with most modern systems running at or below 30 Hz [8], [27]. Honegger et

  3. Stable Research Platform Workshop

    DTIC Science & Technology

    1988-04-01

    autonomous or manned submersibles, by providing them with a deep underwater garage for launch and recovery. A track system for bringing the vehicle... [figure and page residue omitted; the recoverable excerpt (Figures 5-6, SIO Reference 87-2) lists instrumentation: stereo-photography, a wave follower with multi-beam laser optical sensor, multi-frequency radar (10-100 GHz), surface tension sensors, and long-wave...]

  4. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we derive the relationship between the prism single-camera system and a dual-camera system; according to the principles of binocular vision, we then derive the relationship between binocular viewing and the dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular viewing and obtain the positional relations of prism, camera, and object that give the best stereo display. Finally, using the active-shutter stereo glasses of NVIDIA Company, we realize three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. The stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.

  5. Acquisition of stereo panoramas for display in VR environments

    NASA Astrophysics Data System (ADS)

    Ainsworth, Richard A.; Sandin, Daniel J.; Schulze, Jurgen P.; Prudhomme, Andrew; DeFanti, Thomas A.; Srinivasan, Madhusudhanan

    2011-03-01

    Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.

  6. Method for Stereo Mapping Based on Objectarx and Pipeline Technology

    NASA Astrophysics Data System (ADS)

    Liu, F.; Chen, T.; Lin, Z.; Yang, Y.

    2012-07-01

    Stereo mapping is an important way to acquire 4D products. Based on the development of stereo mapping and the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme that realizes interaction between AutoCAD and a digital photogrammetry system is offered via ObjectARX and pipeline technology. An experiment was conducted to verify its feasibility using the software MAP-AT (Modern Aerial Photogrammetry Automatic Triangulation) as an example; the experimental results show that this scheme is feasible and of great significance for integrating data acquisition and editing.

  7. Handheld pose tracking using vision-inertial sensors with occlusion handling

    NASA Astrophysics Data System (ADS)

    Li, Juan; Slembrouck, Maarten; Deboeverie, Francis; Bernardos, Ana M.; Besada, Juan A.; Veelaert, Peter; Aghajan, Hamid; Casar, José R.; Philips, Wilfried

    2016-07-01

    Tracking of a handheld device's three-dimensional (3-D) position and orientation is fundamental to various application domains, including augmented reality (AR), virtual reality, and interaction in smart spaces. Existing systems still offer limited performance in terms of accuracy, robustness, computational cost, and ease of deployment. We present a low-cost, accurate, and robust system for handheld pose tracking using fused vision and inertial data. The integration of measurements from embedded accelerometers reduces the number of unknown parameters in the six-degree-of-freedom pose calculation. The proposed system requires two light-emitting diode (LED) markers to be attached to the device, which are tracked by external cameras through an algorithm robust to illumination changes. Three data fusion methods have been proposed: a triangulation-based stereo-vision system, a constraint-based stereo-vision system with occlusion handling, and a triangulation-based multivision system. Real-time demonstrations of the proposed system applied to AR and 3-D gaming are also included. The accuracy assessment of the proposed system is carried out by comparison with data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system has achieved high accuracy of a few centimeters in position estimation and a few degrees in orientation estimation.
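    The triangulation step at the core of such stereo-vision systems can be written compactly. A standard linear (DLT) two-view triangulation sketch (Python/NumPy; the camera intrinsics and geometry are invented for the example):

      import numpy as np

      def triangulate(P1, P2, x1, x2):
          """Linear (DLT) triangulation of one point seen in two cameras.
          P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
          A = np.vstack([x1[0] * P1[2] - P1[0],
                         x1[1] * P1[2] - P1[1],
                         x2[0] * P2[2] - P2[0],
                         x2[1] * P2[2] - P2[1]])
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]

      # Two synthetic cameras 0.5 m apart along x, both looking down +z.
      K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])

      X_true = np.array([0.2, -0.1, 3.0, 1.0])   # homogeneous LED position
      x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
      x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
      print("triangulated:", triangulate(P1, P2, x1, x2))  # ~ [0.2, -0.1, 3.0]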

  8. A novel approach for epipolar resampling of cross-track linear pushbroom imagery using orbital parameters model

    NASA Astrophysics Data System (ADS)

    Jannati, Mojtaba; Valadan Zoej, Mohammad Javad; Mokhtarzade, Mehdi

    2018-03-01

    This paper presents a novel approach to epipolar resampling of cross-track linear pushbroom imagery using the orbital parameters model (OPM). The backbone of the proposed method relies on modifying the attitude parameters of linear-array stereo imagery so as to parallelize the approximate conjugate epipolar lines (ACELs) with the instantaneous baseline (IBL) of the conjugate image points (CIPs). Afterward, a complementary rotation is applied in order to parallelize all the ACELs throughout the stereo imagery. The new estimated attitude parameters are evaluated based on the direction of the IBL and the ACELs. Due to the spatial and temporal variability of the IBL (changes in the column and row numbers of the CIPs, respectively) and the nonparallel nature of the epipolar lines in stereo linear images, polynomials in both the column and row numbers of the CIPs are used to model the new attitude parameters. As the instantaneous positions of the sensors remain fixed, a digital elevation model (DEM) of the area of interest is not required in the resampling process. According to the experimental results obtained from two pairs of SPOT and RapidEye stereo imagery with high elevation relief, the average absolute values of the remaining vertical parallaxes of the CIPs in the normalized images were 0.19 and 0.28 pixels, respectively, which confirms the high accuracy and applicability of the proposed method.

  9. 3D Tracking of Mating Events in Wild Swarms of the Malaria Mosquito Anopheles gambiae

    PubMed Central

    Butail, Sachit; Manoukis, Nicholas; Diallo, Moussa; Yaro, Alpha S.; Dao, Adama; Traoré, Sekou F.; Ribeiro, José M.; Lehmann, Tovi; Paley, Derek A.

    2013-01-01

    We describe an automated tracking system that allows us to reconstruct the 3D kinematics of individual mosquitoes in swarms of Anopheles gambiae. The inputs to the tracking system are video streams recorded from a stereo camera system. The tracker uses a two-pass procedure to automatically localize and track mosquitoes within the swarm. A human-in-the-loop step verifies the estimates and connects broken tracks. The tracker performance is illustrated using footage of mating events filmed in Mali in August 2010. PMID:22254411

  10. KSC-06pd2275

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers unlatch the transportation canister segments that enclose the STEREO spacecraft. The spacecraft is being prepared for launch, scheduled for Oct. 25. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft that will launch in a piggyback mode, separating after reaching the appropriate orbit. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Jim Grossmann

  11. KSC-06pd2279

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers begin removing the protective cover surrounding the STEREO spacecraft. The spacecraft is being prepared for launch, scheduled for Oct. 25. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft that will launch in a piggyback mode, separating after reaching the appropriate orbit. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Jim Grossmann

  12. KSC-06pd2281

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, the transportation canister and protective cover have been removed from the STEREO spacecraft in preparation for launch. The scheduled launch date is Oct. 25. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft that will launch in a piggyback mode, separating after reaching the appropriate orbit. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Jim Grossmann

  13. KSC-06pd2282

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, the transportation canister and protective cover have been removed from the STEREO spacecraft in preparation for launch. The scheduled launch date is Oct. 25. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft that will launch in a piggyback mode, separating after reaching the appropriate orbit. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Jim Grossmann

  14. KSC-06pd2276

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers observe the lifting of the upper segment of the transportation canister that encloses the STEREO spacecraft. The spacecraft is being prepared for launch, scheduled for Oct. 25. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft that will launch in a piggyback mode, separating after reaching the appropriate orbit. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Jim Grossmann

  15. KSC-06pd2278

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers begin removing the lower segment of the transportation canister that encloses the STEREO spacecraft. The spacecraft is being prepared for launch, scheduled for Oct. 25. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft that will launch in a piggyback mode, separating after reaching the appropriate orbit. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Jim Grossmann

  16. KSC-06pd2280

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers begin removing the protective cover surrounding the STEREO spacecraft. The spacecraft is being prepared for launch, scheduled for Oct. 25. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft that will launch in a piggyback mode, separating after reaching the appropriate orbit. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Jim Grossmann

  17. Catheter tracking in an interventional photoacoustic surgical system

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Itsarachaiyot, Yuttana; Kim, Younsu; Zhang, Haichong K.; Taylor, Russell H.; Boctor, Emad M.

    2017-03-01

    In laparoscopic medical procedures, accurate tracking of interventional tools such as catheters is necessary. Current practice for tracking catheters often involves using fluoroscopy, which is best avoided to minimize the radiation dose to the patient and the surgical team. Photoacoustic imaging is an emerging imaging modality that can be used for this purpose and does not currently have a general tool-tracking solution. Photoacoustic-based catheter tracking would increase its attractiveness by providing both an imaging and a tracking solution. We present a catheter tracking method based on the photoacoustic effect. Photoacoustic markers are simultaneously observed by a stereo camera as well as by a piezoelectric element attached to the tip of a catheter. The signals received by the piezoelectric element can be used to compute its position relative to the photoacoustic markers using multilateration. This combined information can be processed to localize the position of the piezoelectric element with respect to the stereo camera system. We presented the methods to enable this work and demonstrated precisions of 1-3 mm and a relative accuracy of less than 4% in four independent locations, which are comparable to conventional systems. In addition, we also showed in another experiment a reconstruction precision up to 0.4 mm and an estimated accuracy up to 0.5 mm. Future work will include simulations to better evaluate this method and its challenges, and the development of concurrent photoacoustic marker projection and its associated methods.
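    The multilateration step, recovering the element position from distances to known photoacoustic markers, can be linearized and solved in closed form. A sketch under simple assumptions (Python/NumPy; the marker layout and speed of sound are illustrative):

      import numpy as np

      def multilaterate(anchors, dists):
          """Least-squares position from distances to known anchor points.
          Subtracting the first range equation from the rest linearizes it:
          2*(a_i - a_0) @ p = |a_i|^2 - |a_0|^2 - d_i^2 + d_0^2."""
          A = 2.0 * (anchors[1:] - anchors[0])
          b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
               - dists[1:] ** 2 + dists[0] ** 2)
          p, *_ = np.linalg.lstsq(A, b, rcond=None)
          return p

      # Photoacoustic markers at known (camera-frame) positions, in metres.
      markers = np.array([[0.00, 0.00, 0.00],
                          [0.05, 0.00, 0.00],
                          [0.00, 0.05, 0.00],
                          [0.02, 0.02, 0.04]])
      tip = np.array([0.03, 0.01, 0.08])      # unknown catheter-tip position
      c = 1540.0                              # assumed speed of sound, m/s
      tof = np.linalg.norm(markers - tip, axis=1) / c   # times of flight
      est = multilaterate(markers, tof * c)
      print("estimated tip:", np.round(est, 4))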

  18. The HRSC Experiment on Mars Express: First Imaging Results from the Commissioning Phase

    NASA Astrophysics Data System (ADS)

    Oberst, J.; Neukum, G.; Hoffmann, H.; Jaumann, R.; Hauber, E.; Albertz, J.; McCord, T. B.; Markiewicz, W. J.

    2004-12-01

    The ESA Mars Express spacecraft was launched from Baikonur on June 2, 2003, entered Mars orbit on December 25, 2003, and reached the nominal mapping orbit on January 28, 2004. Observing conditions were favorable early on for the HRSC (High Resolution Stereo Camera), designed for mapping the Martian surface in 3-D. The HRSC is a pushbroom scanner with 9 CCD line detectors mounted in parallel on the focal plane, perpendicular to the direction of flight. The camera can obtain images at high resolution (10 m/pix), in triple stereo (20 m/pix), in four colors, and at five different phase angles near-simultaneously. An additional Super-Resolution Channel (SRC) yields nested-in images at 2.3 m/pix for detailed photogeologic studies. Even with nominal spacecraft trajectory and camera pointing data from the commissioning phase, solid stereo image reconstructions are feasible. Moreover, the three-line stereo data allow us to identify and correct errors in navigation data. We find that >99% of the stereo rays intersect within a sphere of radius <20 m after orbit and pointing data correction. From the HRSC images we have produced Digital Terrain Models (DTMs) with pixel sizes of 200 m, some of them better. HRSC stereo models and data obtained by MOLA (Mars Orbiting Laser Altimeter) show good qualitative agreement. Differences in absolute elevations are within 50 m, but may reach several hundred meters in lateral positioning (mostly in the spacecraft along-track direction). After correction of these offsets, the HRSC topographic data conveniently fill the gaps between the MOLA tracks and reveal hitherto unrecognized morphologic detail. At the time of writing, the HRSC has covered approximately 22.5 million square kilometers of the Martian surface. In addition, data from 5 Phobos flybys from May through August 2004 were obtained. The HRSC is beginning to make major contributions to geoscience, atmospheric science, photogrammetry, and cartography of Mars (papers submitted to Nature).

  19. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrate the effectiveness and accuracy of the proposed technique.
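    The color crosstalk correction can be modeled as linear unmixing of the two optical paths. A minimal sketch (Python/NumPy; the crosstalk matrix values are invented, and the paper's correction method may differ in detail):

      import numpy as np

      # Linear unmixing: the recorded red/blue channels are modeled as a
      # mixture of the two optical paths, I_rec = C @ I_true, with an
      # (illustrative) crosstalk matrix C.  Inverting C separates the views.
      C = np.array([[0.95, 0.08],     # red channel: mostly path 1
                    [0.06, 0.93]])    # blue channel: mostly path 2

      rng = np.random.default_rng(6)
      view1 = rng.uniform(0, 1, (4, 4))   # image seen through optical path 1
      view2 = rng.uniform(0, 1, (4, 4))   # image seen through optical path 2

      recorded = np.einsum("ij,jhw->ihw", C, np.stack([view1, view2]))
      separated = np.einsum("ij,jhw->ihw", np.linalg.inv(C), recorded)

      err = np.abs(separated - np.stack([view1, view2])).max()
      print("max separation error:", err)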

  20. Tracking multiple surgical instruments in a near-infrared optical system.

    PubMed

    Cai, Ken; Yang, Rongqian; Lin, Qinyong; Wang, Zhigang

    2016-12-01

    Surgical navigation systems can assist doctors in performing more precise and more efficient surgical procedures and in avoiding various accidents. The near-infrared optical system (NOS) is an important component of surgical navigation systems. However, several surgical instruments are used during surgery, and effectively tracking all of them is challenging. A stereo matching algorithm using two intersecting lines and surgical instrument codes is proposed in this paper. In our NOS, the markers on the surgical instruments are captured by two near-infrared cameras. After automatically searching for and extracting their subpixel coordinates in the left and right images, the coordinates of the real and pseudo markers are determined by the two intersecting lines. Finally, the pseudo markers are removed to achieve accurate stereo matching by summing the codes for the distances between a specific marker and the other two markers on the surgical instrument. Experimental results show that the markers on different surgical instruments can be automatically and accurately recognized, and the NOS can accurately track multiple surgical instruments.
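    The distance-sum codes used to tell markers apart are invariant to rigid motion, as the following sketch shows (Python/NumPy; the marker layout is hypothetical):

      import numpy as np

      def distance_code(markers):
          """For each marker, the sum of its distances to the other markers
          on the same instrument: a rigid-motion-invariant signature."""
          d = np.linalg.norm(markers[:, None] - markers[None], axis=2)
          return d.sum(axis=1)

      # Three-marker instrument with deliberately distinct inter-marker spacing.
      template = np.array([[0.0, 0.0, 0.0],
                           [4.0, 0.0, 0.0],
                           [0.0, 7.0, 0.0]])   # cm

      # The same instrument observed in an arbitrary pose, markers shuffled.
      theta = np.deg2rad(30)
      R = np.array([[np.cos(theta), -np.sin(theta), 0],
                    [np.sin(theta),  np.cos(theta), 0],
                    [0, 0, 1.0]])
      observed = (template @ R.T + [10, 5, 2])[[2, 0, 1]]

      code_t, code_o = distance_code(template), distance_code(observed)
      match = np.argmin(np.abs(code_t[:, None] - code_o[None]), axis=1)
      print("template marker i corresponds to observed marker:", match)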

  1. LROC Stereo Observations

    NASA Astrophysics Data System (ADS)

    Beyer, Ross A.; Archinal, B.; Li, R.; Mattson, S.; Moratto, Z.; McEwen, A.; Oberst, J.; Robinson, M.

    2009-09-01

    The Lunar Reconnaissance Orbiter Camera (LROC) will obtain two types of multiple overlapping coverage to derive terrain models of the lunar surface. LROC has two Narrow Angle Cameras (NACs), working jointly to provide a wider (in the cross-track direction) field of view, as well as a Wide Angle Camera (WAC). LRO's orbit precesses, and the same target can be viewed at different solar azimuth and incidence angles, providing the opportunity to acquire 'photometric stereo' in addition to traditional 'geometric stereo' data. Geometric stereo refers to images acquired by LROC with two observations at different times. They must have different emission angles to provide a stereo convergence angle such that the resultant images have enough parallax for a reasonable stereo solution. The lighting at the target must not be radically different: if shadows move substantially between observations, it is very difficult to correlate the images. The majority of NAC geometric stereo will be acquired with one nadir and one off-pointed image (20-degree roll). Alternatively, pairs can be obtained with two spacecraft rolls (one to the left and one to the right), providing a stereo convergence angle of up to 40 degrees. Overlapping WAC images from adjacent orbits can be used to generate topography of near-global coverage at kilometer-scale effective spatial resolution. Photometric stereo refers to multiple-look observations of the same target under different lighting conditions. LROC will acquire at least three (ideally five) observations of a target. These observations should have near-identical emission angles, but varying solar azimuth and incidence angles. Such images can be processed via various methods to derive single-pixel-resolution topography and surface albedo. The LROC team will produce some topographic models, but stereo data collection is focused on acquiring the highest quality data so that such models can be generated later.
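    For the geometric stereo pairs described above, the base-to-height ratio set by the roll angle fixes the expected elevation precision. A small worked example (Python; the pixel scale and matching precision are assumed values, not LROC team specifications):

      import numpy as np

      # Expected elevation precision for a nadir + 20-degree off-nadir stereo
      # pair, using the standard rule dz = (matching error) * GSD / (B/H).
      gsd = 0.5                      # assumed NAC pixel scale, m
      roll = np.deg2rad(20.0)        # off-nadir roll of the second image

      b_over_h = np.tan(roll)        # base-to-height ratio of the pair
      match_err = 0.3                # assumed matching precision, pixels
      dz = match_err * gsd / b_over_h
      print(f"B/H ~ {b_over_h:.2f}; expected height precision ~ {dz:.2f} m")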

  2. The STEREO Mission: A New Approach to Space Weather Research

    NASA Technical Reports Server (NTRS)

    Kaiser, michael L.

    2006-01-01

    With the launch of the twin STEREO spacecraft in July 2006, a new capability will exist for both real-time space weather prediction and advances in space weather research. Whereas previous spacecraft monitors of the sun such as ACE and SOHO have been essentially on the sun-Earth line, the STEREO spacecraft will be in 1 AU orbits around the sun on either side of Earth and will be viewing the solar activity from distinctly different vantage points. As seen from the sun, the two spacecraft will separate at a rate of 45 degrees per year, with Earth bisecting the angle. The instrument complement on the two spacecraft will consist of a package of optical instruments capable of imaging the sun in the visible and ultraviolet from essentially the surface to 1 AU and beyond, a radio burst receiver capable of tracking solar eruptive events from an altitude of 2-3 Rs to 1 AU, and a comprehensive set of fields-and-particles instruments capable of measuring in situ solar events such as interplanetary magnetic clouds. In addition to normal daily recorded data transmissions, each spacecraft is equipped with a real-time beacon that will provide 1- to 5-minute snapshots or averages of the data from the various instruments. This beacon data will be received by NOAA and NASA tracking stations and then relayed to the STEREO Science Center located at Goddard Space Flight Center in Maryland, where the data will be processed and made available within a goal of 5 minutes of receipt on the ground. With STEREO's instrumentation and unique viewing geometry, we believe considerable improvement can be made in space weather prediction capability, as well as improved understanding of the three-dimensional structure of solar transient events.

  3. Curved CCD detector devices and arrays for multispectral astrophysical applications and terrestrial stereo panoramic cameras

    NASA Astrophysics Data System (ADS)

    Swain, Pradyumna; Mark, David

    2004-09-01

    The emergence of curved CCD detectors, as individual devices or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras, represents a major optical design advancement that greatly enhances the scientific potential of such instruments. In altering the primary detection surface within the telescope's optical instrumentation system from flat to curved, and conforming the applied CCD's shape precisely to the contour of the telescope's curved focal plane, a major increase in the amount of transmittable light at various wavelengths through the system is achieved. This in turn enables multi-spectral, ultra-sensitive imaging with the much greater spatial resolution necessary for large and very large telescope applications, including those involving infrared image acquisition and spectroscopy, conducted over very wide fields of view. For earth-based and space-borne optical telescopes, the advent of curved CCDs as the principal detectors provides a simplification of the telescope's adjoining optics, reducing the number of optical elements and the occurrence of optical aberrations associated with large corrective optics used to conform to flat detectors. New astronomical experiments may be devised in the presence of curved CCD applications, in conjunction with large-format cameras and curved mosaics, including three-dimensional imaging spectroscopy conducted over multiple wavelengths simultaneously, wide-field real-time stereoscopic tracking of remote objects within the solar system at high resolution, and deep-field survey mapping of distant objects such as galaxies with much greater multi-band spatial precision over larger sky regions. Terrestrial stereo panoramic cameras equipped with arrays of curved CCDs joined with associative wide-field optics will require less optical glass and no mechanically moving parts to maintain continuous proper stereo convergence over wider perspective viewing fields than their flat-CCD counterparts, lightening the cameras and enabling faster scanning and 3D integration of objects moving within a planetary terrain environment. Preliminary experiments conducted at the Sarnoff Corporation indicate the feasibility of curved CCD imagers with acceptable electro-optic integrity. Currently, we are in the process of evaluating the electro-optic performance of a curved wafer-scale CCD imager. Detailed ray-trace modeling and experimental electro-optical performance data obtained from the curved imager will be presented at the conference.

  4. Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)

    1999-01-01

    A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in a world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo matched to obtain three-dimensional locations of the particles as a function of time, so that velocities can be measured therefrom. The stereo imaging velocimetry technique of the present invention provides a full-field, quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.
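    After stereo matching yields particle positions as a function of time, velocities follow from finite differences. A minimal sketch (Python/NumPy; the track and frame rate are synthetic):

      import numpy as np

      # Central-difference velocity estimate from a stereo-matched 3-D track:
      # v(t) ~ (x(t+dt) - x(t-dt)) / (2*dt), one-sided at the endpoints.
      def track_velocity(positions, dt):
          return np.gradient(positions, dt, axis=0)  # handles endpoints for us

      dt = 1 / 30.0                                  # assumed frame interval, s
      t = np.arange(0, 1, dt)
      track = np.column_stack([0.2 * t,
                               0.05 * np.sin(2 * np.pi * t),
                               0.1 * t ** 2])

      v = track_velocity(track, dt)
      print("velocity at t=0.5 s:", v[len(t) // 2].round(3))  # ~ [0.2, -0.314, 0.1]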

  5. Simulation of Physical Experiments in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Wasfy, Tamer M.

    2001-01-01

    An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.
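
    To make the object model concrete, here is a minimal sketch of an event-driven software object and a container object of the kind the abstract describes. All names are illustrative assumptions; the actual VL implementation is not given in the abstract.

    ```python
    class VLObject:
        """A software object encapsulating properties, methods, and events."""
        def __init__(self, **properties):
            self.properties = properties
            self._handlers = {}  # event name -> list of callbacks

        def on(self, event, handler):
            self._handlers.setdefault(event, []).append(handler)

        def fire(self, event, *args):
            for handler in self._handlers.get(event, []):
                handler(*args)

    class Container(VLObject):
        """Groups several objects, like the abstract's container object."""
        def __init__(self, **properties):
            super().__init__(**properties)
            self.children = []

        def add(self, obj):
            self.children.append(obj)

    # e.g. a wand input object steering a running simulation
    wand = VLObject(kind="wand")
    wand.on("pressed", lambda pos: print("steer simulation toward", pos))
    wand.fire("pressed", (0.2, 1.5, 0.7))
    ```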

  6. Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom

    NASA Astrophysics Data System (ADS)

    Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.

    2001-05-01

    We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears an HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.

  7. Opportunity's Surroundings After Sol 1820 Drive (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11841 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11841

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009).

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  8. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    PubMed Central

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees’ flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees' flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006
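
    The depth cue invoked here can be stated compactly: under pure lateral translation, a surface element at distance d and bearing theta from the flight direction sweeps across the eye at angular velocity omega = v*sin(theta)/d, so nearer elements of the depth profile produce larger angular displacements. The short sketch below illustrates the relation with hypothetical numbers; it is not the paper's optic-flow model.

    ```python
    import numpy as np

    # Motion parallax under pure lateral translation:
    # omega = v * sin(theta) / d  (angular velocity of a surface element)
    v = 0.3                       # lateral flight speed, m/s (hypothetical)
    theta = np.deg2rad(90.0)      # element viewed broadside to the motion
    for d in (0.05, 0.10, 0.20):  # near vs. far parts of the profile, m
        omega = v * np.sin(theta) / d
        print(f"d = {d:.2f} m -> optic flow {np.rad2deg(omega):6.1f} deg/s")
    # nearer parts of the depth profile yield larger angular displacements,
    # which is the cue the analysed flight maneuvers appear to exploit
    ```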

  9. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    PubMed

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees' flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots.

  10. 3D visualization techniques for the STEREO-mission

    NASA Astrophysics Data System (ADS)

    Wiegelmann, T.; Podlipnik, B.; Inhester, B.; Feng, L.; Ruan, P.

    The forthcoming STEREO mission will observe the Sun from two different viewpoints. We expect about 2 GB of data per day, which calls for suitable data presentation techniques. A key feature of STEREO is that it will provide, for the first time, a 3D view of the Sun and the solar corona. In our normal environment we see objects three-dimensionally because the light from real 3D objects needs different travel times to our left and right eye. As a consequence, we see slightly different images with our eyes, which gives us information about the depth of objects and a corresponding 3D impression. Techniques for the 3D visualization of scientific and other data on paper, TV, computer screen, cinema, etc., are well known, e.g., the two-colour anaglyph technique, shutter glasses, polarization filters, and head-mounted displays. We discuss advantages and disadvantages of these techniques and how they can be applied to STEREO data. The 3D visualization techniques are not limited to visual images but can also be used to show the reconstructed coronal magnetic field and the energy and helicity distribution. In advance of STEREO, we test the method with data from SOHO, which provides us different viewpoints by way of the solar rotation. This restricts the analysis to structures which remain stationary for several days; real STEREO data will not be affected by these limitations, however.
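
    Of the listed techniques, the two-colour anaglyph is simple enough to sketch directly: the red channel is taken from the left-eye image and the green/blue (cyan) channels from the right-eye image. The snippet below assumes two co-registered RGB views of the same scene; it illustrates the general technique, not the authors' software.

    ```python
    import numpy as np

    def anaglyph(left_rgb, right_rgb):
        """Red-cyan anaglyph from a stereo pair (HxWx3 uint8 arrays)."""
        out = np.empty_like(left_rgb)
        out[..., 0] = left_rgb[..., 0]     # red channel from the left eye
        out[..., 1:] = right_rgb[..., 1:]  # green and blue from the right
        return out
    ```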

  11. Gamma/x-ray linear pushbroom stereo for 3D cargo inspection

    NASA Astrophysics Data System (ADS)

    Zhu, Zhigang; Hu, Yu-Chi

    2006-05-01

    For evaluating the contents of trucks, containers, cargo, and passenger vehicles by a non-intrusive gamma-ray or X-ray imaging system to determine the possible presence of contraband, three-dimensional (3D) measurements could provide more information than 2D measurements. In this paper, a linear pushbroom scanning model is built for such a commonly used gamma-ray or x-ray cargo inspection system. Accurate 3D measurements of the objects inside a cargo container can be obtained by using two such scanning systems with different scanning angles to construct a pushbroom stereo system. A simple but robust calibration method is proposed to find the important parameters of the linear pushbroom sensors. Then, a fast and automated stereo matching algorithm based on free-form deformable registration is developed to obtain 3D measurements of the objects under inspection. A user interface is designed for 3D visualization of the objects of interest. Experimental results of sensor calibration, stereo matching, 3D measurements and visualization of a 3D cargo container and the objects inside are presented.
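
    For orientation, the linear pushbroom model the paper builds on can be sketched as follows: one image coordinate is a time-like scan index produced by the sensor's motion, the other an ordinary perspective coordinate within the sensor line. This is a simplified illustration with assumed parameter names, not the paper's calibrated model.

    ```python
    import numpy as np

    def linear_pushbroom_project(X, R, C, speed, f):
        """Simplified linear pushbroom projection of a 3D point X.
        R, C: sensor orientation and position at scan line 0;
        speed: platform speed along its first axis; f: focal length."""
        p = R @ (X - C)       # point in sensor coordinates
        u = p[0] / speed      # along-track: crossing time (scan index)
        v = f * p[1] / p[2]   # across-track: perspective within the line
        return np.array([u, v])

    # Two such scanners with different scan angles yield two (u, v) pairs
    # per object point, from which its 3D position can be solved -- the
    # pushbroom stereo idea used here for cargo inspection.
    ```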

  12. Application of a Two Camera Video Imaging System to Three-Dimensional Vortex Tracking in the 80- by 120-Foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.; Bennett, Mark S.

    1993-01-01

    A description is presented of two enhancements for a two-camera, video imaging system that increase the accuracy and efficiency of the system when applied to the determination of three-dimensional locations of points along a continuous line. These enhancements increase the utility of the system when extracting quantitative data from surface and off-body flow visualizations. The first enhancement utilizes epipolar geometry to resolve the stereo "correspondence" problem. This is the problem of determining, unambiguously, corresponding points in the stereo images of objects that do not have visible reference points. The second enhancement is a method to automatically identify and trace the core of a vortex in a digital image. This is accomplished by means of an adaptive template matching algorithm. The system was used to determine the trajectory of a vortex generated by the Leading-Edge eXtension (LEX) of a full-scale F/A-18 aircraft tested in the NASA Ames 80- by 120-Foot Wind Tunnel. The system accuracy for resolving the vortex trajectories is estimated to be +/-2 inches over a distance of 60 feet. Stereo images of some of the vortex trajectories are presented. The system was also used to determine the point where the LEX vortex "bursts". The vortex burst point locations are compared with those measured in small-scale tests and in flight and found to be in good agreement.
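
    The first enhancement can be illustrated in a few lines: given the fundamental matrix F relating the two calibrated views, a point on the vortex trace in one image constrains its match in the other image to the epipolar line l' = F x. A minimal sketch, with F assumed known from calibration:

    ```python
    import numpy as np

    def epipolar_line(F, x_left):
        """Epipolar line in the right image for a left-image point.
        F: 3x3 fundamental matrix; x_left: (u, v) point on the trace."""
        return F @ np.array([x_left[0], x_left[1], 1.0])  # l' = F x

    def pick_correspondence(F, x_left, candidates):
        """Resolve the correspondence problem for points on a continuous
        line: choose the right-image candidate closest to the epipolar
        line (distances in pixels)."""
        a, b, c = epipolar_line(F, x_left)
        def dist(x):
            return abs(a * x[0] + b * x[1] + c) / np.hypot(a, b)
        return min(candidates, key=dist)
    ```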

  13. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregularly shaped objects with different three-dimensional (3D) appearances are difficult to shape into a customized uniform pattern by current laser machining approaches. A laser galvanometric scanning system (LGS) could be a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of those irregularly shaped objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which takes advantage of both the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  14. After Conquering 'Husband Hill,' Spirit Moves On (Stereo)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA03062

    [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA03062

    The first explorer ever to scale a summit on another planet, NASA's Mars Exploration Rover Spirit has begun a long trek downward from the top of 'Husband Hill' to new destinations. As shown in this 180-degree panorama from east of the summit, Spirit's earlier tracks are no longer visible. They are off to the west (to the left in this view). Spirit's next destination is 'Haskin Ridge,' straight ahead along the edge of the steep cliff on the right side of this panorama.

    The scene is a mosaic of images that Spirit took with the navigation camera on the rover's 635th Martian day, or sol, (Oct. 16, 2005) of exploration of Gusev Crater on Mars. This stereo view is presented in a cylindrical-perspective projection with geometric seam correction.

  15. Opportunity's Surroundings on Sol 1687 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11739 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11739

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, 360-degree view of the rover's surroundings on the 1,687th Martian day, or sol, of its surface mission (Oct. 22, 2008). The view appears three-dimensional when viewed through red-blue glasses.

    Opportunity had driven 133 meters (436 feet) that sol, crossing sand ripples up to about 10 centimeters (4 inches) tall. The tracks visible in the foreground are in the east-northeast direction.

    Opportunity's position on Sol 1687 was about 300 meters southwest of Victoria Crater. The rover was beginning a long trek toward a much larger crater, Endeavour, about 12 kilometers (7 miles) to the southeast.

    This panorama combines right-eye and left-eye views presented as cylindrical-perspective projections with geometric seam correction.

  16. KSC-06pd2381

    NASA Image and Video Library

    2006-10-19

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers secure the two halves of the fairing that enclose the STEREO spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. The STEREO (Solar Terrestrial Relations Observatory) mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. Designed and built by the Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  17. KSC-06pd2379

    NASA Image and Video Library

    2006-10-19

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers maneuver the second half of the fairing into place around the STEREO spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. The STEREO (Solar Terrestrial Relations Observatory) mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. Designed and built by the Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  18. KSC-06pd2380

    NASA Image and Video Library

    2006-10-19

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, the two fairing segments close in around the STEREO spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. The STEREO (Solar Terrestrial Relations Observatory) mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. Designed and built by the Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  19. KSC-06pd2377

    NASA Image and Video Library

    2006-10-19

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, the first half of the fairing is moved into place around the STEREO spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. The STEREO (Solar Terrestrial Relations Observatory) mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. Designed and built by the Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  20. KSC-06pd2375

    NASA Image and Video Library

    2006-10-19

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers help maneuver one segment of the fairing around the STEREO spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. The STEREO (Solar Terrestrial Relations Observatory) mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. Designed and built by the Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  1. KSC-06pd2378

    NASA Image and Video Library

    2006-10-19

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers check the placement of the first half of the fairing around the STEREO spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. The STEREO (Solar Terrestrial Relations Observatory) mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. Designed and built by the Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  2. KSC-06pd2373

    NASA Image and Video Library

    2006-10-19

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers (background) observe the lifting of the two fairing segments that will encapsulate the STEREO spacecraft (foreground). The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. The STEREO (Solar Terrestrial Relations Observatory) mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. Designed and built by the Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  3. KSC-06pd2370

    NASA Image and Video Library

    2006-10-19

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers prepare the twin observatories known as STEREO for encapsulation in the fairing. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. The STEREO (Solar Terrestrial Relations Observatory) mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. Designed and built by the Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  4. KSC-06pd2372

    NASA Image and Video Library

    2006-10-19

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers prepare the twin observatories known as STEREO for encapsulation in the fairing. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. The STEREO (Solar Terrestrial Relations Observatory) mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. Designed and built by the Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  5. KSC-06pd2374

    NASA Image and Video Library

    2006-10-19

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, one segment of the fairing is lifted toward the STEREO spacecraft in the foreground. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. The STEREO (Solar Terrestrial Relations Observatory) mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. Designed and built by the Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  6. KSC-06pd2376

    NASA Image and Video Library

    2006-10-19

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers help maneuver one segment of the fairing around the STEREO spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. The STEREO (Solar Terrestrial Relations Observatory) mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. Designed and built by the Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  7. KSC-06pd2371

    NASA Image and Video Library

    2006-10-19

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers prepare the twin observatories known as STEREO for encapsulation in the fairing. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. The STEREO (Solar Terrestrial Relations Observatory) mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. Designed and built by the Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  8. Television monitor field shifter and an opto-electronic method for obtaining a stereo image of optimal depth resolution and reduced depth distortion on a single screen

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1989-01-01

    A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.
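
    The doubling claim can be checked against the standard stereo range relations (disparity d = f*b/z, depth uncertainty ~ z^2/(f*b) per unit disparity error, magnification f/z). The numeric sketch below uses generic symbols and hypothetical values, not figures from the patent: doubling working distance, baseline, and focal length together leaves image size and depth resolution unchanged, so the longer working distance is free to reduce distortion.

    ```python
    # Stereo relations: disparity d = f*b/z; a disparity error eps maps to
    # a depth error  dz ~ z**2 / (f*b) * eps; image size scales as m = f/z.
    f, b, z, eps = 50.0, 100.0, 2000.0, 0.01   # mm (hypothetical values)

    dz1, m1 = z**2 / (f * b) * eps, f / z      # original configuration

    f2, b2, z2 = 2 * f, 2 * b, 2 * z           # double all three distances
    dz2, m2 = z2**2 / (f2 * b2) * eps, f2 / z2

    print(dz1 == dz2, m1 == m2)                # True True: depth resolution
                                               # and image size maintained
    ```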

  9. A portable multi-channel recording system for analysis of acceleration and angular velocity in six dimension.

    PubMed

    Yamashita, M; Yamashita, A; Ishii, T; Naruo, Y; Nagatomo, M

    1998-11-01

    A portable recording system was developed for the analysis of more than three analog signals collected in field work. A stereo audio recorder, available as a consumer product, was used as the core component of the system. Of the two recording tracks, one stores a multiplexed analog signal and the other a reference code. The reference code indicates the start of each multiplexing cycle and the switching point of each channel. The multiplexed signal is played back and decoded against the reference code to reconstruct the original signal profiles. Since commercial stereo recorders cut off the DC component, a fixed reference voltage is inserted into the multiplexing sequence. The change of voltage at the switch from the reference to the data channel is measured from the played-back signal to recover the original data with its DC component. Movements of vehicles and of the human head were analyzed with the system. It was verified to be capable of recording and analyzing multi-channel signals at a sampling rate of more than 10 Hz.
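
    The multiplexing scheme lends itself to a compact sketch: each cycle carries a fixed reference voltage followed by one sample per channel, and decoding measures each sample against the reference slot to restore the DC level the AC-coupled recorder removed. The sketch below idealizes the scheme (perfect cycle alignment via the reference-code track is assumed).

    ```python
    import numpy as np

    def multiplex(channels, v_ref=1.0):
        """Interleave samples from several channels onto one track,
        starting each cycle with a fixed reference voltage so the DC
        level can be recovered after AC-coupled recording."""
        n_samples = len(channels[0])
        frames = [np.concatenate(([v_ref], [ch[i] for ch in channels]))
                  for i in range(n_samples)]
        return np.concatenate(frames)

    def demultiplex(track, n_channels, v_ref=1.0):
        """Decode by measuring each sample against the reference slot of
        its cycle, restoring the DC component the recorder cut off.
        Cycle boundaries are assumed known from the reference-code track."""
        frames = track.reshape(-1, n_channels + 1)
        dc = v_ref - frames[:, 0]                 # per-cycle DC correction
        return [frames[:, k + 1] + dc for k in range(n_channels)]
    ```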

  10. An 18 m² cylindrical tracking detector made of 2.6 m long stereo mylar straw tubes with 100 μm resolution

    NASA Astrophysics Data System (ADS)

    Benussi, L.; Bertani, M.; Bianco, S.; Fabbri, F. L.; Gianotti, P.; Giardoni, M.; Ghezzo, A.; Guaraldo, C.; Lanaro, A.; Locchi, P.; Lu, J.; Lucherini, V.; Mecozzi, A.; Pace, E.; Passamonti, L.; Qaisar, N.; Ricciardi, A.; Sarwar, S.; Serdyouk, V.; Trasatti, L.; Volkov, A.; Zia, A.

    1998-12-01

    An array of 2424 2.6 m-long, 15 mm-diameter mylar straw tubes, arranged in two axial and four stereo layers, has been assembled. The array covers a cylindrical tracking surface of 18 m² and provides coordinate measurements in the drift direction and along the wire. Correction of the systematic effects introduced by gravitational sag and electrostatics, which dominate the detector performance especially with long straws, allows the wire position to be determined from the drift-time distribution. The correction has been applied to reach a space resolution of 40 μm with DME, 100 μm with Ar+C2H6, and 100-200 μm with CO2. Such a resolution is the best ever obtained for straws of these dimensions. A study of gas leakage for the straw system has been performed, and results are reported. The array is being commissioned as a subdetector of the FINUDA spectrometer, and its tracking performance is being studied with cosmic rays.

  11. Markerless rat head motion tracking using structured light for brain PET imaging of unrestrained awake small animals

    NASA Astrophysics Data System (ADS)

    Miranda, Alan; Staelens, Steven; Stroobants, Sigrid; Verhaeghe, Jeroen

    2017-03-01

    Preclinical positron emission tomography (PET) imaging in small animals is generally performed under anesthesia to immobilize the animal during scanning. More recently, for rat brain PET studies, methods to perform scans of unrestrained awake rats are being developed in order to avoid the unwanted effects of anesthesia on the brain response. Here, we investigate the use of a projected-structure stereo camera to track the motion of the rat head during the PET scan. The motion information is then used to correct the PET data. The stereo camera calculates a 3D point cloud representation of the scene, and the tracking is performed by point cloud matching using the iterative closest point algorithm. The main advantage of the proposed motion tracking is that no intervention, e.g. for marker attachment, is needed. A manually moved microDerenzo phantom experiment and 3 awake rat [18F]FDG experiments were performed to evaluate the proposed tracking method. The tracking accuracy was 0.33 mm rms. After motion-corrected image reconstruction, the microDerenzo phantom was recovered albeit with some loss of resolution. The reconstructed FWHM of the 2.5 and 3 mm rods increased by 0.94 and 0.51 mm, respectively, in comparison with the motion-free case. In the rat experiments, the average tracking success rate was 64.7%. The correlation of relative brain regional [18F]FDG uptake between the anesthesia and awake scan reconstructions was increased from on average 0.291 (not significant) before correction to 0.909 (p < 0.0001) after motion correction. Markerless motion tracking using structured light can be successfully used for tracking of the rat head for motion correction in awake rat PET scans.
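
    The tracking core, iterative closest point matching between the live structured-light point cloud and a reference cloud, can be sketched in a few lines of Python/NumPy with SciPy's KD-tree for the nearest-neighbour step. This is a bare point-to-point ICP under ideal conditions; the actual tracker must additionally cope with outliers and partial views.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp_step(source, target):
        """One ICP iteration: match each source point to its nearest
        target point, then solve the best rigid transform (Kabsch)."""
        idx = cKDTree(target).query(source)[1]
        matched = target[idx]
        mu_s, mu_t = source.mean(0), matched.mean(0)
        H = (source - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                    # rotation with det = +1
        t = mu_t - R @ mu_s                   # translation
        return R, t

    def icp(source, target, iters=30):
        """Align the current head cloud to the reference; the accumulated
        pose is what a motion-correction step would consume."""
        R_total, t_total = np.eye(3), np.zeros(3)
        for _ in range(iters):
            R, t = icp_step(source, target)
            source = source @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total
    ```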

  12. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    NASA Astrophysics Data System (ADS)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and a wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal plates with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  13. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.

    PubMed

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-12-22

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must deal with aggressive 6 DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System), even GPS-denied, situations; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The estimation system has two main parts. The first is a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry); it is derived and discussed in detail. The second is a long-range stereo visual odometry, proposed for high-altitude MAV odometry calculation using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimate. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight.
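
    As a schematic of loosely coupled fusion (not the paper's stochastic-cloning formulation, which additionally clones past states to absorb relative measurements such as visual odometry), a constant-velocity EKF with IMU-driven prediction and an absolute position update looks like this:

    ```python
    import numpy as np

    def predict(x, P, a_imu, dt, Q):
        """IMU-driven prediction for a constant-velocity state
        x = [position(3), velocity(3)] with covariance P."""
        F = np.block([[np.eye(3), dt * np.eye(3)],
                      [np.zeros((3, 3)), np.eye(3)]])
        x = F @ x + np.concatenate([0.5 * dt**2 * a_imu, dt * a_imu])
        return x, F @ P @ F.T + Q

    def update_absolute(x, P, z_pos, R):
        """Update with an absolute position fix (e.g. GPS or barometer
        mapped to altitude)."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ (z_pos - H @ x)
        return x, (np.eye(6) - K @ H) @ P
    ```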

  14. MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.

    PubMed

    Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram

    2015-11-01

    We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state-of-the-art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25 Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise-aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼1.5 cm error. We qualitatively compare to a state-of-the-art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.
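
    The volumetric fusion step, in the spirit of the KinectFusion-style averaging the abstract cites, can be sketched as a per-voxel truncated-signed-distance update. The grid layout and names below are illustrative assumptions, not the paper's GPU implementation.

    ```python
    import numpy as np

    def tsdf_update(tsdf, weights, voxel_xyz_cam, depth, K, trunc=0.03):
        """Fuse one depth map into a flat TSDF grid (running average).
        voxel_xyz_cam: (N,3) voxel centers in camera coordinates;
        depth: HxW map from per-frame stereo matching; K: intrinsics."""
        x, y, z = voxel_xyz_cam.T
        ok = z > 0                                # voxels in front of camera
        u = np.round(K[0, 0] * x[ok] / z[ok] + K[0, 2]).astype(int)
        v = np.round(K[1, 1] * y[ok] / z[ok] + K[1, 2]).astype(int)
        h, w = depth.shape
        inb = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        idx = np.flatnonzero(ok)[inb]
        sdf = depth[v[inb], u[inb]] - z[idx]      # signed distance along ray
        keep = sdf > -trunc                       # skip far-behind-surface
        idx, d = idx[keep], np.clip(sdf[keep] / trunc, -1.0, 1.0)
        tsdf[idx] = (tsdf[idx] * weights[idx] + d) / (weights[idx] + 1.0)
        weights[idx] += 1.0
        return tsdf, weights
    ```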

  15. Quantitative Evaluation of Stereo Visual Odometry for Autonomous Vessel Localisation in Inland Waterway Sensing Applications

    PubMed Central

    Kriechbaumer, Thomas; Blackburn, Kim; Breckon, Toby P.; Hamilton, Oliver; Rivas Casado, Monica

    2015-01-01

    Autonomous survey vessels can increase the efficiency and availability of wide-area river environment surveying as a tool for environment protection and conservation. A key challenge is the accurate localisation of the vessel, where bank-side vegetation or urban settlement preclude the conventional use of line-of-sight global navigation satellite systems (GNSS). In this paper, we evaluate unaided visual odometry, via an on-board stereo camera rig attached to the survey vessel, as a novel, low-cost localisation strategy. Feature-based and appearance-based visual odometry algorithms are implemented on a six degrees of freedom platform operating under guided motion but with stochastic variation in yaw, pitch and roll. Evaluation is based on a 663 m-long trajectory (>15,000 image frames) and statistical error analysis against ground truth position from a target tracking tachymeter integrating electronic distance and angular measurements. The position error of the feature-based technique (mean of ±0.067 m) is three times smaller than that of the appearance-based algorithm. From multi-variable statistical regression, we are able to attribute this error to the depth of tracked features from the camera in the scene and variations in platform yaw. Our findings inform effective strategies to enhance stereo visual localisation for the specific application of river monitoring. PMID:26694411
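
    Evaluation of this kind reduces to per-frame error statistics against the tachymeter track once the two trajectories are time-synchronised and expressed in the same frame; a minimal sketch under those assumptions:

    ```python
    import numpy as np

    def position_errors(estimated, ground_truth):
        """Per-frame position error between a visual-odometry trajectory
        and tachymeter ground truth (both Nx3, synchronised, same frame)."""
        return np.linalg.norm(estimated - ground_truth, axis=1)

    def summarize(err):
        """Summary statistics of the error series."""
        return {"mean": float(err.mean()),
                "rms": float(np.sqrt((err ** 2).mean())),
                "max": float(err.max())}
    ```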

  16. Tracking Topographic Changes from Multitemporal Stereo Images, Application to the Nili Patera Dune Field

    NASA Astrophysics Data System (ADS)

    Avouac, J.; Ayoub, F.; Bridges, N. T.; Leprince, S.; Lucas, A.

    2012-12-01

    The High Resolution Imaging Science Experiment (HiRISE) in orbit around Mars provides images with a nominal ground resolution of 25 cm. Its agility allows imaging the same scene with stereo view angles, thus allowing for Digital Elevation Model (DEM) extraction through stereo-photogrammetry. This dataset offers an exceptional opportunity to measure the topography with high precision and track its evolution with time. In this presentation, we will discuss how multi-temporal acquisitions of HiRISE images of the Nili Patera dune field allow tracking of ripple migration, assessment of sand fluxes, and evaluation of dune activity. We investigated in particular the use of multi-temporal DEMs to monitor the migration and morphologic evolution of the dune field. We present here the methodology used and the various challenges that must be overcome to best exploit the multi-temporal images. Two DEMs were extracted from two stereo image pairs acquired 390 Earth days apart in 2010-2011 using the SOCET SET photogrammetry software, with a 1 m post-spacing and a vertical accuracy of a few tens of centimeters. Prior to comparison, the registration of the DEMs, which was not precise enough out of SOCET SET, was improved by warping the second DEM onto the first one using the bedrock only as a support for registration. The vertical registration residual was estimated at around 40 cm RMSE and is mostly due to CCD misalignment and uncorrected spacecraft attitudes. Changes of elevation over time are usually determined from DEM differencing: provided that the DEMs are perfectly registered and sampled on the same grid, this approach readily quantifies erosion and deposition processes. As the dunes have moved horizontally, they are no longer physically aligned in the DEMs, and their morphologic evolution cannot be recovered easily from differencing the DEMs. In this particular setting the topographic evolution is best recovered from correlation of the DEMs. We measure that the fastest dunes have migrated by up to 1 meter per Earth year as a result of lee front deposition and stoss slope erosion. DEM differencing, after correction for horizontal migration, provides additional information on dune morphology evolution. Some dunes show vertical growth over the 390 days spanning the two DEMs, but we cannot exclude a bias due to the acquisition parameters. Indeed, the images of the two stereo pairs were acquired 22 and 5 days apart, respectively. During that time, the ripples lying on the dune surface have probably migrated. As the DEM extraction is based on feature tracking and parallax, this difference in DEM elevation may be only, or in part, due to the ripple migration between the acquisition times, which biased the measured dune elevations.
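
    The "correlate, then difference" idea can be sketched at pixel level with phase correlation; production work on such data uses sub-pixel image correlators, so treat this purely as an illustration of the principle, with integer shifts and wraparound handled naively.

    ```python
    import numpy as np

    def horizontal_shift(dem_a, dem_b):
        """Integer-pixel offset between two co-gridded DEM patches via
        phase correlation; roll dem_b by the result to align it to dem_a."""
        A = np.fft.fft2(dem_a - dem_a.mean())
        B = np.fft.fft2(dem_b - dem_b.mean())
        corr = np.fft.ifft2(A * np.conj(B)).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        return tuple(int((p + n // 2) % n - n // 2)   # signed, wrap-aware
                     for p, n in zip(peak, corr.shape))

    def elevation_change(dem_a, dem_b):
        """Difference the DEMs after undoing the horizontal migration, so
        the residual reflects vertical (morphologic) change only."""
        shift = horizontal_shift(dem_a, dem_b)
        return np.roll(dem_b, shift, axis=(0, 1)) - dem_a
    ```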

  17. Spirit Near 'Stapledon' on Sol 1802 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11781 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11781

    NASA Mars Exploration Rover Spirit used its navigation camera for the images assembled into this stereo, full-circle view of the rover's surroundings during the 1,802nd Martian day, or sol, (January 26, 2009) of Spirit's mission on the surface of Mars. South is at the center; north is at both ends.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    Spirit had driven down off the low plateau called 'Home Plate' on Sol 1782 (January 6, 2009) after spending 12 months on a north-facing slope on the northern edge of Home Plate. The position on the slope (at about the 9-o'clock position in this view) tilted Spirit's solar panels toward the sun, enabling the rover to generate enough electricity to survive its third Martian winter. Tracks at about the 11-o'clock position of this panorama can be seen leading back to that 'Winter Haven 3' site from the Sol 1802 position about 10 meters (33 feet) away. For scale, the distance between the parallel wheel tracks is about one meter (40 inches).

    Where the receding tracks bend to the left, a circular pattern resulted from Spirit turning in place at a soil target informally named 'Stapledon' after William Olaf Stapledon, a British philosopher and science-fiction author who lived from 1886 to 1950. Scientists on the rover team suspected that the soil in that area might have a high concentration of silica, resembling a high-silica soil patch discovered east of Home Plate in 2007. Bright material visible in the track furthest to the right was examined with Spirit's alpha particle X-ray spectrometer and found, indeed, to be rich in silica.

    The team laid plans to drive Spirit from this Sol 1802 location back up onto Home Plate, then southward for the rover's summer field season.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  18. Real-time tracking using stereo and motion: Visual perception for space robotics

    NASA Technical Reports Server (NTRS)

    Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann

    1994-01-01

    The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.

  19. Tracking a Non-Cooperative Target Using Real-Time Stereovision-Based Control: An Experimental Study.

    PubMed

    Shtark, Tomer; Gurfil, Pini

    2017-03-31

    Tracking a non-cooperative target is a challenge, because in unfamiliar environments most targets are unknown and unspecified. Stereovision is suited to this task, because it allows large areas to be scanned passively and the relative position, velocity and shape of objects to be estimated. This research is an experimental effort aimed at developing, implementing and evaluating real-time non-cooperative target tracking methods using stereovision measurements only. A computer-vision feature detection and matching algorithm was developed in order to identify and locate the target in the captured images. Three different filters were designed for estimating the relative position and velocity, and their performance was compared. A line-of-sight control algorithm was used for the purpose of keeping the target within the field of view. Extensive analytical and numerical investigations were conducted on the multi-view stereo projection equations and their solutions, which were used to initialize the different filters. This research shows, using an experimental and numerical evaluation, the benefits of using the unscented Kalman filter and the total least squares technique in the stereovision-based tracking problem. These findings offer a general and more accurate method for solving the static and dynamic stereovision triangulation problems and the concomitant line-of-sight control.
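
    The static triangulation problem at the heart of the paper, intersecting the cameras' lines of sight in a least-squares sense, can be sketched as follows. This is the ordinary least squares midpoint method; the paper's total least squares variant and the dynamic filtering are not shown.

    ```python
    import numpy as np

    def triangulate_ls(origins, directions):
        """3D point closest (in least squares) to a set of lines of sight,
        one per camera view, each given as (origin, direction)."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = d / np.linalg.norm(d)
            M = np.eye(3) - np.outer(d, d)  # projector normal to the ray
            A += M
            b += M @ o
        return np.linalg.solve(A, b)
    ```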

  20. Tracking a Non-Cooperative Target Using Real-Time Stereovision-Based Control: An Experimental Study

    PubMed Central

    Shtark, Tomer; Gurfil, Pini

    2017-01-01

    Tracking a non-cooperative target is a challenge, because in unfamiliar environments most targets are unknown and unspecified. Stereovision is suited to this task, because it allows large areas to be scanned passively and the relative position, velocity and shape of objects to be estimated. This research is an experimental effort aimed at developing, implementing and evaluating real-time non-cooperative target tracking methods using stereovision measurements only. A computer-vision feature detection and matching algorithm was developed in order to identify and locate the target in the captured images. Three different filters were designed for estimating the relative position and velocity, and their performance was compared. A line-of-sight control algorithm was used for the purpose of keeping the target within the field of view. Extensive analytical and numerical investigations were conducted on the multi-view stereo projection equations and their solutions, which were used to initialize the different filters. This research shows, using an experimental and numerical evaluation, the benefits of using the unscented Kalman filter and the total least squares technique in the stereovision-based tracking problem. These findings offer a general and more accurate method for solving the static and dynamic stereovision triangulation problems and the concomitant line-of-sight control. PMID:28362338

  1. What Are We Tracking ... and Why?

    NASA Astrophysics Data System (ADS)

    Suarez-Sola, I.; Davey, A.; Hourcle, J. A.

    2008-12-01

    What Are We Tracking ... and Why? It is impossible to define what adequate provenance is without knowing who is asking the question. What determines sufficient provenance information is not a function of the data, but of the question being asked of it. Many of these questions are asked by people not affiliated with the mission and possibly from different disciplines. To plan for every conceivable question would impose a significant burden on data systems that are designed to answer the mission's science objectives. Provenance is further complicated as each system might have a different definition of 'data set'. Is it the raw instrument results? Is it the result of numerical processing? Does it include the associated metadata? Does it include packaging? Depending on how a system defines 'data set', it may not be able to track provenance with sufficient granularity to answer the desired question, or we may end up with a complex web of relationships that significantly increases the system complexity. System designers must also remember that data archives are not a closed system. We need mechanisms for tracking not only the provenance relationships between data objects and the systems that generate them, but also from journal articles back to the data that were used to support the research. Simply creating a mirror of the data used, as done in other scientific disciplines, is unrealistic for terabyte- and petabyte-scale data sets. We present work by the Virtual Solar Observatory on the assignment of identifiers that could be used for tracking provenance and compare it to other proposed standards in the scientific and library science communities. We use the Solar Dynamics Observatory, STEREO and Hinode missions as examples where the concept of 'data set' breaks many systems for citing data.

  2. A New Approach for Inspection of Selected Geometric Parameters of a Railway Track Using Image-Based Point Clouds

    PubMed Central

    Sawicki, Piotr

    2018-01-01

    The paper presents the results of testing a proposed image-based point cloud measuring method for the determination of geometric parameters of a railway track. The study was performed based on a configuration of digital images and a reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which implement different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of the 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with the results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition for measurements and inspection of rail tracks (error m < 1 mm) specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011. PMID:29509679

  3. A New Approach for Inspection of Selected Geometric Parameters of a Railway Track Using Image-Based Point Clouds.

    PubMed

    Gabara, Grzegorz; Sawicki, Piotr

    2018-03-06

    The paper presents the results of testing a proposed image-based point cloud measuring method for the determination of geometric parameters of a railway track. The study was performed based on a configuration of digital images and a reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which implement different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of the 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with the results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition for measurements and inspection of rail tracks (error m < 1 mm) specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011.

  4. BRDF invariant stereo using light transport constancy.

    PubMed

    Wang, Liang; Yang, Ruigang; Davis, James E

    2007-09-01

    Nearly all existing methods for stereo reconstruction assume that scene reflectance is Lambertian and use brightness constancy as a matching invariant. We introduce a new invariant for stereo reconstruction called light transport constancy (LTC), which allows completely arbitrary scene reflectance, i.e., arbitrary bidirectional reflectance distribution functions (BRDFs). This invariant can be used to formulate a rank constraint on multiview stereo matching when the scene is observed under several lighting configurations in which only the lighting intensity varies. In addition, we show that this multiview constraint can be used with as few as two cameras and two lighting configurations. Unlike previous methods for BRDF-invariant stereo, LTC does not require precisely configured or calibrated light sources or calibration objects in the scene. Importantly, the new constraint can be used to provide BRDF invariance to any existing stereo method whenever appropriate lighting variation is available.
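
    A minimal sketch of how such a rank constraint could be scored for one candidate correspondence, assuming the intensities of the point are available in both views under K >= 2 lighting configurations; this illustrates the idea rather than reproducing the authors' implementation.

      import numpy as np

      def ltc_match_cost(i_left, i_right):
          """Light-transport-constancy cost for a candidate match.
          i_left, i_right: intensity vectors (shape (K,)) of the same
          candidate scene point in the left/right view under K lighting
          configurations. For a correct match the 2 x K intensity matrix
          is rank 1, so the second singular value is ~0; its normalised
          value serves as the matching cost."""
          m = np.vstack([i_left, i_right]).astype(float)   # 2 x K
          s = np.linalg.svd(m, compute_uv=False)           # singular values
          return s[1] / (s[0] + 1e-12)                     # ~0 for an ideal match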

  5. Three-Dimensional Reconstruction from Single Image Based on Combination of CNN and Multi-Spectral Photometric Stereo.

    PubMed

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-03-02

    Multi-spectral photometric stereo can recover pixel-wise surface normals from a single RGB image. The difficulty lies in that the intensity in each channel is a tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using a deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data are expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normals of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method over existing methods.
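
    The per-pixel recovery step can be illustrated as follows, assuming a calibrated 3x3 matrix that maps a unit surface normal to the RGB response; the matrix values below are invented placeholders, and the paper's CNN initialization and iterative refinement are not reproduced.

      import numpy as np

      # Hypothetical calibrated matrix: rows are the R, G, B responses; it
      # folds together light directions, spectral albedo and camera response.
      M = np.array([[0.9, 0.1, 0.3],
                    [0.2, 0.8, 0.4],
                    [0.1, 0.2, 0.9]])

      def normals_from_rgb(rgb):
          """Per-pixel normals from one multi-spectral image.
          rgb: (H, W, 3) float array. Returns unit normals, shape (H, W, 3)."""
          flat = rgb.reshape(-1, 3).T                # 3 x N intensity vectors
          n = np.linalg.solve(M, flat).T             # N x 3, un-normalised
          n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-12
          return n.reshape(rgb.shape)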

  7. Surveillance of medium and high Earth orbits using large baseline stereovision

    NASA Astrophysics Data System (ADS)

    Danescu, Radu; Ciurte, Anca; Oniga, Florin; Cristea, Octavian; Dolea, Paul; Dascal, Vlad; Turcu, Vlad; Mircea, Liviu; Moldovan, Dan

    2014-11-01

    The Earth is surrounded by a swarm of satellites and associated debris known as Resident Space Objects (RSOs). All RSOs orbit the Earth until they re-enter its atmosphere. There are three main RSO categories: Low Earth Orbit (LEO), at altitudes below 1,500 km; Medium Earth Orbit (MEO), used by Global Navigation Satellite Systems (GNSS) at altitudes around 20,000 km; and Geostationary Earth Orbit (GEO, also sometimes called the Clarke orbit), for geostationary satellites at an altitude of about 36,000 km. Geostationary orbits and orbits of higher altitude are also known as High Earth Orbits (HEO). Crucial for keeping an eye on RSOs, the Surveillance of Space (SofS) comprises detection, tracking, propagation of orbital parameters, cataloguing and analysis of these objects. This paper presents a large-baseline stereovision approach for detection and ranging of RSOs orbiting at medium to high altitudes. Two identical observation systems, each consisting of a camera, telescope, control computer and GPS receiver, are located 37 km apart and set to observe the same region of the sky. The telescopes are placed on equatorial mounts that compensate for the Earth's rotation, so that the stars appear stationary in the acquired images and the satellites appear as linear streaks. The two cameras are triggered simultaneously. The satellite streaks are detected in each image of the stereo pair using their streak-like appearance against the point-like stars, the motion of the streaks between successive frames, and the stereo disparity. The detected satellite pixels are then put into correspondence using the epipolar geometry, and the 3D position of the satellite in the Earth-Centered, Earth-Fixed (ECEF) reference frame is computed using stereo triangulation. Preliminary tests have been performed for both MEO and HEO orbits. The preliminary results indicate a very high detection rate for MEO orbits and a good detection rate for HEO orbits, dependent on the satellite's rotation.
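
    The final triangulation step can be sketched as the midpoint of the shortest segment between the two sight-line rays, with site positions and unit directions assumed to be already expressed in the ECEF frame:

      import numpy as np

      def triangulate_midpoint(p1, d1, p2, d2):
          """3D target position from two observation sites.
          p1, p2: site positions (ECEF, metres); d1, d2: unit line-of-sight
          directions toward the detected streak. Returns the midpoint of
          the shortest segment between the two (generally skew) rays;
          near-parallel rays make the denominator ill-conditioned."""
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          b = p2 - p1
          a = d1 @ d2
          t1 = (b @ d1 - (b @ d2) * a) / (1.0 - a ** 2)
          t2 = ((b @ d1) * a - b @ d2) / (1.0 - a ** 2)
          return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))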

  8. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with four audio Convolvotrons™ by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, with stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted, wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.

  9. Stereo vision with distance and gradient recognition

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition and movement to a target spot. With sensors that use infrared rays or ultrasound, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space would give a robot much more powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed to proceed without failure. This study developed an algorithm that recognizes the distance and gradient of the environment through a stereo matching process.
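
    A rough sketch of the two quantities the abstract names: depth from stereo disparity, and the gradient of an inclined plane fitted to the reconstructed points. The formulas are the standard ones, not taken from the paper, and the calibration values would come from the robot's camera rig.

      import numpy as np

      def depth_from_disparity(disparity_px, focal_px, baseline_m):
          """Depth of a matched point from its stereo disparity: Z = f*B/d."""
          return focal_px * baseline_m / disparity_px

      def slope_angle_deg(points_xyz):
          """Gradient of an inclined plane from reconstructed 3D points:
          fit z = a*x + b*y + c by least squares, then return the angle
          between the plane normal and the vertical, in degrees."""
          A = np.c_[points_xyz[:, 0], points_xyz[:, 1], np.ones(len(points_xyz))]
          (a, b, c), *_ = np.linalg.lstsq(A, points_xyz[:, 2], rcond=None)
          normal = np.array([-a, -b, 1.0])
          return float(np.degrees(np.arccos(normal[2] / np.linalg.norm(normal))))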

  10. Three-dimensional displays and stereo vision

    PubMed Central

    Westheimer, Gerald

    2011-01-01

    Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes. PMID:21490023

  11. Opportunity View on Sols 1803 and 1804 Stereo

    NASA Image and Video Library

    2009-03-03

    NASA Mars Exploration Rover Opportunity combined images into this full-circle view of the rover surroundings. Tracks from the rover drive recede northward across dark-toned sand ripples in the Meridiani Planum region of Mars. You need 3D glasses.

  12. Opportunity View After Drive on Sol 1806 Stereo

    NASA Image and Video Library

    2009-03-03

    NASA Mars Exploration Rover Opportunity combined images into this full-circle view of the rover surroundings. Tracks from the rover drive recede northward across dark-toned sand ripples in the Meridiani Planum region of Mars. You need 3D glasses.

  13. Compact 3D Camera for Shake-the-Box Particle Tracking

    NASA Astrophysics Data System (ADS)

    Hesseling, Christina; Michaelis, Dirk; Schneiders, Jan

    2017-11-01

    Time-resolved 3D particle tracking usually requires the time-consuming optical setup and calibration of 3 to 4 cameras. Here, a compact four-camera housing has been developed. The performance of the system using Shake-the-Box processing (Schanz et al. 2016) is characterized, and it is shown that the stereo base is large enough for sensible 3D velocity measurements. Results from successful experiments in water flows using LED illumination are presented. For large-scale wind tunnel measurements, an even more compact version of the system is mounted on a robotic arm. Once calibrated for a specific measurement volume, the system needs no recalibration even when it moves around. Co-axial illumination is provided through an optical fiber in the middle of the housing, illuminating the full measurement volume from one viewing direction. Helium-filled soap bubbles are used to ensure sufficient particle image intensity. This way, the measurement probe can be moved around complex 3D objects. By automatic scanning and stitching of recorded particle tracks, the detailed time-averaged flow field of a full volume measuring cubic meters in size is recorded and processed. Results from an experiment at TU Delft on the flow field around a cyclist are shown.

  14. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors

    PubMed Central

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-01-01

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must deal with aggressive 6-DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) situations, including GPS-denied ones; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The estimation system has two main parts: a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry), which is derived and discussed in detail, and a long-range stereo visual odometry proposed for high-altitude MAV odometry calculation that uses both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimation. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight. PMID:28025524
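
    For reference, a generic EKF measurement update of the kind used to loosely fuse an absolute measurement such as a GPS position fix is sketched below; the paper's stochastic-cloning treatment of relative measurements is not reproduced, and all numbers are placeholders.

      import numpy as np

      def ekf_update(x, P, z, h, H, R):
          """Standard EKF measurement update. x: state mean, P: covariance,
          z: measurement, h: predicted measurement, H: measurement Jacobian,
          R: measurement noise covariance."""
          y = z - h                                  # innovation
          S = H @ P @ H.T + R                        # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
          x = x + K @ y
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P

      # Example: 6-state [position, velocity]; GPS observes position only.
      x, P = np.zeros(6), np.eye(6)
      H = np.hstack([np.eye(3), np.zeros((3, 3))])
      z = np.array([1.0, 2.0, -0.5])                 # GPS position fix (m)
      x, P = ekf_update(x, P, z, H @ x, H, 4.0 * np.eye(3))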

  15. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    PubMed

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., a vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's three-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then applied to remove outliers caused by mismatches of the KLT algorithm. A space position constraint is imposed to filter out moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
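
    The minimization could look roughly as follows with SciPy's Levenberg-Marquardt solver, using the classic instantaneous-motion optical flow equations in normalized image coordinates; this sketches the general approach, not the paper's exact objective function.

      import numpy as np
      from scipy.optimize import least_squares

      def residuals(motion, points, flows, depths):
          """Difference between observed optical flow and the flow predicted
          from candidate ego-motion. motion: [wx, wy, wz, tx, ty, tz];
          points: N x 2 normalized image coords; flows: N x 2; depths: N."""
          w, t = motion[:3], motion[3:]
          res = []
          for (x, y), (u, v), Z in zip(points, flows, depths):
              # instantaneous-motion flow equations (focal length normalised)
              u_pred = (-t[0] + x * t[2]) / Z + x * y * w[0] - (1 + x * x) * w[1] + y * w[2]
              v_pred = (-t[1] + y * t[2]) / Z + (1 + y * y) * w[0] - x * y * w[1] - x * w[2]
              res.extend([u - u_pred, v - v_pred])
          return np.asarray(res)

      # sol = least_squares(residuals, x0=np.zeros(6), method="lm",
      #                     args=(points, flows, depths))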

  17. Geo-Referenced Dynamic Pushbroom Stereo Mosaics for 3D and Moving Target Extraction - A New Geometric Approach

    DTIC Science & Technology

    2009-12-01

    The approach facilitates reliable stereo matching, occlusion handling, accurate 3D reconstruction and robust moving target detection. Because the imagery is captured from a moving platform, obvious motion parallax and object occlusions must be handled naturally and effectively in order to detect moving targets.

  18. Panoramic stereo sphere vision

    NASA Astrophysics Data System (ADS)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost in the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built around a special combined fish-eye lens module, capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications that will benefit from PSSV.

  19. Opportunity's View After Drive on Sol 1806 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11816 [figures removed for brevity, see original site]

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 60.86 meters (200 feet) on the 1,806th Martian day, or sol, of Opportunity's surface mission (Feb. 21, 2009). North is at the center; south at both ends.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    Engineers designed the Sol 1806 drive to be driven backwards as a strategy to redistribute lubricant in the rover's wheels. The right-front wheel had been showing signs of increased friction.

    The rover's position after the Sol 1806 drive was about 2 kilometers (1.2 miles) south-southwest of Victoria Crater. Cumulative odometry was 14.74 kilometers (9.16 miles) since landing in January 2004, including 2.96 kilometers (1.84 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008).

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  20. Opportunity's View After Long Drive on Sol 1770 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11791 [figures removed for brevity, see original site]

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 104 meters (341 feet) on the 1,770th Martian day, or sol, of Opportunity's surface mission (January 15, 2009).

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    Prior to the Sol 1770 drive, Opportunity had driven less than a meter since Sol 1713 (November 17, 2008), while it used the tools on its robotic arm first to examine a meteorite called 'Santorini' during weeks of restricted communication while the sun was nearly in line between Mars and Earth, then to examine bedrock and soil targets near Santorini.

    The rover's position after the Sol 1770 drive was about 1.1 kilometers (two-thirds of a mile) south-southwest of Victoria Crater. Cumulative odometry was 13.72 kilometers (8.53 miles) since landing in January 2004, including 1.94 kilometers (1.21 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008).

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  1. First Impressions of CARTOSAT-1

    NASA Technical Reports Server (NTRS)

    Lutes, James

    2007-01-01

    CARTOSAT-1 RPCs need special handling. Absolute accuracy of uncontrolled scenes is poor (biases > 300 m). Noticeable cross-track scale error (+/- 3-4 m across stereo pair). Most errors are either biases or linear in line/sample (These are easier to correct with ground control).

  2. Object Detection using the Kinect

    DTIC Science & Technology

    2012-03-01

    Kinect camera and point cloud data from the Kinect's structured-light stereo system (figure 1). We obtain reasonable results using a single prototype, in the same manner we present in this report. For example, at Willow Garage, Steder uses a 3-D feature he developed to classify objects directly from point clouds. The report addresses detecting backpacks using the data available from the Kinect sensor; dense point clouds derived from stereo are notoriously noisy, which motivates a point cloud filtering stage.

  3. The perception of ego-motion change in environments with varying depth: Interaction of stereo and optic flow.

    PubMed

    Ott, Florian; Pohl, Ladina; Halfmann, Marc; Hardiess, Gregor; Mallot, Hanspeter A

    2016-07-01

    When estimating ego-motion in environments (e.g., tunnels, streets) with varying depth, human subjects confuse ego-acceleration with environment narrowing and ego-deceleration with environment widening. Festl, Recktenwald, Yuan, and Mallot (2012) demonstrated that in nonstereoscopic viewing conditions, this happens despite the fact that retinal measurements of acceleration rate (a variable related to tau-dot) should allow veridical perception. Here we address the question of whether additional depth cues (specifically binocular stereo, object occlusion, or constant average object size) help break the confusion between narrowing and acceleration. Using a forced-choice paradigm, the confusion is shown to persist even if unambiguous stereo information is provided. The confusion can also be demonstrated in an adjustment task in which subjects were asked to keep a constant speed in a tunnel with varying diameter: Subjects increased speed in widening sections and decreased speed in narrowing sections even though stereoscopic depth information was provided. If object-based depth information (stereo, occlusion, constant average object size) is added, the confusion between narrowing and acceleration still remains but may be slightly reduced. All experiments are consistent with a simple matched filter algorithm for ego-motion detection, neglecting both parallactic and stereoscopic depth information, but leave open the possibility of cue combination at a later stage.

  4. The Solar Stormwatch CME catalogue: Results from the first space weather citizen science project

    NASA Astrophysics Data System (ADS)

    Barnard, L.; Scott, C.; Owens, M.; Lockwood, M.; Tucker-Hood, K.; Thomas, S.; Crothers, S.; Davies, J. A.; Harrison, R.; Lintott, C.; Simpson, R.; O'Donnell, J.; Smith, A. M.; Waterson, N.; Bamford, S.; Romeo, F.; Kukula, M.; Owens, B.; Savani, N.; Wilkinson, J.; Baeten, E.; Poeffel, L.; Harder, B.

    2014-12-01

    Solar Stormwatch was the first space weather citizen science project, the aim of which is to identify and track coronal mass ejections (CMEs) observed by the Heliospheric Imagers aboard the STEREO satellites. The project has now been running for approximately 4 years, with input from >16,000 citizen scientists, resulting in a data set of >38,000 time-elongation profiles of CME trajectories, observed over 18 preselected position angles. We present our method for reducing this data set into a CME catalogue. The resulting catalogue consists of 144 CMEs over the period January 2007 to February 2010, of which 110 were observed by STEREO-A and 77 were observed by STEREO-B. For each CME, the time-elongation profiles generated by the citizen scientists are averaged into a consensus profile along each position angle that the event was tracked. We consider this catalogue to be unique, being at present the only citizen science-generated CME catalogue, tracking CMEs over an elongation range of 4° out to a maximum of approximately 70°. Using single-spacecraft fitting techniques, we estimate the speed, direction, solar source region, and latitudinal width of each CME. This shows that at present, the Solar Stormwatch catalogue (which covers only solar minimum years) contains almost exclusively slow CMEs, with a mean speed of approximately 350 km s⁻¹. The full catalogue is available for public access at www.met.reading.ac.uk/~spate/solarstormwatch. This includes, for each event, the unprocessed time-elongation profiles generated by Solar Stormwatch, the consensus time-elongation profiles, and a set of summary plots, as well as the estimated CME properties.
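
    Single-spacecraft fitting of the kind mentioned above is commonly done with the fixed-phi approximation, in which a point moving radially at constant speed traces a characteristic time-elongation profile; a hedged fitting sketch (variable names and the initial guess are illustrative) follows.

      import numpy as np
      from scipy.optimize import curve_fit

      def fixed_phi_elongation(t, v, phi, t0, r_obs=1.496e11):
          """Elongation (radians) vs. time (s) for a CME apex moving radially
          at constant speed v (m/s) along direction phi (radians from the
          observer-Sun line), seen from heliocentric distance r_obs (m)."""
          r = v * np.clip(t - t0, 0.0, None)       # CME heliocentric distance
          return np.arctan2(r * np.sin(phi), r_obs - r * np.cos(phi))

      # params, _ = curve_fit(fixed_phi_elongation, times_s, elongation_rad,
      #                       p0=[400e3, np.radians(60.0), 0.0])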

  5. StereoGene: rapid estimation of genome-wide correlation of continuous or interval feature data.

    PubMed

    Stavrovskaya, Elena D; Niranjan, Tejasvi; Fertig, Elana J; Wheelan, Sarah J; Favorov, Alexander V; Mironov, Andrey A

    2017-10-15

    Genomic features with similar genome-wide distributions are generally hypothesized to be functionally related; for example, colocalization of histones and transcription start sites indicates chromatin regulation of transcription factor activity. Therefore, statistical algorithms that perform spatial, genome-wide correlation among genomic features are required. Here, we propose a method, StereoGene, that rapidly estimates genome-wide correlation among pairs of genomic features. These features may represent high-throughput data mapped to a reference genome or sets of genomic annotations in that reference genome. StereoGene enables correlation of continuous data directly, avoiding data binarization and the subsequent data loss. Correlations are computed among neighboring genomic positions using kernel correlation. Representing the correlation as a function of genome position, StereoGene outputs the local correlation track as part of the analysis. StereoGene also accounts for confounders such as input DNA by partial correlation. We apply our method to numerous comparisons of ChIP-Seq datasets from the Human Epigenome Atlas and FANTOM CAGE to demonstrate its wide applicability. We observe changes in the correlation between epigenomic features across developmental trajectories of several tissue types, consistent with known biology, and find a novel spatial correlation of CAGE clusters with donor splice sites and with poly(A) sites. These analyses provide examples of the broad applicability of StereoGene for regulatory genomics. The StereoGene C++ source code, program documentation, Galaxy integration scripts and examples are available from the project homepage http://stereogene.bioinf.fbb.msu.ru/. Contact: favorov@sensi.org. Supplementary data are available at Bioinformatics online.
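
    A toy version of kernel correlation on two binned coverage tracks, loosely in the spirit of the method (smooth one track with a Gaussian kernel, then correlate, so that features need only be near each other rather than exactly aligned); it omits StereoGene's windowing, permutation statistics and partial correlation.

      import numpy as np

      def kernel_correlation(track_a, track_b, sigma_bins=100):
          """Kernel-smoothed Pearson correlation of two 1D coverage tracks
          binned along the genome."""
          half = int(4 * sigma_bins)
          xs = np.arange(-half, half + 1)
          kernel = np.exp(-0.5 * (xs / sigma_bins) ** 2)
          kernel /= kernel.sum()
          b_smooth = np.convolve(track_b, kernel, mode="same")
          a = track_a - track_a.mean()
          b = b_smooth - b_smooth.mean()
          return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))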

  6. Owls see in stereo much like humans do.

    PubMed

    van der Willigen, Robert F

    2011-06-10

    While 3D experiences through binocular disparity sensitivity have acquired special status in the understanding of human stereo vision, much remains to be learned about how binocularity is put to use in animals. The owl provides an exceptional model to study stereo vision as it displays one of the highest degrees of binocular specialization throughout the animal kingdom. In a series of six behavioral experiments, equivalent to hallmark human psychophysical studies, I compiled an extensive body of stereo performance data from two trained owls. Computer-generated, binocular random-dot patterns were used to ensure pure stereo performance measurements. In all cases, I found that owls perform much like humans do, viz.: (1) disparity alone can evoke figure-ground segmentation; (2) selective use of "relative" rather than "absolute" disparity; (3) hyperacute sensitivity; (4) disparity processing allows for the avoidance of monocular feature detection prior to object recognition; (5) large binocular disparities are not tolerated; (6) disparity guides the perceptual organization of 2D shape. The robustness and very nature of these binocular disparity-based perceptual phenomena bear out that owls, like humans, exploit the third dimension to facilitate early figure-ground segmentation of tangible objects.

  7. Determination of Cloud Base Height, Wind Velocity, and Short-Range Cloud Structure Using Multiple Sky Imagers Field Campaign Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Schwartz, Stephen E.; Yu, Dantong

    Clouds are a central focus of the U.S. Department of Energy (DOE)’s Atmospheric System Research (ASR) program and Atmospheric Radiation Measurement (ARM) Climate Research Facility, and more broadly are the subject of much investigation because of their important effects on atmospheric radiation and, through feedbacks, on climate sensitivity. Significant progress has been made by moving from a vertically pointing (“soda-straw”) to a three-dimensional (3D) view of clouds by investing in scanning cloud radars through the American Recovery and Reinvestment Act of 2009. Yet, because of the physical nature of radars, there are key gaps in ARM's cloud observational capabilities. For example, cloud radars often fail to detect small shallow cumulus and thin cirrus clouds that are nonetheless radiatively important. Furthermore, it takes five to twenty minutes for a cloud radar to complete a 3D volume scan, and clouds can evolve substantially during this period. Ground-based stereo-imaging is a promising technique to complement existing ARM cloud observation capabilities. It enables the estimation of cloud coverage, height, horizontal motion, morphology, and spatial arrangement over an extended area of up to 30 by 30 km at refresh rates greater than 1 Hz (Peng et al. 2015). With the fine spatial and temporal resolution of modern sky cameras, the stereo-imaging technique allows for the tracking of a small cumulus cloud or a thin cirrus cloud that cannot be detected by a cloud radar. With support from the DOE SunShot Initiative, the Principal Investigator (PI)’s team at Brookhaven National Laboratory (BNL) has developed some initial capability for cloud tracking using multiple distinctly located hemispheric cameras (Peng et al. 2015). To validate the ground-based cloud stereo-imaging technique, the cloud stereo-imaging field campaign was conducted at the ARM Facility’s Southern Great Plains (SGP) site in Oklahoma from July 15 to December 24. As shown in Figure 1, the cloud stereo-imaging system consisted of two inexpensive high-definition (HD) hemispheric cameras (each costing less than $1,500) and ARM’s Total Sky Imager (TSI). Together with other co-located ARM instrumentation, the campaign provides a promising opportunity to validate stereo-imaging-based cloud base height and, more importantly, to examine the feasibility of cloud thickness retrieval for low-view-angle clouds.

  8. Testbed for remote telepresence research

    NASA Astrophysics Data System (ADS)

    Adnan, Sarmad; Cheatham, John B., Jr.

    1992-11-01

    Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system comprised of a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed at the mechanical engineering department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head tracking camera system moves stereo cameras mounted on a three degree of freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence, and teleoperations for space.

  9. Effect of Display Technology on Perceived Scale of Space.

    PubMed

    Geuss, Michael N; Stefanucci, Jeanine K; Creem-Regehr, Sarah H; Thompson, William B; Mohler, Betty J

    2015-11-01

    Our goal was to evaluate the degree to which display technologies influence the perception of size in an image. Research suggests that factors such as whether an image is displayed stereoscopically, whether a user's viewpoint is tracked, and the field of view of a given display can affect users' perception of scale in the displayed image. Participants directly estimated the size of a gap by matching the distance between their hands to the gap width and judged their ability to pass unimpeded through the gap in one of five common implementations of three display technologies (two head-mounted displays [HMD] and a back-projection screen). Both measures of gap width were similar for the two HMD conditions and the back projection with stereo and tracking. For the displays without tracking, stereo and monocular conditions differed from each other, with monocular viewing showing underestimation of size. Display technologies that are capable of stereoscopic display and tracking of the user's viewpoint are beneficial as perceived size does not differ from real-world estimates. Evaluations of different display technologies are necessary as display conditions vary and the availability of different display technologies continues to grow. The findings are important to those using display technologies for research, commercial, and training purposes when it is important for the displayed image to be perceived at an intended scale. © 2015, Human Factors and Ergonomics Society.

  10. Stereo Imaging Miniature Endoscope

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; Manohara, Harish; White, Victor; Shcheglov, Kirill V.; Shahinian, Hrayr

    2011-01-01

    Stereo imaging requires two different perspectives of the same object, and traditionally a pair of side-by-side cameras would be used; that is not feasible for something as tiny as an endoscope less than 4 mm in diameter, which could be used for minimally invasive surgeries or geoexploration through tiny fissures or bores. The solution proposed here is to employ a single lens and a pair of conjugated multiple-bandpass filters (CMBFs) to separate the stereo images. When a CMBF is placed in front of each of the stereo channels, only wavelengths of the visible spectrum that fall within the passbands of that CMBF are transmitted at a time when illuminated. Because the passbands are conjugated, only one of the two channels will see a particular wavelength. These time-multiplexed images are then mixed and reconstructed for display as stereo images. The basic principle of stereo imaging involves an object that is illuminated at specific wavelengths, with a range of illumination wavelengths time-multiplexed. The light reflected from the object selectively passes through one of the two CMBFs, integrated with two pupils separated by a baseline distance, and is focused onto the imaging plane through an objective lens. The passband ranges of the CMBFs and the illumination wavelengths are synchronized such that each CMBF transmits only the alternate illumination wavelength bands, and the transmission bands of the two CMBFs are complementary: when one transmits, the other blocks. This can be clearly understood if the wavelength bands are divided broadly into red, green, and blue: the illumination wavelengths then contain two bands in red (R1, R2), two bands in green (G1, G2), and two bands in blue (B1, B2). Therefore, when the object is illuminated by R1, the reflected light enters through only the left CMBF, as the R1 band corresponds to the transmission window of the left CMBF at the left pupil; it is blocked by the right CMBF. The transmitted band is focused on the focal plane array (FPA).

  11. Focus and perspective adaptive digital surgical microscope: optomechanical design and experimental implementation

    NASA Astrophysics Data System (ADS)

    Claus, Daniel; Reichert, Carsten; Herkommer, Alois

    2017-05-01

    This paper relates to the improvement of conventional surgical stereo microscopy through digital recording devices and adaptive optics. The research aims to improve the surgeon's working conditions during an operation, so that free head movement is possible. The depth cues known from conventional stereo microscopy in interaction with the human eye's functionality, such as convergence, disparity, angular elevation, parallax, and accommodation, are implemented in a digital recording system via adaptive optomechanical components. Two laterally moving pupil apertures provide a digital implementation of the eye's vergence and head motion. The eye's natural accommodation is mimicked via a tunable lens. Additionally, another system has been built that tracks the surgeon's eye pupil through a digital stereoscopic display microscope to supply the information needed to steer the recording system. The optomechanical design and experimental results for both systems, the digital recording stereoscopic microscope and the pupil tracking system, are shown.

  12. Stereo Electro-optical Tracking System (SETS)

    NASA Astrophysics Data System (ADS)

    Koenig, E. W.

    1984-09-01

    The SETS is a remote, non-contacting, high-accuracy tracking system for the measurement of deflection of models in the National Transonic Facility at Langley Research Center. The system consists of four electronically scanned image-dissector trackers which locate the positions of light-emitting diodes embedded in the wing or body of aircraft models. Target location data are recorded on magnetic tape for later 3-D processing. Up to 63 targets per model may be tracked, at typical rates of 1280 targets per second and to a precision of 0.02 mm at the target, under the cold (-193 °C) environment of the NTF tunnel.

  13. Study of a stereo electro-optical tracker system for the measurement of model deformations at the national transonic facility

    NASA Technical Reports Server (NTRS)

    Hertel, R. J.

    1979-01-01

    An electro-optical method to measure the aeroelastic deformations of wind tunnel models is examined. The multitarget tracking performance of one of the two electronic cameras comprising the stereo pair is modeled and measured. The properties of the targets at the model, the camera optics, target illumination, number of targets, acquisition time, target velocities, and tracker performance are considered. The electronic camera system is shown to be capable of locating, measuring, and following the positions of 5 to 50 targets attached to the model at measuring rates up to 5000 targets per second.

  14. Bayes filter modification for drivability map estimation with observations from stereo vision

    NASA Astrophysics Data System (ADS)

    Panchenko, Aleksei; Prun, Viktor; Turchenkov, Dmitri

    2017-02-01

    Reconstruction of a drivability map for a moving vehicle is a well-known research topic in applied robotics. Here, creating such a map for an autonomous truck on a generally planar surface containing separate obstacles is considered. The source of measurements for the truck is a calibrated pair of cameras. The stereo system detects and reconstructs several types of objects, such as road borders, other vehicles, pedestrians, and generic tall or highly saturated objects (e.g., road cones). To create a robust mapping module we use a modification of Bayes filtering that introduces some novel techniques for the occupancy map update step. Specifically, our modified version remains applicable in the presence of false-positive measurement errors, stereo shading and obstacle occlusion. We implemented the technique and achieved real-time computation at 15 FPS on an industrial shake-proof PC. Our real-world experiments show the positive effect of the filtering step.
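
    For context, a baseline log-odds occupancy grid update is sketched below; the authors' specific modification for false positives, stereo shading and occlusion is not reproduced, and the increment values are illustrative.

      import numpy as np

      class OccupancyGrid:
          """Minimal log-odds occupancy grid. Bounded increments keep the
          map responsive and limit the damage from spurious detections."""

          def __init__(self, shape, l_occ=0.85, l_free=-0.4, l_clamp=5.0):
              self.logodds = np.zeros(shape)
              self.l_occ, self.l_free, self.l_clamp = l_occ, l_free, l_clamp

          def update(self, occupied_cells, free_cells):
              for i, j in occupied_cells:          # cells with obstacle returns
                  self.logodds[i, j] += self.l_occ
              for i, j in free_cells:              # cells traversed by rays
                  self.logodds[i, j] += self.l_free
              np.clip(self.logodds, -self.l_clamp, self.l_clamp,
                      out=self.logodds)

          def probability(self):
              return 1.0 / (1.0 + np.exp(-self.logodds))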

  15. Looking Back at Spirit Trail to the Summit Stereo

    NASA Image and Video Library

    2005-10-21

    Before moving on to explore more of Mars, NASA Mars Exploration Rover Spirit looked back at the long and winding trail of twin wheel tracks the rover created to get to the top of Husband Hill. 3D glasses are necessary to view this image.

  16. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    PubMed

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts of the development. In the mechanical system, the 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first to realize 3D motion of the robot's end-effector in the X-Y-Z coordinate system. The inverse and forward kinematics of the parallel mechanism robot are investigated using the Denavit-Hartenberg notation (D-H notation) coordinate system, and the pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators, realizing 3D path tracking control of the end-effector. Three optical linear scales measure the positions of the three pneumatic actuators, from which the 3D position of the end-effector is calculated by means of the kinematics. However, the calculated 3D position cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual position and the calculated 3D position of the end-effector. To improve this situation, sensor collaboration is developed in this paper: a stereo vision system combining two CCD cameras collaborates with the three position sensors of the pneumatic actuators to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position. Furthermore, to verify the feasibility of the proposed parallel mechanism robot driven by three vertical pneumatic servo actuators, a full-scale test rig is set up. Simulations and experiments for different complex 3D motion profiles of the robot end-effector are successfully achieved, and the desired, the actual and the calculated 3D positions of the end-effector are compared in the complex 3D motion control.
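
    The kinematic core named above, the Denavit-Hartenberg link transform, can be written compactly; chaining one such matrix per joint and reading the last column of the product gives the end-effector position (forward kinematics), and the same model underlies the inverse kinematics.

      import numpy as np

      def dh_transform(theta, d, a, alpha):
          """Homogeneous transform between successive links for standard
          Denavit-Hartenberg parameters (theta, d, a, alpha)."""
          ct, st = np.cos(theta), np.sin(theta)
          ca, sa = np.cos(alpha), np.sin(alpha)
          return np.array([[ct, -st * ca,  st * sa, a * ct],
                           [st,  ct * ca, -ct * sa, a * st],
                           [0.0,      sa,       ca,      d],
                           [0.0,     0.0,      0.0,    1.0]])

      # T = dh_transform(q1, d1, a1, al1) @ dh_transform(q2, d2, a2, al2) @ ...
      # end_effector_xyz = T[:3, 3]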

  18. From Antarctica to space: Use of telepresence and virtual reality in control of remote vehicles

    NASA Technical Reports Server (NTRS)

    Stoker, Carol; Hine, Butler P., III; Sims, Michael; Rasmussen, Daryl; Hontalas, Phil; Fong, Terrence W.; Steele, Jay; Barch, Don; Andersen, Dale; Miles, Eric

    1994-01-01

    In the Fall of 1993, NASA Ames deployed a modified Phantom S2 Remotely-Operated underwater Vehicle (ROV) into an ice-covered sea environment near McMurdo Science Station, Antarctica. This deployment was part of the Antarctic Space Analog Program, a joint program between NASA and the National Science Foundation to demonstrate technologies relevant to space exploration in a realistic field setting in the Antarctic. The goal of the mission was to operationally test the use of telepresence and virtual reality technology in the operator interface to a remote vehicle while performing a benthic ecology study. The vehicle was operated both locally, from above a dive hole in the ice through which it was launched, and remotely over a satellite communications link from a control room at NASA's Ames Research Center. Local control of the vehicle was accomplished using the standard Phantom control box containing joysticks and switches, with the operator viewing stereo video camera images on a stereo display monitor. Remote control of the vehicle over the satellite link was accomplished using the Virtual Environment Vehicle Interface (VEVI) control software developed at NASA Ames. The remote operator interface included either a stereo display monitor similar to that used locally or a stereo head-mounted, head-tracked display. The compressed video signal from the vehicle was transmitted to NASA Ames over a 768 Kbps satellite channel. Another channel was used to provide a bi-directional Internet link to the vehicle control computer through which the command and telemetry signals traveled, along with a bi-directional telephone service. In addition to the live stereo video from the satellite link, the operator could view a computer-generated graphic representation of the underwater terrain, modeled from the vehicle's sensors. The virtual environment contained an animated graphic model of the vehicle which reflected the state of the actual vehicle, along with ancillary information such as the vehicle track, science markers, and locations of video snapshots. The actual vehicle was driven either from within the virtual environment or through a telepresence interface. All vehicle functions could be controlled remotely over the satellite link.

  19. 3D Stereo Data Visualization and Representation

    DTIC Science & Technology

    1994-09-01

    "will see a stereo image" (29:219). See (28) and (32) for more detail. Lenticular Display - the idea is stimulated by the limitation of parallax barriers, replacing the slits with cylindrical lenses. According to Bruce Lane, "a particularly valuable feature of lenticular is the multiple viewing ..." To limit aberrations, keep the object close to the optical axis and place the object close to the spherical mirror's focal length. Astigmatism - ...

  20. Detection and 3d Modelling of Vehicles from Terrestrial Stereo Image Pairs

    NASA Astrophysics Data System (ADS)

    Coenen, M.; Rottensteiner, F.; Heipke, C.

    2017-05-01

    The detection and pose estimation of vehicles plays an important role for automated and autonomous moving objects, e.g., in autonomous driving environments. We tackle that problem on the basis of street-level stereo images obtained from a moving vehicle. Processing every stereo pair individually, our approach is divided into two subsequent steps: a vehicle detection step and a modelling step. For detection, we make use of the 3D stereo information and incorporate geometric assumptions about vehicle-inherent properties into a generic 3D object detection applied first. By combining our generic detection approach with a state-of-the-art vehicle detector, we achieve satisfying detection results, with values for completeness and correctness above 86%. By fitting an object-specific vehicle model to the vehicle detections, we are able to reconstruct the vehicles in 3D and to derive pose estimates as well as shape parameters for each vehicle. To deal with the intra-class variability of vehicles, we make use of a deformable 3D active shape model learned from 3D CAD vehicle data in our model fitting approach. While we achieve encouraging values of up to 67.2% for correct position estimates, we face larger problems concerning orientation estimation. The evaluation is done using the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012).

  1. Connectionist model-based stereo vision for telerobotics

    NASA Technical Reports Server (NTRS)

    Hoff, William; Mathis, Donald

    1989-01-01

    Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
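
    Assuming the planar patch denotes the plane z = m_x*x + m_y*y + h, the parameter-network idea can be approximated by Hough-style voting over a discretized (h, m_x, m_y) grid, with each unit's vote count standing in for its activity level; the gradient-descent dynamics of the actual network are not reproduced here.

      import numpy as np

      def plane_parameter_votes(points_3d, h_bins, mx_bins, my_bins, tol=0.01):
          """Accumulate, for every (h, m_x, m_y) cell, the number of 3D
          points consistent with the plane z = m_x*x + m_y*y + h."""
          votes = np.zeros((len(h_bins), len(mx_bins), len(my_bins)))
          for x, y, z in points_3d:
              for i, h in enumerate(h_bins):
                  for j, mx in enumerate(mx_bins):
                      for k, my in enumerate(my_bins):
                          if abs(mx * x + my * y + h - z) < tol:
                              votes[i, j, k] += 1
          return votes               # the best-supported plane is the argmax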

  2. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments.

    PubMed

    Ramon Soria, Pablo; Arrue, Begoña C; Ollero, Anibal

    2017-01-07

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they can access places that are difficult to reach or even unreachable for human beings. This work focuses on the grasping of known objects based on feature models. The system runs on an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage; the model is then used online for detection of the targeted object and estimation of its position. This feature-based model proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was built using a rotary-wing UAV and a small manipulator as a final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to the payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.
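
    The online detection stage could be sketched with ORB features and a ratio test as below; the file names, thresholds, and the choice of ORB itself are assumptions for illustration, not the paper's implementation.

      import cv2

      orb = cv2.ORB_create(nfeatures=1000)
      model = cv2.imread("object_model.png", cv2.IMREAD_GRAYSCALE)
      left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)

      kp_m, des_m = orb.detectAndCompute(model, None)
      kp_l, des_l = orb.detectAndCompute(left, None)

      matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
      pairs = matcher.knnMatch(des_m, des_l, k=2)
      good = [p[0] for p in pairs
              if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

      # With a rectified stereo pair, the disparity at each matched keypoint
      # gives its depth (Z = f*B/d), localising the detected object in 3D.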

  3. Real Time Eye Tracking and Hand Tracking Using Regular Video Cameras for Human Computer Interaction

    DTIC Science & Technology

    2011-01-01

    ... understand us. More specifically, the computer should be able to infer what we wish to see, do, and interact with through our movements, gestures, and ... in depth freedom. Our system differs from the majority of other systems in that we do not use infrared, stereo cameras, or specially constructed ...

  4. Rapid matching of stereo vision based on fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei

    2016-09-01

    As the core of stereo vision, stereo matching still presents many unsolved problems. For smooth surfaces from which feature points are not easy to extract, this paper adds a projector to the stereo vision measurement system and applies fringe projection techniques: corresponding points in the left and right camera images share the same extracted phase, which enables rapid stereo matching. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method not only broadens the application fields of optical 3D measurement technology and enriches knowledge in the field, but also offers a path toward commercialized measurement systems for practical projects, giving it notable scientific and economic value.
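    The matching rule, equal phase at corresponding points, can be sketched in a few lines. The code assumes rectified images and a standard four-step phase-shifting recovery of the phase; variable names are illustrative, and the unwrapped phase is assumed monotonic along each scanline, which holds for a continuous phase from a single projector sweep.

    ```python
    import numpy as np

    def phase_map(I1, I2, I3, I4):
        """Wrapped phase from four fringe images shifted by pi/2 each."""
        return np.arctan2(I4 - I2, I1 - I3)

    def match_row(phase_left_row, phase_right_row):
        """For each left pixel, find the right pixel on the same rectified
        scanline with the same unwrapped phase (equal phase => match)."""
        idx = np.searchsorted(phase_right_row, phase_left_row)
        return np.clip(idx, 0, len(phase_right_row) - 1)
    ```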

  5. Research on three-dimensional reconstruction method based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    Binocular stereo vision is an important form of computer vision and a challenging research topic, with broad application prospects in many computer vision fields such as aerial mapping, visual navigation, motion analysis, and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction, and stereo matching. In the camera calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard calibration method. For image feature extraction and stereo matching, the SURF operator (a local feature method) and the SGBM (semi-global block matching) algorithm are used respectively, and their performance is compared. After feature point matching is completed, the correspondence between matched image points and 3D object points can be established using the calibrated camera parameters, yielding the 3D information.
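    The calibration-plus-matching pipeline described above maps directly onto OpenCV, as in the sketch below; the board size and file names are placeholders. Zhang's method is implemented by cv2.calibrateCamera, and SGBM by cv2.StereoSGBM_create.

    ```python
    import glob
    import cv2
    import numpy as np

    # --- Intrinsic calibration (Zhang Zhengyou's checkerboard method) ---
    board = (9, 6)                                   # inner corners per row/col
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

    obj_pts, img_pts = [], []
    for fname in sorted(glob.glob("calib_*.png")):   # checkerboard views
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)

    # --- Dense stereo matching with SGBM (semi-global block matching) ---
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed point

    # With the calibrated parameters, disparity d maps to depth Z = f * B / d
    # (f: focal length in pixels, B: stereo baseline), giving 3D object points.
    ```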

  6. Three-dimensional information extraction from GaoFen-1 satellite images for landslide monitoring

    NASA Astrophysics Data System (ADS)

    Wang, Shixin; Yang, Baolin; Zhou, Yi; Wang, Futao; Zhang, Rui; Zhao, Qing

    2018-05-01

    To more efficiently use GaoFen-1 (GF-1) satellite images for landslide emergency monitoring, a Digital Surface Model (DSM) can be generated from GF-1 across-track stereo image pairs to build a terrain dataset. This study proposes a landslide 3D information extraction method based on the terrain changes of slope objects. The slope objects are merged groups of segmented image objects that have similar aspects, and the terrain changes are calculated from the post-disaster Digital Elevation Model (DEM) from GF-1 and the pre-disaster DEM from GDEM V2. A high mountain landslide that occurred in Wenchuan County, Sichuan Province is used to conduct a 3D information extraction test. The extracted total area of the landslide is 22.58 ha; the displaced earth volume is 652,100 m3; and the average sliding direction is 263.83°. Their accuracies are 0.89, 0.87 and 0.95, respectively. Thus, the proposed method expands the application of GF-1 satellite images to the field of landslide emergency monitoring.
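    The terrain-change step reduces to DEM differencing over the slope-object mask; the sketch below shows the volume and mean-aspect computations under that reading. Arrays, cell size, and the aspect convention are placeholders, since conventions differ between GIS packages.

    ```python
    import numpy as np

    def landslide_volumes(dem_pre, dem_post, mask, cell_size_m):
        """Integrate masked elevation change (m) into volumes (m^3)."""
        dh = (dem_post - dem_pre) * mask
        deposit = dh[dh > 0].sum() * cell_size_m ** 2     # accumulation zone
        depletion = -dh[dh < 0].sum() * cell_size_m ** 2  # source/scarp zone
        return depletion, deposit

    def mean_aspect_deg(dem, mask, cell_size_m):
        """Average downslope (sliding) direction over the masked area."""
        gy, gx = np.gradient(dem.astype(float), cell_size_m)
        aspect = np.degrees(np.arctan2(-gx, gy)) % 360    # convention varies
        return aspect[mask.astype(bool)].mean()
    ```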

  7. Development and Long-Term Verification of Stereo Vision Sensor System for Controlling Safety at Railroad Crossing

    NASA Astrophysics Data System (ADS)

    Hosotani, Daisuke; Yoda, Ikushi; Hishiyama, Yoshiyuki; Sakaue, Katsuhiko

    Many people are involved in accidents every year at railroad crossings, but there is no suitable sensor for detecting pedestrians. We are therefore developing a ubiquitous stereo-vision-based system for ensuring safety at railroad crossings. In this system, stereo cameras are installed at the corners and pointed toward the center of the railroad crossing to monitor the passage of people. The system determines automatically and in real time whether anyone or anything is inside the railroad crossing, and whether anyone remains in the crossing. The system can be configured to automatically switch over to a surveillance monitor or automatically connect to an emergency brake system in the event of trouble. We have developed an original stereo vision device and installed a remotely controlled experimental system running the human detection algorithm at a commercial railroad crossing. We then stored and analyzed image and tracking data over two years to support standardization of the system requirement specification.

  8. Active Guidance of a Handheld Micromanipulator using Visual Servoing.

    PubMed

    Becker, Brian C; Voros, Sandrine; Maclachlan, Robert A; Hager, Gregory D; Riviere, Cameron N

    2009-05-12

    In microsurgery, a surgeon often deals with anatomical structures of sizes that are close to the limit of the human hand accuracy. Robotic assistants can help to push beyond the current state of practice by integrating imaging and robot-assisted tools. This paper demonstrates control of a handheld tremor reduction micromanipulator with visual servo techniques, aiding the operator by providing three behaviors: snap-to, motion-scaling, and standoff-regulation. A stereo camera setup viewing the workspace under high magnification tracks the tip of the micromanipulator and the desired target object being manipulated. Individual behaviors activate in task-specific situations when the micromanipulator tip is in the vicinity of the target. We show that the snap-to behavior can reach and maintain a position at a target with an accuracy of 17.5 ± 0.4 μm Root Mean Squared Error (RMSE) distance between the tip and target. Scaling the operator's motions and preventing unwanted contact with non-target objects also provides a larger margin of safety.

  9. Small or far away? Size and distance perception in the praying mantis

    PubMed Central

    Bissianna, Geoffrey

    2016-01-01

    Stereo or ‘3D’ vision is an important but costly process seen in several evolutionarily distinct lineages including primates, birds and insects. Many selective advantages could have led to the evolution of stereo vision, including range finding, camouflage breaking and estimation of object size. In this paper, we investigate the possibility that stereo vision enables praying mantises to estimate the size of prey by using a combination of disparity cues and angular size cues. We used a recently developed insect 3D cinema paradigm to present mantises with virtual prey having differing disparity and angular size cues. We predicted that if they were able to use these cues to gauge the absolute size of objects, we should see evidence for size constancy where they would strike preferentially at prey of a particular physical size, across a range of simulated distances. We found that mantises struck most often when disparity cues implied a prey distance of 2.5 cm; increasing the implied distance caused a significant reduction in the number of strikes. We, however, found no evidence for size constancy. There was a significant interaction effect of the simulated distance and angular size on the number of strikes made by the mantis but this was not in the direction predicted by size constancy. This indicates that mantises do not use their stereo vision to estimate object size. We conclude that other selective advantages, not size constancy, have driven the evolution of stereo vision in the praying mantis. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269605
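    The cue manipulation rests on simple viewing geometry: a target of physical size S at distance D subtends an angular size of 2*atan(S/2D), while its binocular disparity depends on the inter-ocular separation I. The sketch below computes both; the 0.7 cm separation is an assumed illustrative value, not taken from the paper.

    ```python
    import numpy as np

    def angular_size_deg(S, D):
        return np.degrees(2 * np.arctan(S / (2 * D)))

    def disparity_deg(I, D):
        # Vergence-angle difference between the target and infinity.
        return np.degrees(2 * np.arctan(I / (2 * D)))

    # A 1 cm prey item simulated at 2.5 cm vs. 10 cm:
    for D_cm in (2.5, 10.0):
        print(D_cm, angular_size_deg(1.0, D_cm), disparity_deg(0.7, D_cm))
    # Size constancy would predict strikes tuned to the implied physical size
    # S ~= theta * D across distances; the study found no such pattern.
    ```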

  10. ROS-based ground stereo vision detection: implementation and experiments.

    PubMed

    Hu, Tianjiang; Zhao, Boxin; Tang, Dengqing; Zhang, Daibing; Kong, Weiwei; Shen, Lincheng

    This article concentrates on an open-source implementation of flying object detection in cluttered scenes. It is of significance for ground stereo-aided autonomous landing of unmanned aerial vehicles. The ground stereo vision guidance system is presented with details on system architecture and workflow. The Chan-Vese detection algorithm is further considered and implemented in the Robot Operating System (ROS) environment. A data-driven interactive scheme is developed to collect datasets for parameter tuning and performance evaluation. The flying vehicle outdoor experiments capture the stereo sequential image dataset and record simultaneous data from the pan-and-tilt unit, onboard sensors and differential GPS. Experimental results using the collected dataset validate the effectiveness of the published ROS-based detection algorithm.
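    As a rough sketch of what a single-frame Chan-Vese detection step might look like, the fragment below uses scikit-image's morphological Chan-Vese as a stand-in; the published ROS node, its parameters, and its pre- and post-processing are not reproduced here.

    ```python
    import numpy as np
    from skimage.segmentation import morphological_chan_vese

    def detect_flying_object(gray_frame):
        """Segment a small target from the background and return its centroid."""
        img = gray_frame.astype(float) / 255.0
        level_set = morphological_chan_vese(img, 50, init_level_set="checkerboard")
        # The target is assumed to be the minority phase; invert if needed.
        fg = level_set if level_set.mean() < 0.5 else 1 - level_set
        ys, xs = np.nonzero(fg)
        if xs.size == 0:
            return None
        return float(xs.mean()), float(ys.mean())
    ```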

  11. A combined vision-inertial fusion approach for 6-DoF object pose estimation

    NASA Astrophysics Data System (ADS)

    Li, Juan; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.

    2015-02-01

    The estimation of the 3D position and orientation of moving objects (`pose' estimation) is a critical process for many applications in robotics, computer vision or mobile services. Although major research efforts have been carried out to design accurate, fast and robust indoor pose estimation systems, it remains as an open challenge to provide a low-cost, easy to deploy and reliable solution. Addressing this issue, this paper describes a hybrid approach for 6 degrees of freedom (6-DoF) pose estimation that fuses acceleration data and stereo vision to overcome the respective weaknesses of single technology approaches. The system relies on COTS technologies (standard webcams, accelerometers) and printable colored markers. It uses a set of infrastructure cameras, located to have the object to be tracked visible most of the operation time; the target object has to include an embedded accelerometer and be tagged with a fiducial marker. This simple marker has been designed for easy detection and segmentation and it may be adapted to different service scenarios (in shape and colors). Experimental results show that the proposed system provides high accuracy, while satisfactorily dealing with the real-time constraints.
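    One common way to realize this kind of fusion is a small Kalman filter that integrates the accelerometer as the process input and applies marker-based camera fixes as measurements; the sketch below shows one axis under that assumption. It illustrates the fusion principle, not the paper's exact estimator, and all noise values are tuning placeholders.

    ```python
    import numpy as np

    dt = 1.0 / 30.0                         # assumed camera frame period
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
    B = np.array([[0.5 * dt ** 2], [dt]])   # acceleration input model
    Hm = np.array([[1.0, 0.0]])             # cameras measure position only
    Q = 1e-3 * np.eye(2)                    # process noise (placeholder)
    R = np.array([[1e-2]])                  # vision noise (placeholder)

    x, P = np.zeros((2, 1)), np.eye(2)

    def fuse(accel, vision_pos=None):
        """Predict from the accelerometer; correct when a marker fix arrives."""
        global x, P
        x = F @ x + B * accel
        P = F @ P @ F.T + Q
        if vision_pos is not None:          # marker detected this frame
            y = np.array([[vision_pos]]) - Hm @ x
            K = P @ Hm.T @ np.linalg.inv(Hm @ P @ Hm.T + R)
            x = x + K @ y
            P = (np.eye(2) - K @ Hm) @ P
        return float(x[0, 0])
    ```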

  12. The Longitudinal Properties of a Solar Energetic Particle Event Investigated Using Modern Solar Imaging

    NASA Technical Reports Server (NTRS)

    Rouillard, A. P.; Sheeley, N.R. Jr.; Tylka, A.; Vourlidas, A.; Ng, C. K.; Rakowski, C.; Cohen, C. M. S.; Mewaldt, R. A.; Mason, G. M.; Reames, D.

    2012-01-01

    We use combined high-cadence, high-resolution, and multi-point imaging by the Solar-Terrestrial Relations Observatory (STEREO) and the Solar and Heliospheric Observatory to investigate the hour-long eruption of a fast and wide coronal mass ejection (CME) on 2011 March 21 when the twin STEREO spacecraft were located beyond the solar limbs. We analyze the relation between the eruption of the CME, the evolution of an Extreme Ultraviolet (EUV) wave, and the onset of a solar energetic particle (SEP) event measured in situ by the STEREO and near-Earth orbiting spacecraft. Combined ultraviolet and white-light images of the lower corona reveal that in an initial CME lateral "expansion phase," the EUV disturbance tracks the laterally expanding flanks of the CME, both moving parallel to the solar surface with speeds of approx 450 km/s. When the lateral expansion of the ejecta ceases, the EUV disturbance carries on propagating parallel to the solar surface but devolves rapidly into a less coherent structure. Multi-point tracking of the CME leading edge and the effects of the launched compression waves (e.g., pushed streamers) give anti-sunward speeds that initially exceed 900 km/s at all measured position angles. We combine our analysis of ultraviolet and white-light images with a comprehensive study of the velocity dispersion of energetic particles measured in situ by particle detectors located at STEREO-A (STA) and first Lagrange point (L1), to demonstrate that the delayed solar particle release times at STA and L1 are consistent with the time required (30-40 minutes) for the CME to perturb the corona over a wide range of longitudes. This study finds an association between the longitudinal extent of the perturbed corona (in EUV and white light) and the longitudinal extent of the SEP event in the heliosphere.

  13. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments

    PubMed Central

    Ramon Soria, Pablo; Arrue, Begoña C.; Ollero, Anibal

    2017-01-01

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors. PMID:28067851

  14. Mission Specification and Control for Unmanned Aerial and Ground Vehicles for Indoor Target Discovery and Tracking

    DTIC Science & Technology

    2010-01-01

    open garage leading to the building interior. The UAV is positioned north of a potential ingress to the building. As the mission begins, the UAV...camera, the difficulty in detecting and navigating around obstacles using this non-stereo camera necessitated a precomputed map of all obstacles and

  15. Intraoperative on-the-fly organ-mosaicking for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Bodenstedt, S.; Reichard, D.; Suwelack, S.; Wagner, M.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.; Speidel, S.

    2015-03-01

    The goal of computer-assisted surgery is to provide the surgeon with guidance during an intervention using augmented reality (AR). To display preoperative data correctly, soft tissue deformations that occur during surgery have to be taken into consideration. Optical laparoscopic sensors, such as stereo endoscopes, can produce a 3D reconstruction of single stereo frames for registration. Due to the small field of view and the homogeneous structure of tissue, reconstructing just a single frame in general will not provide enough detail to register and update preoperative data due to ambiguities. In this paper, we propose and evaluate a system that combines multiple smaller reconstructions from different viewpoints to segment and reconstruct a large model of an organ. By using GPU-based methods we achieve near real-time performance. We evaluated the system on an ex-vivo porcine liver (4.21 mm ± 0.63) and on two synthetic silicone livers (3.64 mm ± 0.31 and 1.89 mm ± 0.19) using three different methods for estimating the camera pose (no tracking, optical tracking and a combination).

  16. Tracking Filament Evolution in the Low Solar Corona Using Remote Sensing and In Situ Observations

    NASA Astrophysics Data System (ADS)

    Kocher, Manan; Landi, Enrico; Lepri, Susan. T.

    2018-06-01

    In the present work, we analyze a filament eruption associated with an interplanetary coronal mass ejection that arrived at L1 on 2011 August 5. In multiwavelength Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) images, three plasma parcels within the filament were tracked at high cadence along the solar corona. A novel absorption diagnostic technique was applied to the filament material traveling along the three chosen trajectories to compute the column density and temperature evolution in time. Kinematics of the filamentary material were estimated using STEREO/Extreme Ultraviolet Imager and STEREO/COR1 observations. The Michigan Ionization Code used inputs of these density, temperature, and speed profiles for the computation of ionization profiles of the filament plasma. Based on these measurements, we conclude that the core plasma was in near ionization equilibrium, and the ionization states were still evolving at the altitudes where they were visible in absorption in AIA images. Additionally, we report that the filament plasma was heterogeneous, and the filamentary material was continuously heated as it expanded in the low solar corona.

  17. Validation of Harris Detector and Eigen Features Detector

    NASA Astrophysics Data System (ADS)

    Kok, K. Y.; Rajendran, P.

    2018-05-01

    The Harris detector is one of the most common feature detectors for applications such as object recognition, stereo matching and target tracking. In this paper, a similar Harris detector algorithm is written in MATLAB and its performance is compared with the MATLAB built-in Harris detector for validation. This is to ensure that the rewritten version of the Harris detector can be used for Unmanned Aerial Vehicle (UAV) application research purposes and can be further improved. Another corner detector close to the Harris detector, the Eigen features detector, is rewritten and compared as well using the same procedures and for the same purpose. The simulation results show that the rewritten versions of both the Harris and Eigen features detectors match the MATLAB built-in detectors, with no more than 0.4% coordinate deviation, less than 4% and 5% response deviation respectively, and at most 3% computational cost error.
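    The corner response being validated is compact enough to restate. The sketch below gives the Harris and minimum-eigenvalue ("Eigen features") scores in Python/NumPy for illustration; the paper's implementation is in MATLAB, and sigma and k here are conventional values, not the paper's.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def structure_tensor(gray, sigma=1.0):
        Ix = sobel(gray.astype(float), axis=1)
        Iy = sobel(gray.astype(float), axis=0)
        # Smoothed second-moment matrix entries at every pixel.
        return (gaussian_filter(Ix * Ix, sigma),
                gaussian_filter(Iy * Iy, sigma),
                gaussian_filter(Ix * Iy, sigma))

    def harris_response(gray, sigma=1.0, k=0.04):
        Sxx, Syy, Sxy = structure_tensor(gray, sigma)
        return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

    def min_eig_response(gray, sigma=1.0):
        # The "Eigen features" detector scores each pixel by the smaller
        # eigenvalue of the same structure tensor.
        Sxx, Syy, Sxy = structure_tensor(gray, sigma)
        return 0.5 * (Sxx + Syy - np.sqrt((Sxx - Syy) ** 2 + 4 * Sxy ** 2))
    ```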

  18. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task. PMID:28629139
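    The removal step itself can be sketched under the usual scattering image model I = J*t + B (object radiance J attenuated by transmission t, plus backscatter B). The fragment below is only an illustration of that step: here B is assumed to come from a calibration view of the medium alone, whereas the paper derives the non-uniform backscatter from its physical model of the active source.

    ```python
    import numpy as np

    def descatter(I, B, t_min=0.1):
        """Subtract backscatter B and compensate attenuation before stereo."""
        t = np.clip(1.0 - B / (B.max() + 1e-9), t_min, 1.0)  # crude proxy for t
        J = (I.astype(float) - B) / t
        return np.clip(J, 0, 255).astype(np.uint8)
    ```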

  19. Classification of road sign type using mobile stereo vision

    NASA Astrophysics Data System (ADS)

    McLoughlin, Simon D.; Deegan, Catherine; Fitzgerald, Conor; Markham, Charles

    2005-06-01

    This paper presents a portable mobile stereo vision system designed for the assessment of road signage and delineation (lines and reflective pavement markers or "cat's eyes"). This novel system allows both geometric and photometric measurements to be made on objects in a scene. Global Positioning System technology provides important location data for any measurements made. Using the system it has been shown that road signs can be classified by the nature of their reflectivity. This is achieved by examining the changes in reflected light intensity with changes in range (facilitated by stereo vision). Signs assessed include those made from retro-reflective materials, those made from diffuse reflective materials, and those made from diffuse reflective materials with local illumination. Field-testing results demonstrate the system's ability to classify objects in the scene based on their reflective properties. The paper includes a discussion of a physical model that supports the experimental data.

  20. Stereo matching image processing by synthesized color and the characteristic area by the synthesized color

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Mutoh, Eiichiro; Kumagai, Hideo

    2014-09-01

    We have developed a stereo matching image processing method based on synthesized color and corresponding areas of the same synthesized color, for ranging objects and image recognition. Typical images from a pair of stereo imagers may disagree with each other due to size changes, displaced positions, appearance changes, and deformation of characteristic areas. We construct the synthesized color and the corresponding color areas of the same synthesized color to make stereo matching distinct, in three steps. The first step makes a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal frequency distribution of densities, in order to find the threshold level for binarization; we used the Daubechies wavelet transform for the differentiation in this study. The second step derives the synthesized color by averaging color brightness between binary edge points, alternately along the horizontal and vertical directions; the color-averaging procedure is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step extracts areas of the same synthesized color by collecting pixels of the same synthesized color and grouping them by 4-directional connectivity. The matching areas for stereo matching are determined using the synthesized color areas, and the matching point is the center of gravity of each synthesized color area, so the parallax between a pair of images is derived easily from the centers of gravity. A stereo matching experiment was performed on a toy soccer ball as the object. This experiment showed that stereo matching by the synthesized color technique is simple and effective.
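    The final matching step reduces to labeling connected regions of identical synthesized color and pairing their centers of gravity, which the sketch below illustrates. Region pairing here naively takes the first region in each image, and the synthesized-color construction itself is not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import label, center_of_mass

    def region_centroids(synth_img, color_value):
        """Centers of gravity of 4-connected areas of one synthesized color."""
        mask = synth_img == color_value
        labels, n = label(mask)               # 4-connectivity by default in 2D
        return center_of_mass(mask, labels, range(1, n + 1))

    def parallax(left_synth, right_synth, color_value):
        cl = region_centroids(left_synth, color_value)
        cr = region_centroids(right_synth, color_value)
        if not cl or not cr:
            return None
        (yl, xl), (yr, xr) = cl[0], cr[0]     # naive pairing for illustration
        return xl - xr                        # horizontal disparity
    ```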

  1. Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming

    NASA Astrophysics Data System (ADS)

    Lee, Hyunki; Kim, Min Young; Moon, Jeon Il

    2017-12-01

    Phase measuring profilometry and moiré methodology have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation called the correspondence, or 2π-ambiguity, problem. Although a sensing method combining well-known stereo vision with the phase measuring profilometry (PMP) technique has been developed to overcome this problem, it still requires definite improvement in sensing speed and measurement accuracy. We propose a dynamic programming-based stereo PMP method to acquire more reliable depth information in a relatively short time. The proposed method efficiently fuses information from two stereo sensors in terms of phase and intensity simultaneously, based on a newly defined cost function for dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters related to the measurement performance and to verify the method's efficiency, accuracy, and sensing speed, a series of experimental tests was performed with various objects and sensor configurations.
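    The core idea, a dynamic-programming pass over each scanline with a cost that mixes phase and intensity differences, can be sketched as below; the weights and smoothness penalty are illustrative, not the paper's cost function.

    ```python
    import numpy as np

    def dp_scanline(phase_l, phase_r, int_l, int_r, max_disp=32,
                    w_phase=1.0, w_int=0.2, smooth=1.0):
        n = len(phase_l)
        cost = np.full((n, max_disp), np.inf)
        for d in range(max_disp):              # fused data cost per disparity
            v = np.arange(d, n)
            cost[v, d] = (w_phase * np.abs(phase_l[v] - phase_r[v - d]) +
                          w_int * np.abs(int_l[v] - int_r[v - d]))
        D = cost.copy()                        # DP accumulation with smoothness
        back = np.zeros((n, max_disp), dtype=int)
        for x in range(1, n):
            for d in range(max_disp):
                prev = D[x - 1] + smooth * np.abs(np.arange(max_disp) - d)
                back[x, d] = int(np.argmin(prev))
                D[x, d] = cost[x, d] + prev[back[x, d]]
        disp = np.zeros(n, dtype=int)          # backtrack the optimal path
        disp[-1] = int(np.argmin(D[-1]))
        for x in range(n - 2, -1, -1):
            disp[x] = back[x + 1, disp[x + 1]]
        return disp
    ```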

  2. Synergistic surface current mapping by spaceborne stereo imaging and coastal HF radar

    NASA Astrophysics Data System (ADS)

    Matthews, John Philip; Yoshikawa, Yutaka

    2012-09-01

    Well validated optical and radar methods of surface current measurement at high spatial resolution (nominally <100 m) from space can greatly advance our ability to monitor earth's oceans, coastal zones, lakes and rivers. With interest growing in optical along-track stereo techniques for surface current and wave motion determinations, questions of how to interpret such data and how to relate them to measurements made by better validated techniques arise. Here we make the first systematic appraisal of surface currents derived from along-track stereo Sun glitter (ATSSG) imagery through comparisons with simultaneous synoptic flows observed by coastal HF radars working at frequencies of 13.9 and 24.5 MHz, which return averaged currents within surface layers of roughly 1 m and 2 m depth respectively. At our Tsushima Strait (Japan) test site, we found that these two techniques provided largely compatible surface current patterns, with the main difference apparent in current strength. Within the northwest (southern) comparison region, the magnitudes of the ATSSG current vectors derived for 13 August 2006 were on average 22% (40%) higher than the corresponding vectors for the 1-m (2-m) depth radar. These results reflect near-surface vertical current structure, differences in the flow components sensed by the two techniques and disparities in instrumental performance. The vertical profile constructed here from ATSSG, HF radar and ADCP data is the first to resolve downwind drift in the upper 2 m of the open ocean. The profile e-folding depth suggests Stokes drift from waves of 10-m wavelength visible in the images.

  3. Measuring and tracking eye movements of a behaving archer fish by real-time stereo vision.

    PubMed

    Ben-Simon, Avi; Ben-Shahar, Ohad; Segev, Ronen

    2009-11-15

    The archer fish (Toxotes chatareus) exhibits unique visual behavior: it is able to aim at insects resting on foliage above the water level, shoot them down with a squirt of water, and then feed on them. This extreme behavior requires excellent visual acuity, learning, and tight synchronization between the visual system and body motion. This behavior also raises many important questions, such as the fish's ability to compensate for air-water refraction and the neural mechanisms underlying target acquisition. While many such questions remain open, significant insights towards solving them can be obtained by tracking the eye and body movements of freely behaving fish. Unfortunately, existing tracking methods suffer from either a high level of invasiveness or low resolution. Here, we present a video-based eye tracking method for accurately and remotely measuring the eye and body movements of a freely moving, behaving fish. Based on a stereo vision system and a unique triangulation method that corrects for air-glass-water refraction, we are able to measure the full three-dimensional pose of the fish's eye and body with high temporal and spatial resolution. Our method, being generic, can be applied to studying the behavior of marine animals in general. We demonstrate how data collected by our method may be used to show that the hunting behavior of the archer fish is composed of surfacing concomitant with rotating the body around the direction of the fish's fixed gaze towards the target, until the snout reaches the correct shooting position at water level.
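    The refraction correction at the heart of the triangulation follows Snell's law in vector form; the sketch below bends a camera ray at the air-water interface. The glass wall the paper also models is omitted for brevity, and the two refracted rays would then be intersected by closest approach.

    ```python
    import numpy as np

    N_AIR, N_WATER = 1.0, 1.33

    def refract(ray_dir, normal, n1=N_AIR, n2=N_WATER):
        """Snell's law for a unit ray; `normal` points against the ray."""
        d = ray_dir / np.linalg.norm(ray_dir)
        cos_i = -np.dot(normal, d)
        r = n1 / n2
        cos_t_sq = 1.0 - r ** 2 * (1.0 - cos_i ** 2)
        if cos_t_sq < 0.0:
            return None                        # total internal reflection
        return r * d + (r * cos_i - np.sqrt(cos_t_sq)) * normal

    # Example: a ray 30 degrees off vertical entering still water from above.
    d = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
    print(refract(d, np.array([0.0, 0.0, 1.0])))
    ```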

  4. Stereopsis cueing effects on hover-in-turbulence performance in a simulated rotorcraft

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Williams, Steven P.

    1990-01-01

    The efficacy of stereopsis cueing in pictorial displays was assessed in a real-time piloted simulation experiment of a rotorcraft precision hover-in-turbulence task. Seven pilots endeavored to maintain a hover by visually aligning a set of inner and outer wickets (major elements of a real-world pictorial display), thus attaining the desired hover position, in a full factorial experimental design. The display conditions examined included the presence or absence of a velocity display element (a velocity head-up display) as well as the stereopsis cueing conditions, which included non-stereo (binoptic or monoscopic; no depth cues other than those provided by a perspective, real-world display), stereo 3-D, and hyper-stereo (telestereoscopic). Subjective and objective results indicated that the depth cues provided by the stereo displays enhanced the situational awareness of the pilot and enabled improved hover performance. The velocity display element also improved the hover performance, with the best hover performance achieved with the combined use of stereo and the velocity display element. Pilot control input data revealed that less control action was required to attain the improved hover performance with the stereo displays.

  5. A stereo remote sensing feature selection method based on artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi

    2014-05-01

    To improve the efficiency of using stereo information for remote sensing classification, a stereo remote sensing feature selection method based on the artificial bee colony algorithm is proposed in this paper. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on three-dimensional structure and optical characteristics, respectively. Firstly, three-dimensional structure characteristics can be analyzed by 3D Zernike descriptors (3DZD). However, different 3DZD parameters describe different complexities of three-dimensional structure, and the parameters need to be optimally selected for the various objects on the ground. Secondly, the features representing optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features contains a great deal of redundant information, and this redundancy may not improve the classification accuracy and can even cause adverse effects. To reduce information redundancy while maintaining or improving classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve the optimization problem. Experimental results show that the proposed method can effectively improve both computational efficiency and classification accuracy.

  6. Building Change Detection in Very High Resolution Satellite Stereo Image Time Series

    NASA Astrophysics Data System (ADS)

    Tian, J.; Qin, R.; Cerra, D.; Reinartz, P.

    2016-06-01

    There is an increasing demand for robust methods of urban sprawl monitoring. The steadily increasing number of high resolution and multi-view sensors allows producing datasets with high temporal and spatial resolution; however, less effort has been dedicated to employing very high resolution (VHR) satellite image time series (SITS) to monitor changes in buildings with higher accuracy. In addition, these VHR data are often acquired from different sensors. The objective of this research is to propose a robust time-series data analysis method for VHR stereo imagery. Firstly, the spatial-temporal information of the stereo imagery and the Digital Surface Models (DSMs) generated from them are combined, and building probability maps (BPM) are calculated for all acquisition dates. In the second step, an object-based change analysis is performed based on the derivative features of the BPM sets. The change consistency between object level and pixel level is checked to remove any outlier pixels. Results are assessed on six pairs of VHR satellite images acquired within a time span of 7 years. The evaluation results prove the efficiency of the proposed method.

  7. Combined Feature Based and Shape Based Visual Tracker for Robot Navigation

    NASA Technical Reports Server (NTRS)

    Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.

    2005-01-01

    We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.

  8. Orthographic Stereo Correlator on the Terrain Model for Apollo Metric Images

    NASA Technical Reports Server (NTRS)

    Kim, Taemin; Husmann, Kyle; Moratto, Zachary; Nefian, Ara V.

    2011-01-01

    A stereo correlation method in the object domain is proposed to generate accurate and dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce high-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. Given the camera parameters of an image pair from bundle adjustment in ASP, a correlation window is defined on the terrain, with a predefined surface normal at each post, rather than in the image domain. The squared error of the back-projected images on the local terrain is minimized with respect to the post elevation. This one-dimensional optimization is solved efficiently and improves the accuracy of the elevation estimate.
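    The per-post optimization is one-dimensional and can be sketched as a scan over candidate elevations, keeping the elevation whose back-projections agree best. `project_left`, `project_right`, and `sample` below stand in for the bundle-adjusted camera models and an interpolator; they are assumptions for illustration, not ASP calls.

    ```python
    import numpy as np

    def best_elevation(post_xy, left_img, right_img,
                       project_left, project_right, sample, z_candidates):
        """Pick the elevation minimizing the squared back-projection error.

        sample(img, uv) bilinearly samples the intensities of the correlation
        window footprint at the projected pixel locations uv.
        """
        errors = []
        for z in z_candidates:
            pt3d = np.array([[post_xy[0], post_xy[1], z]])
            a = sample(left_img, project_left(pt3d))
            b = sample(right_img, project_right(pt3d))
            errors.append(np.sum((a - b) ** 2))
        return z_candidates[int(np.argmin(errors))]
    ```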

  9. Opportunity's Surroundings on Sol 1818 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figures removed for brevity, see original site] Left-eye and right-eye views of a color stereo pair for PIA11846

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,818th Martian day, or sol, of Opportunity's surface mission (March 5, 2009). South is at the center; north at both ends.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 80.3 meters (263 feet) southward earlier on that sol. Tracks from the drive recede northward in this view.

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  10. KSC-06pd2389

    NASA Image and Video Library

    2006-10-25

    KENNEDY SPACE CENTER, FLA. - The mobile service tower (right) begins to roll away from the STEREO spacecraft aboard the Delta II launch vehicle in preparation for launch. Liftoff is scheduled in a window between 8:38 and 8:53 p.m. on Oct. 25. STEREO (Solar Terrestrial Relations Observatory) is a two-year mission using two nearly identical observatories, one ahead of Earth in its orbit and the other trailing behind. The duo will provide 3-D measurements of the sun and its flow of energy, enabling scientists to study the nature of coronal mass ejections and why they happen. The ejections are a major source of the magnetic disruptions on Earth and are a key component of space weather. The disruptions can greatly affect satellite operations, communications, power systems, humans in space and global climate. Designed and built by the Johns Hopkins University Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Kim Shiflett

  11. KSC-06pd2388

    NASA Image and Video Library

    2006-10-25

    KENNEDY SPACE CENTER, FLA. - The mobile service tower begins to roll away from the STEREO spacecraft aboard the Delta II launch vehicle in preparation for launch. Liftoff is scheduled in a window between 8:38 and 8:53 p.m. on Oct. 25. STEREO (Solar Terrestrial Relations Observatory) is a two-year mission using two nearly identical observatories, one ahead of Earth in its orbit and the other trailing behind. The duo will provide 3-D measurements of the sun and its flow of energy, enabling scientists to study the nature of coronal mass ejections and why they happen. The ejections are a major source of the magnetic disruptions on Earth and are a key component of space weather. The disruptions can greatly affect satellite operations, communications, power systems, humans in space and global climate. Designed and built by the Johns Hopkins University Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Kim Shiflett

  12. KSC-06pd2390

    NASA Image and Video Library

    2006-10-25

    KENNEDY SPACE CENTER, FLA. - The mobile service tower (left) rolls away from the STEREO spacecraft aboard the Delta II launch vehicle in preparation for launch. Liftoff is scheduled in a window between 8:38 and 8:53 p.m. on Oct. 25. STEREO (Solar Terrestrial Relations Observatory) is a two-year mission using two nearly identical observatories, one ahead of Earth in its orbit and the other trailing behind. The duo will provide 3-D measurements of the sun and its flow of energy, enabling scientists to study the nature of coronal mass ejections and why they happen. The ejections are a major source of the magnetic disruptions on Earth and are a key component of space weather. The disruptions can greatly affect satellite operations, communications, power systems, humans in space and global climate. Designed and built by the Johns Hopkins University Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Kim Shiflett

  13. KSC-06pd2394

    NASA Image and Video Library

    2006-10-25

    KENNEDY SPACE CENTER, FLA. - The Delta II launch vehicle carrying the STEREO spacecraft hurtles through the smoke and steam after liftoff from Launch Pad 17-B at Cape Canaveral Air Force Station. Liftoff was at 8:52 p.m. EDT. STEREO (Solar Terrestrial Relations Observatory) is a two-year mission using two nearly identical observatories, one ahead of Earth in its orbit and the other trailing behind. The duo will provide 3-D measurements of the sun and its flow of energy, enabling scientists to study the nature of coronal mass ejections and why they happen. The ejections are a major source of the magnetic disruptions on Earth and are a key component of space weather. The disruptions can greatly affect satellite operations, communications, power systems, humans in space and global climate. Designed and built by the Johns Hopkins University Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results.

  14. KSC-06pd2401

    NASA Image and Video Library

    2006-10-25

    KENNEDY SPACE CENTER, FLA. - The Delta II rocket carrying the STEREO spacecraft on top streaks through the smoke as it climbs to orbit. Liftoff from Launch Pad 17-B at Cape Canaveral Air Force Station was at 8:52 p.m. EDT. STEREO (Solar Terrestrial Relations Observatory) is a two-year mission using two nearly identical observatories, one ahead of Earth in its orbit and the other trailing behind. The duo will provide 3-D measurements of the sun and its flow of energy, enabling scientists to study the nature of coronal mass ejections and why they happen. The ejections are a major source of the magnetic disruptions on Earth and are a key component of space weather. The disruptions can greatly affect satellite operations, communications, power systems, humans in space and global climate. Designed and built by the Johns Hopkins University Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results.

  15. Stereo-vision system for finger tracking in breast self-examination

    NASA Astrophysics Data System (ADS)

    Zeng, Jianchao; Wang, Yue J.; Freedman, Matthew T.; Mun, Seong K.

    1997-05-01

    Early detection of breast cancer, one of the leading causes of death by cancer for women in the US, is key to any strategy designed to reduce breast cancer mortality. Breast self-examination (BSE) is considered the most cost-effective approach available for early breast cancer detection because it is simple and non-invasive, and a large fraction of breast cancers are actually found by patients using this technique today. In BSE, the patient should use a proper search strategy to cover the whole breast region in order to detect all possible tumors. At present there is no objective approach or clinical data to evaluate the effectiveness of a particular BSE strategy. Even if a particular strategy is determined to be the most effective, training women to use it is still difficult because there is no objective way for them to know whether they are doing it correctly. We have developed a system using vision-based motion tracking technology to gather quantitative data about the breast palpation process for analysis of the BSE technique. By tracking the position of the fingers, the system can provide the first objective quantitative data about the BSE process, and thus can improve our knowledge of the technique and help analyze its effectiveness. By visually displaying all the touched-position information to the patient as the BSE is being conducted, the system can provide interactive feedback to the patient and serves as a prototype for a computer-based BSE training system. We propose to place color features on the fingernails and track them, because in breast palpation the background is the breast itself, which is similar to the hand in color; this situation can hinder the efficiency of other features if real-time performance is required. To simplify the feature extraction process, a color transform is used instead of raw RGB values. Although the clinical environment will be well illuminated, normalization of color attributes is applied to compensate for minor changes in illumination. Neighbor search is employed to ensure real-time performance, and a three-finger pattern topology is always checked for extracted features to avoid false features. After detecting the features in the images, the 3D position parameters of the colored fingers are calculated using the stereo vision principle. In the experiments, a performance of 15 frames/second is obtained using an image size of 160 x 120 and an SGI Indy MIPS R4000 workstation. The system is robust and accurate, which confirms the performance and effectiveness of the proposed approach. The system can be used to quantify the search strategy of the palpation and to document it. With real-time visual feedback, it can be used to train both patients and new physicians to improve their performance of palpation and thus improve the rate of breast tumor detection.
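    The tracking loop described above (segment the nail markers in a normalized color space, take their centroids, and triangulate across the calibrated pair) can be sketched with OpenCV. The HSV thresholds are placeholders (the original system used its own color transform), and P_left/P_right stand for the calibrated stereo projection matrices.

    ```python
    import cv2
    import numpy as np

    LOW, HIGH = (100, 120, 80), (130, 255, 255)     # placeholder marker color

    def marker_centroid(bgr):
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)  # tolerant to illumination
        mask = cv2.inRange(hsv, LOW, HIGH)
        m = cv2.moments(mask)
        if m["m00"] == 0:
            return None                             # marker not visible
        return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

    def finger_position_3d(P_left, P_right, pt_left, pt_right):
        """Two-view triangulation of one marker from the stereo pair."""
        X = cv2.triangulatePoints(P_left, P_right,
                                  pt_left.reshape(2, 1), pt_right.reshape(2, 1))
        return (X[:3] / X[3]).ravel()               # homogeneous -> 3D point
    ```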

  16. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential in determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  17. An automated, open-source pipeline for mass production of digital elevation models (DEMs) from very-high-resolution commercial stereo satellite imagery

    NASA Astrophysics Data System (ADS)

    Shean, David E.; Alexandrov, Oleg; Moratto, Zachary M.; Smith, Benjamin E.; Joughin, Ian R.; Porter, Claire; Morin, Paul

    2016-06-01

    We adapted the automated, open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline a processing workflow for ˜0.5 m ground sample distance (GSD) DigitalGlobe WorldView-1 and WorldView-2 along-track stereo image data, with an overview of ASP capabilities, an evaluation of ASP correlator options, benchmark test results, and two case studies of DEM accuracy. Output DEM products are posted at ˜2 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of ˜0.1-0.5 m for overlapping, co-registered DEMs (n = 14, 17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We are leveraging these resources to produce dense time series and regional mosaics for the Earth's polar regions.

  18. Driving in traffic: short-range sensing for urban collision avoidance

    NASA Astrophysics Data System (ADS)

    Thorpe, Chuck E.; Duggins, David F.; Gowdy, Jay W.; MacLaughlin, Rob; Mertz, Christoph; Siegel, Mel; Suppe, Arne; Wang, Chieh-Chih; Yata, Teruko

    2002-07-01

    Intelligent vehicles are beginning to appear on the market, but so far their sensing and warning functions only work on the open road. Functions such as runoff-road warning or adaptive cruise control are designed for the uncluttered environments of open highways. We are working on the much more difficult problem of sensing and driver interfaces for driving in urban areas. We need to sense cars and pedestrians and curbs and fire plugs and bicycles and lamp posts; we need to predict the paths of our own vehicle and of other moving objects; and we need to decide when to issue alerts or warnings to both the driver of our own vehicle and (potentially) to nearby pedestrians. No single sensor is currently able to detect and track all relevant objects. We are working with radar, ladar, stereo vision, and a novel light-stripe range sensor. We have installed a subset of these sensors on a city bus, driving through the streets of Pittsburgh on its normal runs. We are using different kinds of data fusion for different subsets of sensors, plus a coordinating framework for mapping objects at an abstract level.

  19. WPSS: watching people security services

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; Borsboom, Sander; van Zon, Kasper; Luo, Xinghan; Loke, Ben; Stoeller, Bram; van Kuilenburg, Hans; Dijk, Judith

    2013-10-01

    To improve security, the number of surveillance cameras is rapidly increasing. However, the number of human operators remains limited and only a selection of the video streams are observed. Intelligent software services can help to find people quickly, evaluate their behavior and show the most relevant and deviant patterns. We present a software platform that contributes to the retrieval and observation of humans and to the analysis of their behavior. The platform consists of mono- and stereo-camera tracking, re-identification, behavioral feature computation, track analysis, behavior interpretation and visualization. This system is demonstrated in a busy shopping mall with multiple cameras and different lighting conditions.

  20. An image engineering system for the inspection of transparent construction materials

    NASA Astrophysics Data System (ADS)

    Hinz, S.; Stephani, M.; Schiemann, L.; Zeller, K.

    This article presents a modular photogrammetric recording and image analysis system for inspecting the material characteristics of transparent foils, in particular Ethylen-TetraFluorEthylen-Copolymer (ETFE) foils. The foils are put under increasing air pressure and are observed by a stereo camera system. Determining the time-variable 3D shape of transparent material imposes a number of challenges, especially the automatic point transfer between stereo images and, in the temporal domain, from one image pair to the next. We developed an automatic approach that accommodates these particular circumstances and allows reconstruction of the 3D shape for each epoch as well as determination of 3D translation vectors between epochs by feature tracking. Examples including numerical results and accuracy measures prove the applicability of the system.

  1. Automated dynamic feature tracking of RSLs on the Martian surface through HiRISE super-resolution restoration and 3D reconstruction techniques

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.

    2017-09-01

    In this paper, we demonstrate novel super-resolution restoration (SRR) and 3D reconstruction tools developed within the EU FP7 projects and their application to advanced dynamic feature tracking through HiRISE repeat stereo. We show an example for one of the RSL sites in Palikir Crater, where 8 repeat-pass 25 cm HiRISE images were used to generate a 5 cm RSL-free SRR image with GPT-SRR. Together with repeat 3D modelling of the same area, this allows us to overlay tracked dynamic features onto the reconstructed "original" surface, providing a much more comprehensive interpretation of the surface formation processes in 3D.

  2. 3D environment modeling and location tracking using off-the-shelf components

    NASA Astrophysics Data System (ADS)

    Luke, Robert H.

    2016-05-01

    The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low power and high performance. These sensors include accelerometers, gyroscope, barometers and most importantly cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high performance platforms for low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on tracking using stereo vision and inertial navigation to determine movement of the system as well as create a model of the environment sensed by the system.

  3. Image-Guided Intraoperative Cortical Deformation Recovery Using Game Theory: Application to Neocortical Epilepsy Surgery

    PubMed Central

    DeLorenzo, Christine; Papademetris, Xenophon; Staib, Lawrence H.; Vives, Kenneth P.; Spencer, Dennis D.; Duncan, James S.

    2010-01-01

    During neurosurgery, nonrigid brain deformation prevents preoperatively-acquired images from accurately depicting the intraoperative brain. Stereo vision systems can be used to track intraoperative cortical surface deformation and update preoperative brain images in conjunction with a biomechanical model. However, these stereo systems are often plagued with calibration error, which can corrupt the deformation estimation. In order to decouple the effects of camera calibration from the surface deformation estimation, a framework that can solve for disparate and often competing variables is needed. Game theory, which was developed to handle decision making in this type of competitive environment, has been applied to various fields from economics to biology. In this paper, game theory is applied to cortical surface tracking during neocortical epilepsy surgery and used to infer information about the physical processes of brain surface deformation and image acquisition. The method is successfully applied to eight in vivo cases, resulting in an 81% decrease in mean surface displacement error. This includes a case in which some of the initial camera calibration parameters had errors of 70%. Additionally, the advantages of using a game theoretic approach in neocortical epilepsy surgery are clearly demonstrated in its robustness to initial conditions. PMID:20129844

  4. Motion correction for passive radiation imaging of small vessels in ship-to-ship inspections

    NASA Astrophysics Data System (ADS)

    Ziock, K. P.; Boehnen, C. B.; Ernst, J. M.; Fabris, L.; Hayward, J. P.; Karnowski, T. P.; Paquit, V. C.; Patlolla, D. R.; Trombino, D. G.

    2016-01-01

    Passive radiation detection remains one of the most acceptable means of ascertaining the presence of illicit nuclear materials. In maritime applications it is most effective against small to moderately sized vessels, where attenuation in the target vessel is of less concern. Unfortunately, imaging methods that can remove source confusion, localize a source, and avoid other systematic detection issues cannot be easily applied in ship-to-ship inspections because relative motion of the vessels blurs the results over many pixels, significantly reducing system sensitivity. This is particularly true for the smaller watercraft, where passive inspections are most valuable. We have developed a combined gamma-ray, stereo visible-light imaging system that addresses this problem. Data from the stereo imager are used to track the relative location and orientation of the target vessel in the field of view of a coded-aperture gamma-ray imager. Using this information, short-exposure gamma-ray images are projected onto the target vessel using simple tomographic back-projection techniques, revealing the location of any sources within the target. The complex autonomous tracking and image reconstruction system runs in real time on a 48-core workstation that deploys with the system.

  5. Real-time geometry-aware augmented reality in minimally invasive surgery.

    PubMed

    Chen, Long; Tang, Wen; John, Nigel W

    2017-10-01

    The potential of augmented reality (AR) technology to assist minimally invasive surgery (MIS) lies in its computational performance and accuracy in dealing with challenging MIS scenes. Even with the latest hardware and software technologies, achieving both real-time and accurate augmented information overlay in MIS is still a formidable task. In this Letter, the authors present a novel real-time AR framework for MIS that achieves interactive geometry-aware AR in endoscopic surgery with stereo views. The authors' framework tracks the movement of the endoscopic camera and simultaneously reconstructs a dense geometric mesh of the MIS scene. The movement of the camera is predicted by minimising the re-projection error to achieve a fast tracking performance, while the three-dimensional mesh is incrementally built by a dense zero-mean normalised cross-correlation (ZNCC) stereo-matching method (sketched below) to improve the accuracy of the surface reconstruction. The proposed system does not require any prior template or pre-operative scan and can infer the geometric information intra-operatively in real time. With the geometric information available, the proposed AR framework is able to interactively add annotations, localise tumours and vessels, and apply measurement labelling with greater precision and accuracy compared with the state-of-the-art approaches.
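
    The zero-mean normalised cross-correlation measure named above is simple to state in code. The sketch below scores one rectified patch pair and runs a brute-force scanline disparity search; the authors' incremental mesh construction and tracking loop are not reproduced here.

      # Minimal ZNCC patch score and disparity search (illustrative).
      import numpy as np

      def zncc(a, b, eps=1e-8):
          a = a - a.mean()
          b = b - b.mean()
          return float((a * b).sum() /
                       (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps))

      def best_disparity(left, right, x, y, half=5, max_d=64):
          """Best ZNCC disparity for pixel (x, y) on a rectified pair."""
          ref = left[y - half:y + half + 1, x - half:x + half + 1]
          scores = [zncc(ref, right[y - half:y + half + 1,
                                    x - d - half:x - d + half + 1])
                    for d in range(min(max_d, x - half))]
          return int(np.argmax(scores))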

  6. Quantifying cortical surface harmonic deformation with stereovision during open cranial neurosurgery

    NASA Astrophysics Data System (ADS)

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Paulsen, Keith D.

    2012-02-01

    Cortical surface harmonic motion during open cranial neurosurgery is well observed in image-guided neurosurgery. Recently, we quantified cortical surface deformation noninvasively with synchronized blood pressure pulsation (BPP) from a sequence of stereo image pairs using optical flow motion tracking. With three subjects, we found the average cortical surface displacement can reach more than 1 mm and in-plane principal strains of up to 7% relative to the first image pair. In addition, the temporal changes in deformation and strain were in concert with BPP and patient respiration [1]. However, because deformation was essentially computed relative to an arbitrary reference, comparing cortical surface deformation at different times was not possible. In this study, we extend the technique developed earlier by establishing a more reliable reference profile of the cortical surface for each sequence of stereo image acquisitions. Specifically, fast Fourier transform (FFT) was applied to the dynamic cortical surface deformation, and the fundamental frequencies corresponding to patient respiration and BPP were identified, which were used to determine the number of image acquisitions for use in averaging cortical surface images. This technique is important because it potentially allows in vivo characterization of soft tissue biomechanical properties using intraoperative stereovision and motion tracking.
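
    The frequency-identification step can be illustrated in a few lines: subtract the mean from a tracked displacement trace, take its FFT, and read off the strongest components, which the abstract associates with respiration (~0.2 Hz) and BPP (~1 Hz). The sampling rate and trace are assumed inputs; this is not the authors' implementation.

      # Illustrative FFT peak picking on a cortical displacement trace.
      import numpy as np

      def fundamental_freqs(displacement, fs, n_peaks=2):
          """displacement: 1D trace; fs: stereo acquisition rate in Hz."""
          spec = np.abs(np.fft.rfft(displacement - displacement.mean()))
          freqs = np.fft.rfftfreq(len(displacement), d=1.0 / fs)
          strongest = np.argsort(spec)[::-1][:n_peaks]
          return freqs[strongest]   # e.g. ~0.2 Hz respiration, ~1 Hz BPP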

  7. Motion correction for passive radiation imaging of small vessels in ship-to-ship inspections

    DOE PAGES

    Ziock, Klaus-Peter; Boehnen, Chris Bensing; Ernst, Joseph M.; ...

    2015-09-05

    Passive radiation detection remains one of the most acceptable means of ascertaining the presence of illicit nuclear materials. In maritime applications it is most effective against small to moderately sized vessels, where attenuation in the target vessel is of less concern. Unfortunately, imaging methods that can remove source confusion, localize a source, and avoid other systematic detection issues cannot be easily applied in ship-to-ship inspections because relative motion of the vessels blurs the results over many pixels, significantly reducing system sensitivity. This is particularly true for the smaller watercraft, where passive inspections are most valuable. We have developed a combined gamma-ray, stereo visible-light imaging system that addresses this problem. Data from the stereo imager are used to track the relative location and orientation of the target vessel in the field of view of a coded-aperture gamma-ray imager. Using this information, short-exposure gamma-ray images are projected onto the target vessel using simple tomographic back-projection techniques, revealing the location of any sources within the target. Here, the complex autonomous tracking and image reconstruction system runs in real time on a 48-core workstation that deploys with the system.

  8. A Self-Assessment Stereo Capture Model Applicable to the Internet of Things

    PubMed Central

    Lin, Yancong; Yang, Jiachen; Lv, Zhihan; Wei, Wei; Song, Houbing

    2015-01-01

    The realization of the Internet of Things greatly depends on the information communication among physical terminal devices and informationalized platforms, such as smart sensors, embedded systems and intelligent networks. Playing an important role in information acquisition, sensors for stereo capture have gained extensive attention in various fields. In this paper, we concentrate on promoting such sensors in an intelligent system with self-assessment capability to deal with the distortion and impairment in long-distance shooting applications. The core design is the establishment of objective evaluation criteria that can reliably predict shooting quality with different camera configurations. Two types of stereo capture systems, a toed-in camera configuration and a parallel camera configuration, are considered. The experimental results show that the proposed evaluation criteria can effectively predict the visual perception of stereo capture quality for long-distance shooting. PMID:26308004

  9. Study of 3D bathymetry modelling using LAPAN Surveillance Unmanned Aerial Vehicle 02 (LSU-02) photo data with stereo photogrammetry technique, Wawaran Beach, Pacitan, East Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Sari, N. M.; Nugroho, J. T.; Chulafak, G. A.; Kushardono, D.

    2018-05-01

    The coast is an ecosystem with unique objects and phenomena. Aerial photo data with very high spatial resolution covering coastal areas have extensive potential. One such dataset is LAPAN Surveillance UAV 02 (LSU-02) photo data, acquired in 2016 with a spatial resolution reaching 10 cm. This research aims to create an initial bathymetry model with the stereo photogrammetry technique using LSU-02 data. In this research, the bathymetry model was made by constructing a 3D model with the stereo photogrammetry technique, utilizing the dense point cloud created from the overlap of those photos. The result shows that a 3D bathymetry model can be built with the stereo photogrammetry technique, as can be seen from the surface and the bathymetry transect profile.

  10. The CAVE (TM) automatic virtual environment: Characteristics and applications

    NASA Technical Reports Server (NTRS)

    Kenyon, Robert V.

    1995-01-01

    Virtual reality may best be defined as the wide-field presentation of computer-generated, multi-sensory information that tracks a user in real time. In addition to the more well-known modes of virtual reality -- head-mounted displays and boom-mounted displays -- the Electronic Visualization Laboratory at the University of Illinois at Chicago recently introduced a third mode: a room constructed from large screens on which the graphics are projected onto three walls and the floor. The CAVE is a multi-person, room-sized, high-resolution, 3D video and audio environment. Graphics are rear-projected in stereo onto three walls and the floor, and viewed with stereo glasses. As a viewer wearing a location sensor moves within its display boundaries, the correct perspective and stereo projections of the environment are updated, and the image moves with and surrounds the viewer. The other viewers in the CAVE are like passengers in a bus, along for the ride. 'CAVE,' the name selected for the virtual reality theater, is both a recursive acronym (Cave Automatic Virtual Environment) and a reference to 'The Simile of the Cave' found in Plato's 'Republic,' in which the philosopher explores the ideas of perception, reality, and illusion. Plato used the analogy of a person facing the back of a cave alive with shadows that are his/her only basis for ideas of what real objects are. Rather than having evolved from video games or flight simulation, the CAVE has its motivation rooted in scientific visualization and the SIGGRAPH 92 Showcase effort. The CAVE was designed to be a useful tool for scientific visualization. The Showcase event was an experiment; the Showcase chair and committee advocated an environment for computational scientists to interactively present their research at a major professional conference in a one-to-many format on high-end workstations attached to large projection screens. The CAVE was developed as a 'virtual reality theater' with scientific content and projection that met the criteria of Showcase.

  11. Working and Learning with Knowledge in the Lobes of a Humanoid's Mind

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert; Savely, Robert; Bluethmann, William; Kortenkamp, David

    2003-01-01

    Humanoid class robots must have sufficient dexterity to assist people and work in an environment designed for human comfort and productivity. This dexterity, in particular the ability to use tools, requires a cognitive understanding of self and the world that exceeds contemporary robotics. Our hypothesis is that the sense-think-act paradigm that has proven so successful for autonomous robots is missing one or more key elements that will be needed for humanoids to meet their full potential as autonomous human assistants. This key ingredient is knowledge. The presented work includes experiments conducted on the Robonaut system, a NASA and Defense Advanced Research Projects Agency (DARPA) joint project, and includes collaborative efforts with a DARPA Mobile Autonomous Robot Software technical program team of researchers at NASA, MIT, USC, NRL, UMass and Vanderbilt. The paper reports on results in the areas of human-robot interaction (human tracking, gesture recognition, natural language, supervised control), perception (stereo vision, object identification, object pose estimation), autonomous grasping (tactile sensing, grasp reflex, grasp stability) and learning (human instruction, task level sequences, and sensorimotor association).

  12. Certainty grids for mobile robots

    NASA Technical Reports Server (NTRS)

    Moravec, H. P.

    1987-01-01

    A numerical representation of uncertain and incomplete sensor knowledge called Certainty Grids has been used successfully in several mobile robot control programs, and has proven itself to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. Researchers propose to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves. The certainty grid representation will allow this map to be incrementally updated in a uniform way from various sources including sonar, stereo vision, proximity and contact sensors. The approach can correctly model the fuzziness of each reading, while at the same time combining multiple measurements to produce sharper map features, and it can deal correctly with uncertainties in the robot's motion. The map will be used by planning programs to choose clear paths, identify locations (by correlating maps), identify well-known and insufficiently sensed terrain, and perhaps identify objects by shape. The certainty grid representation can be extended in the time dimension and used to detect and track moving objects (a minimal update rule is sketched below).
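
    The update rule behind such a grid is compact enough to sketch: each cell holds a belief that is nudged by every sensor reading, so repeated fuzzy measurements sharpen into confident map features. The log-odds form below is a common modern rendering of the idea, not Moravec's original formulation.

      # Minimal log-odds certainty-grid update (illustrative).
      import numpy as np

      grid = np.zeros((100, 100))   # log-odds per cell; 0 means unknown

      def update_cell(i, j, p_occ):
          """Fuse one reading with inverse-sensor probability p_occ."""
          grid[i, j] += np.log(p_occ / (1.0 - p_occ))

      def occupancy(i, j):
          """Recover the occupancy probability of cell (i, j)."""
          return 1.0 - 1.0 / (1.0 + np.exp(grid[i, j]))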

  13. A UAV-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (Turkey)

    NASA Astrophysics Data System (ADS)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and geo-objects using UAV-attached digital small-frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but, when an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions, two single aerial images do not always meet the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from slightly different angles at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited photo base of the applied stereo camera and the resulting base-to-height ratio, however, the accuracy of the DTM directly depends on the UAV flight altitude (see the error relation below).
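
    The altitude dependence noted above follows from standard normal-case stereo error propagation. With stereo base B, focal length f, flying height above ground Z, and an image-space parallax measurement error \sigma_p, a commonly quoted relation (stated here for illustration, not taken from the paper) is

      \sigma_Z \approx \frac{Z^2}{B f} \, \sigma_p

    i.e. the depth error grows quadratically with altitude for a fixed, small photo base, which is the dependence the authors report.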

  14. ARRIVAL TIME CALCULATION FOR INTERPLANETARY CORONAL MASS EJECTIONS WITH CIRCULAR FRONTS AND APPLICATION TO STEREO OBSERVATIONS OF THE 2009 FEBRUARY 13 ERUPTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moestl, C.; Rollett, T.; Temmer, M.

    2011-11-01

    One of the goals of the NASA Solar TErrestrial RElations Observatory (STEREO) mission is to study the feasibility of forecasting the direction, arrival time, and internal structure of solar coronal mass ejections (CMEs) from a vantage point outside the Sun-Earth line. Through a case study, we discuss the arrival time calculation of interplanetary CMEs (ICMEs) in the ecliptic plane using data from STEREO/SECCHI at large elongations from the Sun in combination with different geometric assumptions about the ICME front shape [fixed-Φ (FP): a point, and harmonic mean (HM): a circle]. These forecasting techniques use single-spacecraft imaging data and are based on the assumption of constant velocity and direction. We show that for the slow (350 km s^-1) ICME of 2009 February 13-18, observed at quadrature by the two STEREO spacecraft, the results for the arrival time given by the HM approximation are more accurate by 12 hr than those for FP in comparison to in situ observations of solar wind plasma and magnetic field parameters by STEREO/IMPACT/PLASTIC, and by 6 hr for the arrival time at Venus Express (MAG). We propose that the improvement is directly related to the ICME front shape being more accurately described by HM for an ICME with a low inclination of its symmetry axis to the ecliptic. In this case, the ICME has to be tracked to >30° elongation to obtain arrival time errors < ±5 hr. A newly derived formula for calculating arrival times with the HM method is also useful for a triangulation technique assuming the same geometry.
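
    For reference, the elongation-to-distance conversions commonly used with these two front-shape assumptions can be written as follows, where d_0 is the observer's heliocentric distance, \epsilon the measured elongation, and \phi the angle between the propagation direction and the observer-Sun line. These are the standard published forms for the FP and HM geometries, not necessarily the paper's newly derived arrival-time expression:

      d_{FP}(\epsilon) = d_0 \, \frac{\sin\epsilon}{\sin(\epsilon + \phi)}, \qquad
      d_{HM}(\epsilon) = \frac{2 d_0 \sin\epsilon}{1 + \sin(\epsilon + \phi)}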

  15. Biomass Retrieval from L-Band Polarimetric UAVSAR Backscatter and PRISM Stereo Imagery

    NASA Technical Reports Server (NTRS)

    Zhang, Zhiyu; Ni, Wenjian; Sun, Guoqing; Huang, Wenli; Ranson, Kenneth J.; Cook, Bruce D.; Guo, Zhifeng

    2017-01-01

    The forest above-ground biomass (AGB) and spatial distribution of vegetation elements have profound effects on the productivity and biodiversity of terrestrial ecosystems. In this paper, we evaluated biomass estimation from L-band Synthetic Aperture Radar (SAR) data acquired by National Aeronautics and Space Administration (NASA) Uninhabited Aerial Vehicle SAR (UAVSAR) and the improvement in accuracy from adding canopy height information derived from stereo imagery acquired by Japan Aerospace Exploration Agency (JAXA) Panchromatic Remote Sensing Instrument for Stereo Mapping (PRISM) on-board the Advanced Land Observing Satellite (ALOS). Various models for prediction of forest biomass from UAVSAR data were investigated at pixel sizes of 1/4 ha (50 m x 50 m) and 1 ha. The variance inflation factor (VIF) was calculated for each of the explanatory variables in multivariable regression models to assess the multi-collinearity between explanatory variables, and the t- and p-values were used to interpret the significance of the coefficients of each explanatory variable. The R², Root Mean Square Error (RMSE), bias, Akaike information criterion (AIC), leave-one-out cross-validation (LOOCV), and bootstrapping were used to validate models. At the 1/4-ha scale, the R² and RMSE of biomass estimation from a model using a single track of polarimetric UAVSAR data were 0.59 and 52.08 Mg/ha. With canopy height from PRISM as an additional independent variable, R² increased to 0.76 and RMSE decreased to 39.74 Mg/ha (28.24%). At the 1-ha scale, the RMSE of biomass estimation based on UAVSAR data of a single track was 39.42 Mg/ha with an R² of 0.77. With the canopy height from PRISM, R² increased to 0.86 and RMSE decreased to 29.47 Mg/ha (20.18%). The models using UAVSAR data alone underestimated biomass at levels above approximately 150 Mg/ha, showing the saturation phenomenon; adding canopy height from PRISM stereo imagery significantly improved the biomass estimation and elevated the saturation level. Combined use of UAVSAR data acquired from opposite directions (odd and even tracks) slightly improved the biomass estimation at the 1/4-ha scale: R² increased from 0.59 to 0.66 and RMSE decreased from 52.08 to 48.57 Mg/ha. Averaging multiple acquisitions of UAVSAR data from the same look azimuth direction did not improve biomass estimation. A biomass map derived from NASA's LVIS (Laser Vegetation Imaging System) waveform data was used as a reference for evaluating the biomass maps from these models. The study also showed that the errors decreased when deciduous, evergreen, and mixed forests were modeled separately, but the improvement was not significant.
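
    The VIF screening mentioned above has a direct definition: regress each explanatory variable on all the others and take VIF_i = 1/(1 - R_i^2). A minimal sketch, assuming the design matrix X is already assembled:

      # Variance inflation factors for the columns of X (illustrative).
      import numpy as np

      def vif(X):
          """X: (n_samples, n_vars) matrix of explanatory variables."""
          out = []
          for i in range(X.shape[1]):
              y = X[:, i]
              A = np.delete(X, i, axis=1)
              A = np.column_stack([A, np.ones(len(A))])   # intercept term
              beta, *_ = np.linalg.lstsq(A, y, rcond=None)
              r2 = 1.0 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
              out.append(1.0 / max(1.0 - r2, 1e-12))      # VIF_i = 1/(1-R_i^2)
          return out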

  16. Three-channel dynamic photometric stereo: a new method for 4D surface reconstruction and volume recovery

    NASA Astrophysics Data System (ADS)

    Schroeder, Walter; Schulze, Wolfram; Wetter, Thomas; Chen, Chi-Hsien

    2008-08-01

    Three-dimensional (3D) body surface reconstruction is an important field in health care. A popular method for this purpose is laser scanning. However, using Photometric Stereo (PS) to record lumbar lordosis and the surface contour of the back poses a viable alternative due to its lower costs and higher flexibility compared to laser techniques and other methods of three-dimensional body surface reconstruction. In this work, we extended the traditional PS method and proposed a new method for obtaining surface and volume data of a moving object. The principle of traditional Photometric Stereo uses at least three images of a static object taken under different light sources to obtain 3D information of the object. Instead of using normal light, the light sources in the proposed method consist of the RGB-Color-Model's three colors: red, green and blue. A series of pictures taken with a video camera can now be separated into the different color channels. Each set of the three images can then be used to calculate the surface normals as a traditional PS. This method waives the requirement that the object imaged must be kept still as in almost all the other body surface reconstruction methods. By putting two cameras opposite to a moving object and lighting the object with the colored light, the time-varying surface (4D) data can easily be calculated. The obtained information can be used in many medical fields such as rehabilitation, diabetes screening or orthopedics.
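
    The core computation is the classic photometric-stereo step: with three known light directions stacked in a matrix L, and the three per-pixel intensities taken here from the R, G and B channels, the albedo-scaled normal follows from solving L g = i. The light directions below are placeholders standing in for a calibrated rig.

      # Per-pixel normal recovery from three colour-channel intensities.
      import numpy as np

      L = np.array([[0.0, 0.0, 1.0],    # red-light direction (assumed)
                    [0.7, 0.0, 0.7],    # green-light direction (assumed)
                    [0.0, 0.7, 0.7]])   # blue-light direction (assumed)

      def normal_from_rgb(i_rgb):
          """i_rgb: one pixel's intensity in the three colour channels."""
          g = np.linalg.solve(L, i_rgb)           # g = albedo * normal
          return g / (np.linalg.norm(g) + 1e-12)  # unit surface normal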

  17. Combining Stereo SECCHI COR2 and HI1 Images for Automatic CME Front Edge Tracking

    NASA Technical Reports Server (NTRS)

    Kirnosov, Vladimir; Chang, Lin-Ching; Pulkkinen, Antti

    2016-01-01

    COR2 coronagraph images are the most commonly used data for coronal mass ejection (CME) analysis among the various types of data provided by the STEREO (Solar Terrestrial Relations Observatory) SECCHI (Sun-Earth Connection Coronal and Heliospheric Investigation) suite of instruments. The field of view (FOV) of COR2 images covers 2-15 solar radii (Rs), which allows tracking the front edge of a CME in its initial stage to forecast the lead-time of a CME and its chances of reaching the Earth. However, estimating the lead-time of a CME using COR2 images gives a larger lead-time, which may be associated with greater uncertainty. To reduce this uncertainty, CME front edge tracking should be continued beyond the FOV of COR2 images. Therefore, heliospheric imager (HI1) data that cover a 15-90 Rs FOV must be included. In this paper, we propose a novel automatic method that takes both COR2 and HI1 images into account and combines the results to track the front edge of a CME continuously. The method consists of two modules: pre-processing and tracking. The pre-processing module produces a set of segmented images, which contain the signature of a CME, for both COR2 and HI1 separately. In addition, the HI1 images are resized and padded, so that the center of the Sun is the central coordinate of the resized HI1 images. The resulting COR2 and HI1 image set is then fed into the tracking module to estimate the position angle (PA) and track the front edge of a CME. The detected front edge is then used to produce a height-time profile that is used to estimate the speed of a CME. The method was validated using 15 CME events observed in the period from January 1, 2008 to August 31, 2009. The results demonstrate that the proposed method is effective for CME front edge tracking in both COR2 and HI1 images. Using this method, the CME front edge can now be tracked automatically and continuously in a much larger range, i.e., from 2 to 90 Rs, for the first time. These improvements can greatly help in making quantitative CME analysis more accurate and have the potential to assist in space weather forecasting.
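
    Turning the height-time profile into a speed estimate is, at its simplest, a straight-line fit, under the assumption that the tracked front-edge heights grow at near-constant speed over the fitted interval. A minimal sketch, not the authors' estimator:

      # Constant-speed estimate from a CME height-time profile.
      import numpy as np

      R_SUN_KM = 6.957e5   # solar radius in km

      def cme_speed(times_s, heights_rs):
          """times_s: seconds; heights_rs: front-edge heights in solar radii."""
          slope, _intercept = np.polyfit(times_s, heights_rs, 1)
          return slope * R_SUN_KM   # km/s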

  18. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes (the optical image and the depth map), camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach, and both show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally in focus, which enhances finding stereo correspondences. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information, better tracking capabilities compared to the monocular case can be expected. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
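
    The "Kalman-like" per-pixel update described above amounts to inverse-variance weighting of each new virtual-depth observation, i.e. the scalar Kalman measurement update with no process model. A minimal sketch under that reading:

      # Scalar depth fusion: current estimate (d, var) + new observation.
      def fuse_depth(d, var, d_obs, var_obs):
          k = var / (var + var_obs)                  # Kalman gain
          return d + k * (d_obs - d), (1.0 - k) * var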

  19. Extraction and textural characterization of above-ground areas from aerial stereo pairs: a quality assessment

    NASA Astrophysics Data System (ADS)

    Baillard, C.; Dissard, O.; Jamet, O.; Maître, H.

    Above-ground analysis is a key point in the reconstruction of urban scenes, but it is a difficult task because of the diversity of the objects involved. We propose a new method for above-ground extraction from an aerial stereo pair which does not require any assumption about object shape or nature. A Digital Surface Model is first produced by a stereoscopic matching stage preserving discontinuities, and then processed by a region-based Markovian classification algorithm. The produced above-ground areas are finally characterized as man-made or natural according to the grey-level information. The quality of the results is assessed and discussed.

  20. The Information Available to a Moving Observer on Shape with Unknown, Isotropic BRDFs.

    PubMed

    Chandraker, Manmohan

    2016-07-01

    Psychophysical studies show motion cues inform about shape even with unknown reflectance. Recent works in computer vision have considered shape recovery for an object of unknown BRDF using light source or object motions. This paper proposes a theory that addresses the remaining problem of determining shape from the (small or differential) motion of the camera, for unknown isotropic BRDFs. Our theory derives a differential stereo relation that relates camera motion to surface depth, which generalizes traditional Lambertian assumptions. Under orthographic projection, we show differential stereo may not determine shape for general BRDFs, but suffices to yield an invariant for several restricted (still unknown) BRDFs exhibited by common materials. For the perspective case, we show that differential stereo yields the surface depth for unknown isotropic BRDF and unknown directional lighting, while additional constraints are obtained with restrictions on the BRDF or lighting. The limits imposed by our theory are intrinsic to the shape recovery problem and independent of choice of reconstruction method. We also illustrate trends shared by theories on shape from differential motion of light source, object or camera, to relate the hardness of surface reconstruction to the complexity of imaging setup.

  1. MEDIASSIST: medical assistance for intraoperative skill transfer in minimally invasive surgery using augmented reality

    NASA Astrophysics Data System (ADS)

    Sudra, Gunther; Speidel, Stefanie; Fritz, Dominik; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger

    2007-03-01

    Minimally invasive surgery is a highly complex medical discipline with various risks for surgeon and patient, but it also has numerous advantages on the patient's side. The surgeon has to adopt special operation techniques and deal with difficulties like the complex hand-eye coordination, limited field of view and restricted mobility. To alleviate these problems, we propose to support the surgeon's spatial cognition by using augmented reality (AR) techniques to directly visualize virtual objects in the surgical site. In order to generate intelligent support, it is necessary to have an intraoperative assistance system that recognizes the surgical skills during the intervention and provides context-aware assistance to the surgeon using AR techniques. With MEDIASSIST we bundle our research activities in the field of intraoperative intelligent support and visualization. Our experimental setup consists of a stereo endoscope, an optical tracking system and a head-mounted display for 3D visualization. The framework will be used as a platform for the development and evaluation of our research in the field of skill recognition and context-aware assistance generation. This includes methods for surgical skill analysis, skill classification, context interpretation as well as assistive visualization and interaction techniques. In this paper we present the objectives of MEDIASSIST and first results in the fields of skill analysis, visualization and multi-modal interaction. In detail, we present markerless instrument tracking for surgical skill analysis as well as visualization techniques and recognition of interaction gestures in an AR environment.

  2. Fault-tolerant feature-based estimation of space debris rotational motion during active removal missions

    NASA Astrophysics Data System (ADS)

    Biondi, Gabriele; Mauro, Stefano; Pastorelli, Stefano; Sorli, Massimo

    2018-05-01

    One of the key functionalities required by an Active Debris Removal mission is the assessment of the target's kinematics and inertial properties. Passive sensors, such as stereo cameras, are often included in the onboard instrumentation of a chaser spacecraft for capturing sequential photographs and for tracking features of the target surface. A variety of methods, based on Kalman filtering, are available for the estimation of the target's state from feature positions; however, to guarantee filter convergence, they typically require continuity of measurements and the capability of tracking a fixed set of pre-defined features of the object. These requirements clash with the actual tracking conditions: failures in feature detection often occur, and the assumption of having some a priori knowledge about the shape of the target could be restrictive in certain cases. The aim of the presented work is to propose a fault-tolerant alternative method for estimating the angular velocity and the relative magnitudes of the principal moments of inertia of the target. Raw data regarding the positions of the tracked features are processed to evaluate corrupted values of a 3-dimensional parameter which entirely describes the finite screw motion of the debris and which is invariant to the particular set of considered features of the object. Missing values of the parameter are completely restored by exploiting the typical periodicity of the rotational motion of an uncontrolled satellite: compressed sensing techniques, typically adopted for recovering images or for prognostic applications, are used here in a completely original fashion for retrieving a kinematic signal that appears sparse in the frequency domain (a toy illustration follows below). Due to its invariance with respect to the features, no assumptions are needed about the target's shape or the continuity of tracking. The obtained signal is useful for the indirect evaluation of an attitude signal that feeds an unscented Kalman filter for the estimation of the global rotational state of the target. The results of the computer simulations showed good robustness of the method and its potential applicability for general motion conditions of the target.
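
    The recovery idea, a signal sparse in the frequency domain restored from samples with gaps, can be illustrated by iterative hard thresholding on the FFT. This toy sketch is generic and stands in for, rather than reproduces, the authors' compressed-sensing pipeline.

      # Restore a frequency-sparse signal from incomplete samples.
      import numpy as np

      def recover(samples, known, k=5, iters=200):
          """samples: signal with zeros at gaps; known: boolean mask."""
          x = samples.copy()
          for _ in range(iters):
              X = np.fft.fft(x)
              keep = np.argsort(np.abs(X))[::-1][:k]   # k strongest bins
              Xs = np.zeros_like(X)
              Xs[keep] = X[keep]
              x = np.real(np.fft.ifft(Xs))             # enforce sparsity
              x[known] = samples[known]                # re-impose known data
          return x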

  3. Using Combination of Planar and Height Features for Detecting Built-Up Areas from High-Resolution Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, F.; Cai, X.; Tan, W.

    2017-09-01

    Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrade the performance of built-up area detection when using planar texture, shape, and spectral features. Terrain slope and building height are often used to optimize the results, but are extracted from auxiliary data (e.g. LIDAR data, DSM). Moreover, the auxiliary data must be acquired around the same time as the image acquisition; otherwise, built-up area detection accuracy is affected. Stereo imagery incorporates both planar and height information, unlike single remotely sensed images. Stereo imagery acquired by many satellites (e.g. Worldview-4, Pleiades-HR, ALOS-PRISM, and ZY-3) can be used as a data source for identifying built-up areas. A new method for identifying built-up areas with high accuracy from stereo imagery is proposed, using a combination of planar and height features. The digital surface model (DSM) and digital orthophoto map (DOM) are first generated from the stereo images. Then, height values of above-ground objects (e.g. buildings) are calculated from the DSM and used to obtain raw built-up areas. Other raw built-up areas are obtained from the DOM using Pantex and Gabor features, respectively. Final high-accuracy built-up area results are achieved from these raw built-up areas using decision-level fusion. Experimental results show that accurate built-up areas can be achieved from stereo imagery. The height information used in the proposed method is derived from the stereo imagery itself, with no need for auxiliary height data (e.g. LIDAR data). The proposed method is suitable for spaceborne and airborne stereo pairs and triplets.

  4. Interactive stereo electron microscopy enhanced with virtual reality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; Bastacky, S. Jacob; Schwartz, Kenneth S.

    2001-12-17

    An analytical system is presented that is used to take measurements of objects perceived in stereo image pairs obtained from a scanning electron microscope (SEM). Our system operates by presenting a single stereo view that contains stereo image data obtained from the SEM, along with geometric representations of two types of virtual measurement instruments, a 'protractor' and a 'caliper'. The measurements obtained from this system are an integral part of a medical study evaluating surfactant, a liquid coating the inner surface of the lung which makes possible the process of breathing. Measurements of the curvature and contact angle of submicron diameter droplets of a fluorocarbon deposited on the surface of airways are performed in order to determine surface tension of the air/liquid interface. This approach has been extended to a microscopic level from the techniques of traditional surface science by measuring submicrometer rather than millimeter diameter droplets, as well as the lengths and curvature of cilia responsible for movement of the surfactant, the airway's protective liquid blanket. An earlier implementation of this approach for taking angle measurements from objects perceived in stereo image pairs using a virtual protractor is extended in this paper to include distance measurements and to use a unified view model. The system is built around a unified view model that is derived from microscope-specific parameters, such as focal length, visible area and magnification. The unified view model ensures that the underlying view models and resultant binocular parallax cues are consistent between synthetic and acquired imagery. When the view models are consistent, it is possible to take measurements of features that are not constrained to lie within the projection plane. The system is first calibrated using non-clinical data of known size and resolution. Using the SEM, stereo image pairs of grids and spheres of known resolution are created to calibrate the measurement system. After calibration, the system is used to take distance and angle measurements of clinical specimens.

  5. The NifTK software platform for image-guided interventions: platform overview and NiftyLink messaging.

    PubMed

    Clarkson, Matthew J; Zombori, Gergely; Thompson, Steve; Totz, Johannes; Song, Yi; Espak, Miklos; Johnsen, Stian; Hawkes, David; Ourselin, Sébastien

    2015-03-01

    To perform research in image-guided interventions, researchers need a wide variety of software components, and assembling these components into a flexible and reliable system can be a challenging task. In this paper, the NifTK software platform is presented. A key focus has been high-performance streaming of stereo laparoscopic video data, ultrasound data and tracking data simultaneously. A new messaging library called NiftyLink is introduced that uses the OpenIGTLink protocol and provides the user with easy-to-use asynchronous two-way messaging, high reliability and comprehensive error reporting. A small suite of applications called NiftyGuide has been developed, containing lightweight applications for grabbing data, currently from position trackers and ultrasound scanners. These applications use NiftyLink to stream data into NiftyIGI, which is a workstation-based application, built on top of MITK, for visualisation and user interaction. Design decisions, performance characteristics and initial applications are described in detail. NiftyLink was tested for latency when transmitting images, tracking data, and interleaved imaging and tracking data. NiftyLink can transmit tracking data at 1,024 frames per second (fps) with latency of 0.31 milliseconds, and 512 KB images with latency of 6.06 milliseconds at 32 fps. NiftyIGI was tested, receiving stereo high-definition laparoscopic video at 30 fps, tracking data from 4 rigid bodies at 20-30 fps and ultrasound data at 20 fps with rendering refresh rates between 2 and 20 Hz with no loss of user interaction. These packages form part of the NifTK platform and have proven to be successful in a variety of image-guided surgery projects. Code and documentation for the NifTK platform are available from http://www.niftk.org . NiftyLink is provided open-source under a BSD license and available from http://github.com/NifTK/NiftyLink . The code for this paper is tagged IJCARS-2014.

  6. Autonomous Rock Tracking and Acquisition from a Mars Rover

    NASA Technical Reports Server (NTRS)

    Maimone, Mark W.; Nesnas, Issa A.; Das, Hari

    1999-01-01

    Future Mars exploration missions will perform two types of experiments: science instrument placement for close-up measurement, and sample acquisition for return to Earth. In this paper we describe algorithms we developed for these tasks, and demonstrate them in field experiments using a self-contained Mars rover prototype, the Rocky 7 rover. Our algorithms perform visual servoing on an elevation map instead of image features, because the latter are subject to abrupt scale changes during the approach. This allows us to compensate for the poor odometry that results from motion on loose terrain. We demonstrate the successful grasp of a 5 cm long rock over 1 m away using 103-degree field-of-view stereo cameras, and placement of a flexible mast on a rock outcropping over 5 m away using 43-degree FOV stereo cameras.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Lei; Feng, Li; Liu, Siming

    We present a detailed study of an Earth-directed coronal mass ejection (full-halo CME) event that happened on 2011 February 15, making use of white-light observations by three coronagraphs and radio observations by Wind/WAVES. We applied three different methods to reconstruct the propagation direction and traveling distance of the CME and its driven shock. We measured the kinematics of the CME leading edge from white-light images observed by Solar Terrestrial Relations Observatory (STEREO) A and B, tracked the CME-driven shock using the frequency drift observed by Wind/WAVES together with an interplanetary density model, and obtained the equivalent scattering centers of the CME by the polarization ratio (PR) method. For the first time, we applied the PR method to different features distinguished from LASCO/C2 polarimetric observations and calculated their projections onto white-light images observed by STEREO-A and STEREO-B. By combining the graduated cylindrical shell (GCS) forward modeling with the PR method, we proposed a new GCS-PR method to derive 3D parameters of a CME observed from a single perspective at Earth. Comparisons between different methods show a good degree of consistency in the derived 3D results.

  8. KSC-06pd2391

    NASA Image and Video Library

    2006-10-25

    KENNEDY SPACE CENTER, FLA. - After the mobile service tower has rolled away, the Delta II rocket with the STEREO spacecraft at top stands alone next to the launch gantry. Liftoff is scheduled in a window between 8:38 and 8:53 p.m. on Oct. 25. STEREO (Solar Terrestrial Relations Observatory) is a two-year mission using two nearly identical observatories, one ahead of Earth in its orbit and the other trailing behind. The duo will provide 3-D measurements of the sun and its flow of energy, enabling scientists to study the nature of coronal mass ejections and why they happen. The ejections are a major source of the magnetic disruptions on Earth and are a key component of space weather. The disruptions can greatly affect satellite operations, communications, power systems, humans in space and global climate. Designed and built by the Johns Hopkins University Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Kim Shiflett

  9. ASTER DEM performance

    USGS Publications Warehouse

    Fujisada, H.; Bailey, G.B.; Kelly, Glen G.; Hara, S.; Abrams, M.J.

    2005-01-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the National Aeronautics and Space Administration's Terra spacecraft has an along-track stereoscopic capability using its near-infrared spectral band to acquire the stereo data. ASTER has two telescopes, one for nadir-viewing and another for backward-viewing, with a base-to-height ratio of 0.6. The spatial resolution is 15 m in the horizontal plane. Parameters such as the line-of-sight vectors and the pointing axis were adjusted during the initial operation period to generate Level-1 data products with high-quality stereo system performance. The evaluation of the digital elevation model (DEM) data was carried out by Japanese and U.S. science teams separately, using different DEM generation software and reference databases. The vertical accuracy of the DEM data generated from the Level-1A data is 20 m with 95% confidence, without ground control point (GCP) correction, for individual scenes. Geolocation accuracy, which is important for the DEM datasets, is better than 50 m; this appears to be limited by the spacecraft position accuracy. In addition, a slight increase in accuracy is observed when using GCPs to generate the stereo data.

  10. Robust feature tracking for endoscopic pose estimation and structure recovery

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Krappe, S.; Röhl, S.; Bodenstedt, S.; Müller-Stich, B.; Dillmann, R.

    2013-03-01

    Minimally invasive surgery is a highly complex medical discipline with several difficulties for the surgeon. To alleviate these difficulties, augmented reality can be used for intraoperative assistance. For visualization, the endoscope pose must be known which can be acquired with a SLAM (Simultaneous Localization and Mapping) approach using the endoscopic images. In this paper we focus on feature tracking for SLAM in minimally invasive surgery. Robust feature tracking and minimization of false correspondences is crucial for localizing the endoscope. As sensory input we use a stereo endoscope and evaluate different feature types in a developed SLAM framework. The accuracy of the endoscope pose estimation is validated with synthetic and ex vivo data. Furthermore we test the approach with in vivo image sequences from da Vinci interventions.

  11. An Aggregated Method for Determining Railway Defects and Obstacle Parameters

    NASA Astrophysics Data System (ADS)

    Loktev, Daniil; Loktev, Alexey; Stepanov, Roman; Pevzner, Viktor; Alenov, Kanat

    2018-03-01

    A method combining image blur analysis and stereo vision algorithms is proposed to determine the distance to objects (including external defects of railway tracks) and the speed of moving obstacles. To estimate the deviation of the distance as a function of blur, a statistical approach and logarithmic, exponential and linear standard functions are used; the statistical approach includes the least-squares method and the method of least modules. The accuracy of determining the distance to the object, its speed and its direction of movement is obtained. The paper develops a method of determining distances to objects by analyzing a series of images and assessing depth from defocus, aggregated with stereoscopic vision. This method is based on the physical dependence of the blur in the obtained image on the distance to the object and on the focal length and aperture of the lens (see the thin-lens relation below). In the calculation of the blur spot diameter, it is assumed that blur occurs at a point equally in all directions. According to the proposed approach, it is possible to determine the distance to the studied object and its blur by analyzing a series of images obtained using the video detector with different settings. The article proposes and scientifically substantiates new and improved existing methods for detecting the parameters of static and moving objects of control, and also compares the results of the use of various methods and the results of experiments. It is shown that the aggregated method gives the best approximation to the real distances.
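
    The blur-distance dependence used above can be illustrated with a standard thin-lens relation (one common form, stated for reference rather than taken from the paper): for a lens of focal length f and aperture diameter A focused at distance s_f, a point at distance s images to a blur spot of diameter approximately

      c \approx A \, \frac{f}{s_f - f} \cdot \frac{|s - s_f|}{s}

    so a measured blur diameter, together with the lens settings, constrains the object distance.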

  12. Integrated Georeferencing of Stereo Image Sequences Captured with a Stereovision Mobile Mapping System - Approaches and Practical Results

    NASA Astrophysics Data System (ADS)

    Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.

    2012-07-01

    Stereovision-based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment, which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image-based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case of the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed that allow validating and updating the INS/GNSS-based trajectory with independently estimated positions in cases of prolonged GNSS signal outages, in order to increase the georeferencing accuracy up to the project requirements.

  13. Cortical surface shift estimation using stereovision and optical flow motion tracking via projection image registration

    PubMed Central

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2014-01-01

    Stereovision is an important intraoperative imaging technique that captures the exposed parenchymal surface noninvasively during open cranial surgery. Estimating cortical surface shift efficiently and accurately is critical to compensate for brain deformation in the operating room (OR). In this study, we present an automatic and robust registration technique based on optical flow (OF) motion tracking to compensate for cortical surface displacement throughout surgery. Stereo images of the cortical surface were acquired at multiple time points after dural opening to reconstruct three-dimensional (3D) texture intensity-encoded cortical surfaces. A local coordinate system was established with its z-axis parallel to the average surface normal direction of the reconstructed cortical surface immediately after dural opening in order to produce two-dimensional (2D) projection images. A dense displacement field between the two projection images was determined directly from OF motion tracking without the need for feature identification or tracking. The starting and end points of the displacement vectors on the two cortical surfaces were then obtained following spatial mapping inversion to produce the full 3D displacement of the exposed cortical surface. We evaluated the technique with images obtained from digital phantoms and 18 surgical cases: 10 of which involved independent measurements of feature locations acquired with a tracked stylus for accuracy comparisons, and 8 others, 4 of which involved stereo image acquisitions at three or more time points during surgery to illustrate utility throughout a procedure. Results from the digital phantom images were very accurate (0.05 pixels). In the 10 surgical cases with independently digitized point locations, the average agreement between feature coordinates derived from the cortical surface reconstructions was 1.7-2.1 mm relative to those determined with the tracked stylus probe. The agreement in feature displacement tracking was also comparable to tracked probe data (difference in displacement magnitude was <1 mm on average). The average magnitude of cortical surface displacement was 7.9 ± 5.7 mm (range 0.3-24.4 mm) in all patient cases, with the displacement component along gravity being 5.2 ± 6.0 mm relative to the lateral movement of 2.4 ± 1.6 mm. Thus, our technique appears to be sufficiently accurate and computationally efficient (typically ~15 s) for applications in the OR. PMID:25077845
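
    The dense displacement step lends itself to a short sketch: any dense optical-flow estimator applied to the two projection images yields a per-pixel 2D displacement field without feature identification. OpenCV's Farneback implementation is used below purely for illustration; it is not necessarily the estimator used in the paper.

      # Dense optical flow between two 8-bit grayscale projection images.
      import cv2

      def projection_displacement(proj0, proj1):
          # Arguments after None: pyr_scale, levels, winsize, iterations,
          # poly_n, poly_sigma, flags.
          flow = cv2.calcOpticalFlowFarneback(proj0, proj1, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)
          return flow   # (H, W, 2) per-pixel displacement in pixels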

  14. The effect of monocular target blur on simulated telerobotic manipulation

    NASA Technical Reports Server (NTRS)

    Liu, Andrew; Stark, Lawrence

    1991-01-01

    A simulation involving three types of telerobotic tasks that require information about the spatial position of objects is reported. It is demonstrated that refractive errors in the helmet-mounted stereo display system can affect performance in the three types of telerobotic tasks. The results of two sets of experiments indicate that monocular target blur of two diopters or more degrades stereo display performance to the level of monocular displays, which is similar to the results of psychophysical experiments examining the effect of blur on stereoacuity. This indicates that moderate levels of visual degradation that affect the operator's stereoacuity may eliminate the performance advantage of stereo displays, and it is suggested that other psychophysical experimental results could be used to predict operator performance for other telerobotic tasks.

  15. SVM based colon polyps classifier in a wireless active stereo endoscope.

    PubMed

    Ayoub, J; Granado, B; Mhanna, Y; Romain, O

    2010-01-01

    This work focuses on the recognition of three-dimensional colon polyps captured by an active stereo vision sensor. The detection algorithm consists of an SVM classifier trained on robust feature descriptors. The study is related to Cyclope, a prototype sensor that allows real-time 3D object reconstruction and continues to be optimized to improve its classification task of differentiating between hyperplastic and adenomatous polyps. Experimental results were encouraging and show a correct classification rate of approximately 97%. The work contains detailed statistics about the detection rate and the computing complexity. Inspired by the intensity histogram, the work presents a new approach that extracts a set of features based on a depth histogram and combines stereo measurement with SVM classifiers to correctly classify benign and malignant polyps.
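
    The classification stage pairs a depth-histogram feature vector with an SVM. The sketch below shows that pairing with scikit-learn; the bin count, depth range, and labels are assumptions, not the Cyclope pipeline's actual settings.

      # Depth-histogram features into an SVM classifier (illustrative).
      import numpy as np
      from sklearn.svm import SVC

      def depth_histogram(depth_patch, bins=32, rng=(0.0, 50.0)):
          """Normalised histogram of a reconstructed depth patch."""
          h, _ = np.histogram(depth_patch, bins=bins, range=rng, density=True)
          return h

      # X: stacked histograms; y: 0 = hyperplastic, 1 = adenomatous (assumed).
      clf = SVC(kernel="rbf", C=1.0, gamma="scale")
      # clf.fit(X_train, y_train); clf.predict(X_test)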

  16. The Effect of Shadow Area on SGM Algorithm and Disparity Map Refinement from High Resolution Satellite Stereo Images

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.

    2017-09-01

    Semi-Global Matching (SGM) is known as a high-performance and reliable stereo matching algorithm in the photogrammetry community. However, there are some challenges in using this algorithm, especially for high-resolution satellite stereo images over urban areas and images with shadow areas. In particular, the SGM algorithm computes highly noisy disparity values in shadow areas around tall buildings due to mismatching in these low-entropy areas. In this paper, a new method is developed to refine the disparity map in shadow areas. The method integrates panchromatic and multispectral image data to detect shadow areas at the object level. In addition, RANSAC plane fitting and morphological filtering are employed to refine the disparity map. The results on a GeoEye-1 stereo pair captured over Qom city in Iran show a significant increase in the rate of matched pixels compared to the standard SGM algorithm.
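
    The RANSAC plane-fitting ingredient can be sketched directly: sample three disparity points, fit the plane d = a*x + b*y + c, and keep the hypothesis with the most inliers; shadow pixels can then be re-assigned the plane's disparity. Thresholds below are illustrative, not the paper's settings.

      # RANSAC plane fit over (x, y, disparity) points.
      import numpy as np

      def ransac_plane(pts, iters=500, tol=1.0):
          """pts: (N, 3) array of (x, y, disparity) samples."""
          rng = np.random.default_rng(0)
          best, best_in = None, None
          for _ in range(iters):
              s = pts[rng.choice(len(pts), 3, replace=False)]
              A = np.column_stack([s[:, :2], np.ones(3)])
              try:
                  abc = np.linalg.solve(A, s[:, 2])    # plane coefficients
              except np.linalg.LinAlgError:
                  continue                             # degenerate sample
              resid = np.abs(pts[:, :2] @ abc[:2] + abc[2] - pts[:, 2])
              inliers = resid < tol
              if best is None or inliers.sum() > best_in.sum():
                  best, best_in = abc, inliers
          return best, best_in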

  17. Humanoid monocular stereo measuring system with two degrees of freedom using bionic optical imaging system

    NASA Astrophysics Data System (ADS)

    Du, Jia-Wei; Wang, Xuan-Yin; Zhu, Shi-Qiang

    2017-10-01

    Based on the process by which the spatial depth cue is obtained by a single eye, a monocular stereo vision method to measure the depth information of spatial objects was proposed in this paper, and a humanoid monocular stereo measuring system with two degrees of freedom was demonstrated. The proposed system can effectively obtain the three-dimensional (3-D) structure of spatial objects at different distances without changing the position of the system, and has the advantages of being exquisite, smart, and flexible. The bionic optical imaging system we proposed in a previous paper, named ZJU SY-I, was employed; its vision characteristic mimics the resolution decay of the eye's vision from center to periphery. We simplified the eye's rotation in the eye socket and the coordinated rotation of other organs of the body into two rotations in orthogonal directions, and employed a rotating platform with two rotational degrees of freedom to drive ZJU SY-I. The structure of the proposed system is described in detail. The depth of a single feature point on the spatial object was deduced, as well as its spatial coordinates. With the focal length adjustment of ZJU SY-I and the rotation control of the rotating platform, the spatial coordinates of all feature points on the spatial object could be obtained and then the 3-D structure of the spatial object could be reconstructed. 3-D structure measurement experiments on two spatial objects with different distances and sizes were conducted. Some main factors affecting the measurement accuracy of the proposed system are analyzed and discussed.

  18. Augmented reality system for CT-guided interventions: system description and initial phantom trials

    NASA Astrophysics Data System (ADS)

    Sauer, Frank; Schoepf, Uwe J.; Khamene, Ali; Vogt, Sebastian; Das, Marco; Silverman, Stuart G.

    2003-05-01

    We are developing an augmented reality (AR) image guidance system, in which information derived from medical images is overlaid onto a video view of the patient. The interventionalist wears a head-mounted display (HMD) that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture the stereo view of the scene. A third video camera, operating in the near IR, is also attached to the HMD and is used for head tracking. The system achieves real-time performance of 30 frames per second. The graphics appears firmly anchored in the scene, without any noticeable swimming, jitter or time lag. For the application of CT-guided interventions, we extended our original prototype system to include tracking of a biopsy needle to which we attached a set of optical markers. The AR visualization provides very intuitive guidance for planning and placement of the needle and reduces radiation to patient and radiologist. We used an interventional abdominal phantom with simulated liver lesions to perform an initial set of experiments. The users were consistently able to locate the target lesion with the first needle pass. These results provide encouragement to move the system towards clinical trials.

  19. MRO CTX-based Digital Terrain Models

    NASA Astrophysics Data System (ADS)

    Dumke, Alexander

    2016-04-01

    In planetary surface sciences, digital terrain models (DTM) are paramount when it comes to understanding and quantifying processes. In this contribution, an approach for the derivation of digital terrain models from stereo images of the NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) is described. CTX consists of a 350 mm focal length telescope with 5000 CCD sensor elements and is operated as a pushbroom camera. It acquires images at ~6 m/px over a swath width of ~30 km of the Mars surface [1]. Today, several approaches for the derivation of CTX DTMs exist [e.g. 2, 3, 4]. The approach discussed here combines established software with proprietary software as described below. The main processing chain for the derivation of CTX stereo DTMs consists of six steps: (1) First, CTX images are radiometrically corrected using the ISIS software package [5]. (2) For selected CTX stereo images, exterior orientation data are extracted from reconstructed NAIF SPICE data [6]. (3) In the next step, High Resolution Stereo Camera (HRSC) DTMs [7, 8, 9] are used for the rectification of CTX stereo images to reduce the search area during image matching. Here, HRSC DTMs are used due to their higher spatial resolution compared to MOLA DTMs. (4) The determination of coordinates of homologous points between stereo images, i.e. the stereo image matching process, consists of two steps: first, a cross-correlation to obtain approximate values, and second, their use in a least-squares matching (LSM) process in order to obtain subpixel positions. (5) The stereo matching results are then used to generate object points from forward ray intersections. (6) As a last step, the DTM raster generation is performed using software developed at the German Aerospace Center, Berlin, whereby only object points whose error is below a threshold value are used. References: [1] Malin, M. C. et al., 2007, JGR 112, doi:10.1029/2006JE002808 [2] Broxton, M. J. et al., 2008, LPSC XXXIX, Abstract#2419 [3] Yershov, V. et al., 2015, EPSC 10, EPSC2015-343 [4] Kim, J. R. et al., 2013, EPS 65, 799-809 [5] https://isis.astrogeology.usgs.gov/index.html [6] http://naif.jpl.nasa.gov/naif/index.html [7] Gwinner et al., 2010, EPS 294, 543-540 [8] Gwinner et al., 2015, PSS [9] Dumke, A. et al., 2008, ISPRS, 37, Part B4, 1037-1042
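
    Step (4) can be illustrated in a few lines: an integer-pixel search by normalized cross-correlation along the (rectified) scanline, followed by a sub-pixel refinement. For brevity, the sketch refines with a parabola fit over the correlation peak, whereas the pipeline described above uses least-squares matching for the sub-pixel stage; window and search sizes are illustrative.

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equally sized patches."""
        a, b = a - a.mean(), b - b.mean()
        den = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / den if den > 0 else 0.0

    def match_point(left, right, y, x, half=7, search=64):
        """Find the right-image column matching left-image pixel (y, x)."""
        tpl = left[y - half:y + half + 1, x - half:x + half + 1]
        x0 = max(half, x - search)                 # leftmost candidate column
        scores = np.array([ncc(tpl, right[y - half:y + half + 1,
                                          xr - half:xr + half + 1])
                           for xr in range(x0, x + 1)])
        i = int(scores.argmax())
        di = 0.0
        if 0 < i < len(scores) - 1:                # parabolic sub-pixel peak
            s0, s1, s2 = scores[i - 1], scores[i], scores[i + 1]
            di = 0.5 * (s0 - s2) / (s0 - 2.0 * s1 + s2)
        return x0 + i + di                         # sub-pixel column
    ```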

  20. A verification and errors analysis of the model for object positioning based on binocular stereo vision for airport surface surveillance

    NASA Astrophysics Data System (ADS)

    Wang, Huan-huan; Wang, Jian; Liu, Feng; Cao, Hai-juan; Wang, Xiang-jun

    2014-12-01

    A test environment is established to obtain experimental data for verifying the positioning model that was derived previously from the pinhole imaging model and the theory of binocular stereo vision measurement. The model requires that the optical axes of the two cameras meet at one point, which is defined as the origin of the world coordinate system, thus simplifying and optimizing the positioning model. The experimental data are processed, and tables and charts are given comparing object positions measured with DGPS (with a measurement accuracy of 10 centimeters) as the reference against those measured with the positioning model. Sources of error in the visual measurement model are analyzed, and the effects of errors in camera and system parameters on the accuracy of the positioning model are probed, based on error transfer and synthesis rules. The conclusion is drawn that the measurement accuracy of surface surveillance based on binocular stereo vision is better than that of surface movement radars, ADS-B (Automatic Dependent Surveillance-Broadcast), and MLAT (Multilateration).

  1. Extracting Semantic Building Models from Aerial Stereo Images and Conversion to Citygml

    NASA Astrophysics Data System (ADS)

    Sengul, A.

    2012-07-01

    The collection of geographic data is of primary importance for the creation and maintenance of a GIS. Traditionally, the acquisition of 3D information has been the task of photogrammetry using aerial stereo images. Digital photogrammetric systems employ sophisticated software to extract digital terrain models or to plot 3D objects. The demand for 3D city models leads to new applications and new standards. City Geography Markup Language (CityGML), a concept for the modelling and exchange of 3D city and landscape models, defines the classes and relations for the most relevant topographic objects in city and regional models with respect to their geometric, topological, and semantic properties. It is now increasingly accepted, since it fulfils the prerequisites required e.g. for risk analysis, urban planning, and simulations. There is a need to include existing 3D information derived from photogrammetric processes in CityGML databases. In order to fill this gap, this paper reports on a framework transferring data plotted with Erdas LPS and Stereo Analyst for ArcGIS software to CityGML using Safe Software's Feature Manipulation Engine (FME).

  2. Stereo-Based Region-Growing using String Matching

    NASA Technical Reports Server (NTRS)

    Mandelbaum, Robert; Mintz, Max

    1995-01-01

    We present a novel stereo algorithm based on a coarse texture segmentation preprocessing phase. Matching is performed using a string comparison. Matching sub-strings correspond to matching sequences of textures. Inter-scanline clustering of matching sub-strings yields regions of matching texture. The shape of these regions yields information concerning an object's height, width, and azimuthal position relative to the camera pair. Hence, rather than the standard dense depth map, the output of this algorithm is a segmentation of the objects in the scene. Such a format is useful for the integration of stereo with other sensor modalities on a mobile robotic platform. It is also useful for localization; the height and width of a detected object may be used for landmark recognition, while depth and relative azimuthal location determine pose. The algorithm does not rely on the monotonicity of the order of image primitives. Occlusions, exposures, and foreshortening effects are not problematic. The algorithm can deal with certain types of transparencies. It is computationally efficient and very amenable to parallel implementation. Further, the epipolar constraints may be relaxed to some small but significant degree. A version of the algorithm has been implemented and tested on various types of images. It performs best on random dot stereograms, on images with easily filtered backgrounds (as in synthetic images), and on real scenes with uncontrived backgrounds.
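
    The scanline matching idea can be illustrated with Python's difflib: encode each scanline as a string of coarse texture labels and extract the matching sub-strings, whose horizontal shift plays the role of a region disparity. This is a toy stand-in for the paper's string comparison, not its actual implementation.

    ```python
    from difflib import SequenceMatcher

    def matching_texture_runs(left_labels, right_labels, min_len=3):
        """Matching sub-strings of per-pixel texture labels on one scanline."""
        sm = SequenceMatcher(a=left_labels, b=right_labels, autojunk=False)
        runs = []
        for m in sm.get_matching_blocks():
            if m.size >= min_len:
                runs.append({"left_start": m.a, "right_start": m.b,
                             "length": m.size,
                             "disparity": m.a - m.b})   # horizontal shift
        return runs

    # one scanline of each image, texture classes encoded as characters
    print(matching_texture_runs("aaabbbcccaaa", "xbbbcccaaazz"))
    ```

    Clustering such runs across neighbouring scanlines then yields the region-level output described above.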

  3. An automatic eye detection and tracking technique for stereo video sequences

    NASA Astrophysics Data System (ADS)

    Paduru, Anirudh; Charalampidis, Dimitrios; Fouts, Brandon; Jovanovich, Kim

    2009-05-01

    Human-computer interfacing (HCI) describes a system or process with which two information processors, namely a human and a computer, attempt to exchange information. Computer-to-human (CtH) information transfer has been relatively effective through visual displays and sound devices. On the other hand, the human-to-computer (HtC) interfacing avenue has yet to reach its full potential. For instance, the most common HtC communication means are the keyboard and mouse, which are already becoming a bottleneck in the effective transfer of information. The solution to this problem is the development of algorithms that allow the computer to understand human intentions based on facial expressions, head motion patterns, and speech. In this work, we investigate the feasibility of a stereo system to effectively determine head position, including the head rotation angles, based on the detection of the eye pupils.

  4. Practical low-cost stereo head-mounted display

    NASA Astrophysics Data System (ADS)

    Pausch, Randy; Dwivedi, Pramod; Long, Allan C., Jr.

    1991-08-01

    A high-resolution head-mounted display has been developed from substantially cheaper components than previous systems. The displays provide 720 by 280 monochrome pixels to each eye in a one-inch-square region positioned approximately one inch from each eye. The display hardware is the Private Eye, manufactured by Reflection Technologies, Inc. The tracking system uses the Polhemus Isotrak, providing (x, y, z, azimuth, elevation, and roll) information on the user's head position and orientation 60 times per second. In combination with a modified Nintendo Power Glove, this system provides a full-functionality virtual reality/simulation system. Using two host 80386 computers, real-time wire-frame images can be produced. Other virtual reality systems require roughly $250,000 in hardware, while this one requires only $5,000. Stereo is particularly useful for this system because shading or occlusion cannot be used as depth cues.

  5. The three-dimensional analysis of hinode polar jets using images from LASCO C2, the STEREO COR2 coronagraphs, and SMEI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, H.-S.; Jackson, B. V.; Buffington, A.

    2014-04-01

    Images recorded by the X-ray Telescope on board the Hinode spacecraft are used to provide high-cadence observations of solar jetting activity. A selection of the brightest of these polar jets shows a positive correlation with high-speed responses traced into the interplanetary medium. LASCO C2 and STEREO COR2 coronagraph images measure the coronal response to some of the largest jets, and also the nearby background solar wind velocity, thereby giving a determination of their speeds that we compare with Hinode observations. When using the full Solar Mass Ejection Imager (SMEI) data set, we track these same high-speed solar jet responses into the inner heliosphere and from these analyses determine their mass, flow energies, and the extent to which they retain their identity at large solar distances.

  6. The cylindrical GEM detector of the KLOE-2 experiment

    NASA Astrophysics Data System (ADS)

    Bencivenni, G.; Branchini, P.; Ciambrone, P.; Czerwinski, E.; De Lucia, E.; Di Cicco, A.; Domenici, D.; Felici, G.; Fermani, P.; Morello, G.

    2017-07-01

    The KLOE-2 experiment started its data taking campaign in November 2014 with an upgraded tracking system at the DAΦNE electron-positron collider at the Frascati National Laboratory of INFN. The new tracking device, the Inner Tracker, operated together with the KLOE-2 Drift Chamber, has been installed to improve the track and vertex reconstruction capabilities of the experimental apparatus. The Inner Tracker is a cylindrical GEM detector composed of four cylindrical triple-GEM detectors, each provided with an X-V strip-pad stereo readout. Although GEM detectors are already used in high energy physics experiments, this device is considered a frontier detector due to its fully cylindrical geometry: KLOE-2 is the first experiment benefiting from this novel detector technology. The alignment and calibration of this detector are presented together with its operating performance and reconstruction capabilities.

  7. A neural network z-vertex trigger for Belle II

    NASA Astrophysics Data System (ADS)

    Neuhaus, S.; Skambraks, S.; Abudinen, F.; Chen, Y.; Feindt, M.; Frühwirth, R.; Heck, M.; Kiesling, C.; Knoll, A.; Paul, S.; Schieck, J.

    2015-05-01

    We present the concept of a track trigger for the Belle II experiment, based on a neural network approach, that is able to reconstruct the z (longitudinal) position of the event vertex within the latency of the first level trigger. The trigger will thus be able to suppress a large fraction of the dominating background from events outside of the interaction region. The trigger uses the drift time information of the hits from the Central Drift Chamber (CDC) of Belle II within narrow cones in polar and azimuthal angle as well as in transverse momentum (sectors), and estimates the z-vertex without explicit track reconstruction. The preprocessing for the track trigger is based on the track information provided by the standard CDC trigger. It takes input from the 2D (r-φ) track finder, adds information from the stereo wires of the CDC, and finds the appropriate sectors in the CDC for each track in a given event. Within each sector, the z-vertex of the associated track is estimated by a specialized neural network, with a continuous output corresponding to the scaled z-vertex. The input values for the neural network are calculated from the wire hits of the CDC.
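
    A toy sketch of the per-sector estimator follows: a small feed-forward network with a continuous output regressing the scaled z-vertex from fixed-length hit inputs. The training data here are synthetic and the layer sizes are illustrative, not the trigger's actual configuration or its hardware implementation.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # stand-in for sector-wise training data: one row of scaled drift-time
    # inputs per track, target = scaled z-vertex of that track
    n_tracks, n_inputs = 5000, 27
    X = rng.uniform(-1.0, 1.0, size=(n_tracks, n_inputs))
    z = np.tanh(X @ rng.normal(size=n_inputs))      # synthetic ground truth

    net = MLPRegressor(hidden_layer_sizes=(81,), activation="tanh",
                       max_iter=1000, random_state=0)
    net.fit(X[:4000], z[:4000])
    rms = np.sqrt(np.mean((net.predict(X[4000:]) - z[4000:]) ** 2))
    print(f"toy z resolution (RMS, scaled units): {rms:.3f}")
    ```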

  8. Changes in quantitative 3D shape features of the optic nerve head associated with age

    NASA Astrophysics Data System (ADS)

    Christopher, Mark; Tang, Li; Fingert, John H.; Scheetz, Todd E.; Abramoff, Michael D.

    2013-02-01

    Optic nerve head (ONH) structure is an important biological feature of the eye used by clinicians to diagnose and monitor progression of diseases such as glaucoma. ONH structure is commonly examined using stereo fundus imaging or optical coherence tomography. Stereo fundus imaging provides stereo views of the ONH that retain 3D information useful for characterizing structure. In order to quantify 3D ONH structure, we applied a stereo correspondence algorithm to a set of stereo fundus images. Using these quantitative 3D ONH structure measurements, eigen structures were derived using principal component analysis from stereo images of 565 subjects from the Ocular Hypertension Treatment Study (OHTS). To evaluate the usefulness of the eigen structures, we explored associations with the demographic variables age, gender, and race. Using regression analysis, the eigen structures were found to have significant (p < 0.05) associations with both age and race after Bonferroni correction. In addition, classifiers were constructed to predict the demographic variables based solely on the eigen structures. These classifiers achieved an area under the receiver operating characteristic curve of 0.62 in predicting a binary age variable, 0.52 in predicting gender, and 0.67 in predicting race. The use of objective, quantitative features or eigen structures can reveal hidden relationships between ONH structure and demographics. The use of these features could similarly allow specific aspects of ONH structure to be isolated and associated with the diagnosis of glaucoma, disease progression and outcomes, and genetic factors.
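
    The analysis pipeline (PCA to obtain eigen structure scores, then classifiers evaluated by cross-validated AUC) can be sketched with scikit-learn. The data below are random stand-ins with the study's sample size; the component count and classifier choice are assumptions, not the authors' exact configuration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    depth_maps = rng.normal(size=(565, 32 * 32))  # stand-in ONH depth maps
    age_binary = rng.integers(0, 2, size=565)     # stand-in demographic label

    # "eigen structures": principal components of the 3D structure measurements
    scores = PCA(n_components=20).fit_transform(depth_maps)

    proba = cross_val_predict(LogisticRegression(max_iter=1000), scores,
                              age_binary, cv=5, method="predict_proba")[:, 1]
    print("cross-validated AUC:", roc_auc_score(age_binary, proba))
    ```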

  9. Automated transient detection in the STEREO Heliospheric Imagers.

    NASA Astrophysics Data System (ADS)

    Barnard, Luke; Scott, Chris; Owens, Mat; Lockwood, Mike; Tucker-Hood, Kim; Davies, Jackie

    2014-05-01

    Since the launch of the twin STEREO satellites, the heliospheric imagers (HI) have been used, with good results, in tracking transients of solar origin, such as Coronal Mass Ejections (CMEs), out far into the heliosphere. A frequently used approach is to build a "J-map", in which multiple elongation profiles along a constant position angle are stacked in time, building an image in which radially propagating transients form curved tracks. From this, the time-elongation profile of a solar transient can be manually identified. This is a time-consuming and laborious process, and the results are subjective, depending on the skill and expertise of the investigator. Therefore, it is desirable to develop an automated algorithm for the detection and tracking of the transient features observed in HI data. This is to some extent previously covered ground, as similar problems have been encountered in the analysis of coronagraph data and have led to the development of products such as CACTus. We present the results of our investigation into the automated detection of solar transients observed in J-maps formed from HI data. We use edge and line detection methods to identify transients in the J-maps, and then use kinematic models of solar transient propagation (such as the fixed-phi and harmonic mean geometric models) to estimate the solar transient's properties, such as speed and propagation direction, from the time-elongation profile. The effectiveness of this process is assessed by comparing our results with a set of manually identified CMEs, extracted and analysed by the Solar Stormwatch project. Solar Stormwatch is a citizen science project in which solar transients are identified in J-maps formed from HI data and tracked multiple times by different users. This allows the calculation of a consensus time-elongation profile for each event, and therefore does not suffer from the potential subjectivity of an individual researcher tracking an event. Furthermore, we present preliminary results regarding the estimation of the ambient solar wind speed from the automated analysis of the HI J-maps, by tracking the numerous small-scale features entrained in the ambient solar wind, which can only be tracked out to small elongations.
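
    As an illustration of the model-fitting step, the sketch below fits the fixed-phi kinematic model, in which a point transient moves radially at constant speed v at a fixed angle phi from the observer-Sun line, to a synthetic time-elongation track; tan(elongation) = r sin(phi) / (d - r cos(phi)) is the standard fixed-phi geometry. The observer distance, noise level, and track itself are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    AU = 1.496e8            # km
    d_obs = 0.96 * AU       # heliocentric distance of the observer (assumed)

    def fixed_phi_elongation(t, v, phi, t0):
        """Elongation (rad) vs. time for a transient moving radially at
        constant speed v (km/s), at angle phi (rad) from the Sun-observer line."""
        r = v * np.clip(t - t0, 0.0, None)
        return np.arctan2(r * np.sin(phi), d_obs - r * np.cos(phi))

    # synthetic time-elongation track, standing in for one lifted from a J-map
    t = np.linspace(0.0, 48 * 3600.0, 60)
    eps = fixed_phi_elongation(t, 450.0, np.deg2rad(60.0), 3600.0)
    eps += np.random.default_rng(2).normal(0.0, 2e-3, t.size)

    (v, phi, t0), _ = curve_fit(fixed_phi_elongation, t, eps,
                                p0=(400.0, np.deg2rad(45.0), 0.0))
    print(f"speed {v:.0f} km/s, direction {np.rad2deg(phi):.1f} deg")
    ```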

  10. Tracking Prominence Eruptions to 1 AU with STEREO Heliospheric Imaging

    NASA Astrophysics Data System (ADS)

    Wood, B. E.; Howard, R.; Linton, M.

    2015-12-01

    It is rare for prominence eruptions to be observable far from the Sun in the inner heliosphere, either in imaging or with in situ plasma instruments. Nevertheless, we here discuss two examples of particularly bright eruptions that are continuously trackable all the way to 1 AU by imagers on the Solar TErrestrial RElations Observatory (STEREO) spacecraft. The two events are from 2011 June 7 and 2012 August 31. Only these two examples of clear prominence eruptions observable this far from the Sun could be found in the STEREO 2007-2014 image database, consistent with the rarity of unambiguous cold prominence material being observed in situ at 1 AU. Full 3-D reconstructions are made of the coronal mass ejections (CMEs) that accompany the prominence eruptions. For the 2011 June event, a time-dependent 3-D reconstruction of the prominence structure is made using point-by-point triangulation, which unfortunately is not possible for the August event due to a poor viewing geometry. However, for the 2012 August event, shock normals computed from plasma measurements at STEREO-B and Wind using the shock jump conditions agree well with expectations from the image-based CME reconstruction. Unlike its accompanying CME, the 2011 June prominence exhibits little deceleration from the Sun to 1 AU, consequently moving upward within the CME. Detailed analysis of the prominence's expansion reveals that the deviation from self-similar expansion is never large, but close to the Sun the prominence expands somewhat more rapidly than self-similarly, with this effect decreasing with time.

  11. Spirit's View Beside 'Home Plate' on Sol 1823 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11971 (figures removed for brevity; see original site).

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,823rd Martian day, or sol, of Spirit's surface mission (Feb. 17, 2009).

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The center of the view is toward the south-southwest.

    The rover had driven 7 meters (23 feet) eastward earlier on Sol 1823, part of maneuvering to get Spirit into a favorable position for climbing onto the low plateau called 'Home Plate.' However, after two driving attempts with negligible progress during the following three sols, the rover team changed its strategy for getting to destinations south of Home Plate. The team decided to drive Spirit at least partway around Home Plate, instead of ascending the northern edge and taking a shorter route across the top of the plateau.

    Layered rocks forming part of the northern edge of Home Plate can be seen near the center of the image. Rover wheel tracks are visible at the lower edge.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  12. Synergy of stereo cloud top height and ORAC optimal estimation cloud retrieval: evaluation and application to AATSR

    NASA Astrophysics Data System (ADS)

    Fisher, Daniel; Poulsen, Caroline A.; Thomas, Gareth E.; Muller, Jan-Peter

    2016-03-01

    In this paper we evaluate the impact on the cloud parameter retrievals of the ORAC (Optimal Retrieval of Aerosol and Cloud) algorithm of including stereo-derived cloud top heights as a priori information. This is performed in a mathematically rigorous way using the ORAC optimal estimation retrieval framework, which includes the facility to use such independent a priori information. Key to the use of a priori information is a characterisation of its associated uncertainty. This paper demonstrates the improvements that are possible using this approach and also considers their impact on the retrieved microphysical cloud parameters. The Along-Track Scanning Radiometer (AATSR) instrument has two views and three thermal channels, so it is well placed to demonstrate the synergy of the two techniques. The stereo retrieval is able to improve the accuracy of the retrieved cloud top height when compared to collocated Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO), particularly in the presence of boundary layer inversions and high clouds. The impact of the stereo a priori information on the microphysical cloud properties of cloud optical thickness (COT) and effective radius (RE) was evaluated and generally found to be very small for single-layer cloud conditions over open water (mean RE differences of 2.2 (±5.9) microns and mean COT differences of 0.5 (±1.8) for single-layer ice clouds over open water at elevations above 9 km, which are most strongly affected by the inclusion of the a priori).
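
    The role of the stereo height as a priori information can be illustrated with one linear optimal-estimation update, in which the prior and the measurements are weighted by their inverse covariances. Everything below is a toy: the three-element state, the linearized forward model K, and all covariances are invented for illustration and bear no relation to ORAC's actual forward model.

    ```python
    import numpy as np

    def oe_update(x_a, S_a, y, K, S_e):
        """Linear optimal-estimation step:
        x_hat = x_a + S_hat K^T S_e^-1 (y - K x_a),
        with posterior covariance S_hat = (K^T S_e^-1 K + S_a^-1)^-1."""
        S_e_inv = np.linalg.inv(S_e)
        S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
        return x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a), S_hat

    # state: [cloud-top height (km), log10 COT, effective radius (um)]
    x_a = np.array([9.5, 0.8, 12.0])            # stereo height enters the prior
    S_a = np.diag([0.5**2, 1.0**2, 8.0**2])     # tight height, loose microphysics
    K = np.array([[-2.0, 5.0, 0.1],             # toy linearized forward model
                  [-1.5, 4.0, -0.3],
                  [-0.5, 1.0, 0.8]])
    y = K @ np.array([10.0, 1.0, 10.0])         # synthetic radiances
    S_e = np.eye(3) * 0.2**2
    print(oe_update(x_a, S_a, y, K, S_e)[0])
    ```

    A tight prior variance on the height (here 0.5 km squared) is how a well-characterised stereo height constrains the retrieval without fixing it outright.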

  13. Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps

    PubMed Central

    Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi

    2015-01-01

    Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in richly textured regions and at object boundaries, where the depth sensor often fails. We fuse stereo matching and the depth sensor, using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel's scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation because it treats information obtained from the depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the average error rate of 3.27% of the previous state-of-the-art methods, our method provides an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other "fused" algorithms in terms of precision. PMID:26308003

  14. Stereo-hologram in discrete depth of field (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lee, Kwanghoon; Park, Min-Chul

    2017-05-01

    In holographic space, a continuous object space can be divided into several discrete spaces, each satisfying the same depth of field (DoF). In wearable holographic devices in particular, this concept can be applied to the macroscopic field, in contrast to the field of microscopy. Since the former does not need high depth resolution (the perceiving power of the eye in the human visual system, which distinguishes clearly among objects in depth, is lower than the optical power available in microscopy), a continuous but discrete depth of field (DDoF) can represent the whole object space with a number of planes sampled according to their DoF. Each DoF plane has to account for occlusions among object areas in its region, so as to reproduce the occlusion phenomena induced along the visual axis within the eye's field of view. This yields a natural scene in the recognition process even though the combined, discontinuous DDoF regions stand in for the continuous object space. The DDoF approach thus offers advantages such as reduced computation time in generating and reconstructing the hologram. This work deals mainly with the factors required in a stereo-hologram head-mounted display (HMD), such as the stereoscopic DoF according to convergence, the least number of DDoF planes under normal viewing circumstances (within 10,000 mm), and the time saved across the whole holographic process by our method compared to existing ones. Consequently, this approach can be applied directly to the stereo-hologram HMD field to realize real-time holographic imaging.

  15. Multiple-camera/motion stereoscopy for range estimation in helicopter flight

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.

    1993-01-01

    Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success solving this problem using an image sequence from a single moving camera. The major limitations of single-camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding the loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm is illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
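
    A minimal, generic version of the recursive range estimation step is sketched below: an extended Kalman filter whose state is a feature's lateral position and inverse depth, updated from pixel measurements taken while the camera translates with known motion. This illustrates the technique only; the focal length, noise levels, and one-axis motion are assumptions, not the paper's filter.

    ```python
    import numpy as np

    F_PX = 800.0    # focal length in pixels (assumed)

    def ekf_range(us, cam_xs, x0, P0, r_meas=1.0):
        """EKF over state [feature lateral position X, inverse depth 1/Z].
        The feature is static, so the prediction step is the identity;
        cam_xs are the known camera positions along one axis."""
        x, P = np.asarray(x0, float), np.asarray(P0, float)
        for u, cx in zip(us, cam_xs):
            X, inv_z = x
            h = F_PX * (X - cx) * inv_z                     # predicted pixel
            H = np.array([F_PX * inv_z, F_PX * (X - cx)])   # Jacobian of h
            S = H @ P @ H + r_meas                          # innovation variance
            K = P @ H / S                                   # Kalman gain
            x = x + K * (u - h)
            P = P - np.outer(K, H @ P)
        return x, P

    # feature at X = 2 m, Z = 20 m; camera slides from 0 to 2 m in 20 steps
    cam_xs = np.linspace(0.0, 2.0, 20)
    us = F_PX * (2.0 - cam_xs) / 20.0 + np.random.default_rng(3).normal(0, 1.0, 20)
    x, _ = ekf_range(us, cam_xs, x0=[1.0, 1.0 / 15.0], P0=np.diag([4.0, 0.01]))
    print(f"estimated depth: {1.0 / x[1]:.1f} m")
    ```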

  16. Terminator Disparity Contributes to Stereo Matching for Eye Movements and Perception

    PubMed Central

    Optican, Lance M.; Cumming, Bruce G.

    2013-01-01

    In the context of motion detection, the endings (or terminators) of 1-D features can be detected as 2-D features, affecting the perceived direction of motion of the 1-D features (the barber-pole illusion) and the direction of tracking eye movements. In the realm of binocular disparity processing, an equivalent role for the disparity of terminators has not been established. Here we explore the stereo analogy of the barber-pole stimulus, applying disparity to a 1-D noise stimulus seen through an elongated, zero-disparity, aperture. We found that, in human subjects, these stimuli induce robust short-latency reflexive vergence eye movements, initially in the direction orthogonal to the 1-D features, but shortly thereafter in the direction predicted by the disparity of the terminators. In addition, these same stimuli induce vivid depth percepts, which can only be attributed to the disparity of line terminators. When the 1-D noise patterns are given opposite contrast in the two eyes (anticorrelation), both components of the vergence response reverse sign. Finally, terminators drive vergence even when the aperture is defined by a texture (as opposed to a contrast) boundary. These findings prove that terminators contribute to stereo matching, and constrain the type of neuronal mechanisms that might be responsible for the detection of terminator disparity. PMID:24285893

  17. Terminator disparity contributes to stereo matching for eye movements and perception.

    PubMed

    Quaia, Christian; Optican, Lance M; Cumming, Bruce G

    2013-11-27

    In the context of motion detection, the endings (or terminators) of 1-D features can be detected as 2-D features, affecting the perceived direction of motion of the 1-D features (the barber-pole illusion) and the direction of tracking eye movements. In the realm of binocular disparity processing, an equivalent role for the disparity of terminators has not been established. Here we explore the stereo analogy of the barber-pole stimulus, applying disparity to a 1-D noise stimulus seen through an elongated, zero-disparity, aperture. We found that, in human subjects, these stimuli induce robust short-latency reflexive vergence eye movements, initially in the direction orthogonal to the 1-D features, but shortly thereafter in the direction predicted by the disparity of the terminators. In addition, these same stimuli induce vivid depth percepts, which can only be attributed to the disparity of line terminators. When the 1-D noise patterns are given opposite contrast in the two eyes (anticorrelation), both components of the vergence response reverse sign. Finally, terminators drive vergence even when the aperture is defined by a texture (as opposed to a contrast) boundary. These findings prove that terminators contribute to stereo matching, and constrain the type of neuronal mechanisms that might be responsible for the detection of terminator disparity.

  18. Validation of "AW3D" Global Dsm Generated from Alos Prism

    NASA Astrophysics Data System (ADS)

    Takaku, Junichi; Tadono, Takeo; Tsutsui, Ken; Ichikawa, Mayumi

    2016-06-01

    The Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM), one of the onboard sensors carried by the Advanced Land Observing Satellite (ALOS), was designed to generate worldwide topographic data through its optical stereoscopic observations. It has the exclusive ability to perform triplet stereo observations, viewing forward, nadir, and backward along the satellite track at 2.5 m ground resolution, and it collected images all over the world during the satellite's mission life from 2006 through 2011. A new project, which generates global elevation datasets from the image archives, was started in 2014. The data are processed at an unprecedented 5 m grid spacing, utilizing the original triplet stereo images at 2.5 m resolution. As the volume of processed data grows steadily, with the global land areas now almost covered, trends in global data quality have become apparent. This paper reports up-to-date validation results for the accuracy of the data products as well as the status of data coverage in global areas. The accuracies and error characteristics of the datasets are analyzed by comparison with existing global datasets such as Ice, Cloud, and land Elevation Satellite (ICESat) data, as well as ground control points (GCPs) and reference Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR).

  19. Clinical study of quantitative diagnosis of early cervical cancer based on the classification of acetowhitening kinetics

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Cheung, Tak-Hong; Yim, So-Fan; Qu, Jianan Y.

    2010-03-01

    A quantitative colposcopic imaging system for the diagnosis of early cervical cancer is evaluated in a clinical study. This imaging technology based on 3-D active stereo vision and motion tracking extracts diagnostic information from the kinetics of acetowhitening process measured from the cervix of human subjects in vivo. Acetowhitening kinetics measured from 137 cervical sites of 57 subjects are analyzed and classified using multivariate statistical algorithms. Cross-validation methods are used to evaluate the performance of the diagnostic algorithms. The results show that an algorithm for screening precancer produced 95% sensitivity (SE) and 96% specificity (SP) for discriminating normal and human papillomavirus (HPV)-infected tissues from cervical intraepithelial neoplasia (CIN) lesions. For a diagnostic algorithm, 91% SE and 90% SP are achieved for discriminating normal tissue, HPV infected tissue, and low-grade CIN lesions from high-grade CIN lesions. The results demonstrate that the quantitative colposcopic imaging system could provide objective screening and diagnostic information for early detection of cervical cancer.

  20. A software system for evaluation and training of spatial reasoning and neuroanatomical knowledge in a virtual environment.

    PubMed

    Armstrong, Ryan; de Ribaupierre, Sandrine; Eagleson, Roy

    2014-04-01

    This paper describes the design and development of a software tool for the evaluation and training of surgical residents using an interactive, immersive, virtual environment. Our objective was to develop a tool to evaluate user spatial reasoning skills and knowledge in a neuroanatomical context, as well as to augment their performance through interactivity. In the visualization, manually segmented anatomical surface images of MRI scans of the brain were rendered using a stereo display to improve depth cues. A magnetically tracked wand was used as a 3D input device for localization tasks within the brain. The movement of the wand was made to correspond to movement of a spherical cursor within the rendered scene, providing a reference for localization. Users can be tested on their ability to localize structures within the 3D scene, and their ability to place anatomical features at the appropriate locations within the rendering.

  1. Refraction-compensated motion tracking of unrestrained small animals in positron emission tomography.

    PubMed

    Kyme, Andre; Meikle, Steven; Baldock, Clive; Fulton, Roger

    2012-08-01

    Motion-compensated radiotracer imaging of fully conscious rodents represents an important paradigm shift for preclinical investigations. In such studies, if motion tracking is performed through a transparent enclosure containing the awake animal, light refraction at the interface will introduce errors in stereo pose estimation. We have performed a thorough investigation of how this impacts the accuracy of pose estimates and the resulting motion correction, and developed an efficient method to predict and correct for refraction-based error. The refraction model underlying this study was validated using a state-of-the-art motion tracking system. Refraction-based error was shown to be dependent on tracking marker size, working distance, and interface thickness and tilt. Correcting for refraction error improved the spatial resolution and quantitative accuracy of motion-corrected positron emission tomography images. Since the methods are general, they may also be useful in other contexts where data are corrupted by refraction effects.
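
    The core geometric correction is Snell's law applied in vector form at the enclosure wall. The sketch below refracts a ray through a flat transparent slab and reports the lateral displacement of the exit ray, which is the quantity that biases stereo pose estimation if left uncorrected. The planar-slab geometry, thickness, and refractive index are illustrative simplifications.

    ```python
    import numpy as np

    def refract(d, n, eta):
        """Snell's law in vector form: refract unit direction d at a surface
        with unit normal n (pointing against the incoming ray); eta = n1/n2."""
        d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
        cos_i = -n @ d
        sin2_t = eta**2 * (1.0 - cos_i**2)
        if sin2_t > 1.0:
            return None                        # total internal reflection
        return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

    def slab_offset(d, n, thickness, n_slab=1.49):
        """Lateral displacement of a unit-direction ray crossing a flat slab
        (the exit ray is parallel to the incoming one but shifted sideways)."""
        d_in = refract(d, n, 1.0 / n_slab)     # air -> slab
        L = thickness / (-n @ d_in)            # path length inside the slab
        p_exit = L * d_in                      # exit point relative to entry
        return p_exit - (p_exit @ d) * d       # component orthogonal to d

    d = np.array([0.3, 0.0, -1.0]); d /= np.linalg.norm(d)
    print(slab_offset(d, np.array([0.0, 0.0, 1.0]), thickness=6e-3))  # metres
    ```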

  2. Stereo matching using census cost over cross window and segmentation-based disparity refinement

    NASA Astrophysics Data System (ADS)

    Li, Qingwu; Ni, Jinyan; Ma, Yunpeng; Xu, Jinxin

    2018-03-01

    Stereo matching is a vital requirement for many applications, such as three-dimensional (3-D) reconstruction, robot navigation, object detection, and industrial measurement. To improve the practicability of stereo matching, a method using a census cost over a cross window and segmentation-based disparity refinement is proposed. First, a cross window is obtained using distance difference and intensity similarity in the binocular images. The census cost over the cross window and a color cost are combined as the matching cost, which is aggregated by a guided filter. Then, a winner-takes-all strategy is used to calculate the initial disparities. Second, a graph-based segmentation method is combined with color and edge information to achieve moderate under-segmentation. The segmented regions are classified into reliable regions and unreliable regions by consistency checking. Finally, the two kinds of regions are optimized by plane fitting and propagation, respectively, to match the ambiguous pixels. Experimental results on the Middlebury stereo datasets show that the proposed method performs well in occluded and discontinuous regions and obtains smoother disparity maps with a lower average matching error rate than other algorithms.
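
    The census cost itself is compact enough to sketch: each pixel is encoded as a bit string comparing its neighbours with the window centre, and the matching cost is the Hamming distance between left and right codes at a candidate disparity. The sketch below uses a plain square window for brevity; the method above builds the window adaptively as a cross, and np.roll wraps at the borders, which a real implementation would handle explicitly.

    ```python
    import numpy as np

    def census_transform(img, half=2):
        """5x5 census transform: one bit per neighbour, set if the neighbour
        is darker than the window centre."""
        code = np.zeros(img.shape, dtype=np.uint32)
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                if dy == 0 and dx == 0:
                    continue
                neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                code = (code << 1) | (neighbour < img).astype(np.uint32)
        return code

    def census_cost(code_left, code_right, d):
        """Per-pixel Hamming distance at disparity d (cost before aggregation)."""
        x = code_left ^ np.roll(code_right, d, axis=1)
        bits = np.unpackbits(x.view(np.uint8), axis=-1)   # popcount via bytes
        return bits.reshape(*x.shape, -1).sum(-1)
    ```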

  3. Parallel Computer System for 3D Visualization Stereo on GPU

    NASA Astrophysics Data System (ADS)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on Graphics Processing Units (GPUs) for 3D stereo image synthesis. The development is based on the modified ray tracing method developed by the authors for fast search of tracing-ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. A generalized procedure for 3D stereo image synthesis on the Graphics Processing Unit/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed solutions in the GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The average acceleration achieved is about 7.5 times for the GPU implementation and 1.6 times for the multithreaded CPU implementation. A study of how the size and configuration of the computational Compute Unified Device Architecture (CUDA) grid affect computation speed shows the importance of selecting them correctly. The obtained experimental estimates can be significantly improved by newer GPUs with a larger number of processing cores and multiprocessors, as well as an optimized configuration of the CUDA computing grid.

  4. Precise Head Tracking in Hearing Applications

    NASA Astrophysics Data System (ADS)

    Helle, A. M.; Pilinski, J.; Luhmann, T.

    2015-05-01

    The paper gives an overview of two research projects, both dealing with optical head tracking in hearing applications. As part of the project "Development of a real-time low-cost tracking system for medical and audiological problems (ELCoT)", a cost-effective single-camera 3D tracking system has been developed which enables the detection of arm and head movements of human patients. Among other uses, the measuring system is designed for a new hearing test (based on the "Mainzer Kindertisch") which analyzes the directional hearing capabilities of children, in cooperation with the research project ERKI (Evaluation of acoustic sound source localization for children). As part of the research project framework "Hearing in everyday life (HALLO)", a stereo tracking system is being used to analyze the head movement of human patients during complex acoustic events. Together with the consideration of biosignals such as skin conductance, the speech comprehension and listening effort of persons with reduced hearing ability, especially in situations with background noise, are evaluated. For both projects, the system design, accuracy aspects, and results of practical tests are discussed.

  5. Lunar geodesy and cartography: a new era

    NASA Astrophysics Data System (ADS)

    Duxbury, Thomas; Smith, David; Robinson, Mark; Zuber, Maria T.; Neumann, Gregory; Danton, Jacob; Oberst, Juergen; Archinal, Brent; Glaeser, Philipp

    The Lunar Reconnaissance Orbiter (LRO) ushers in a new era in precision lunar geodesy and cartography. LRO was launched in June 2009, completed its Commissioning Phase in September 2009, and is now in its Primary Mission Phase, on its way to collecting high precision, global topographic and imaging data. Aboard LRO are the Lunar Orbiter Laser Altimeter (LOLA - Smith et al., 2009) and the Lunar Reconnaissance Orbiter Camera (LROC - Robinson et al.). LOLA is a derivative of the successful MOLA at Mars, which produced the global reference surface being used for all precision cartographic products. LOLA produces 5 altimetry spots having footprints of 5 m at a frequency of 28 Hz, significantly bettering MOLA, which produced 1 spot having a footprint of 150 m at a frequency of 10 Hz. LROC has twin narrow angle cameras (NACs) having pixel resolutions of 0.5 meters from a 50 km orbit and a wide-angle camera having a pixel resolution of 75 m in up to 7 color bands. One of the two NACs looks to the right of nadir and the other looks to the left, with a few hundred pixels of overlap in the nadir direction. LOLA is mounted on the LRO spacecraft to look nadir, in the overlap region of the NACs. The LRO spacecraft has the ability to look nadir and build up global coverage, as well as to look off-nadir to provide stereo coverage and fill in data gaps. The LROC wide-angle camera builds up global stereo coverage naturally from its large field-of-view overlap from orbit to orbit during nadir viewing. To date, the LROC WAC has already produced global stereo coverage of the lunar surface. This report focuses on the registration of LOLA altimetry to the LROC NAC images. LOLA has a dynamic range of tens of km while producing elevation data at sub-meter precision. LOLA also has good return in off-nadir attitudes. Over the LRO mission, multiple LOLA tracks will fall within each NAC image at the lunar equator, and even more tracks in the NAC images nearer the poles. The registration of LOLA altimetry to NAC images is aided by the 5 spots showing regional and local slopes, along and cross-track, that are easily correlated visually to features within the images. One can precisely register each of the 5 LOLA spots to specific pixels in LROC images of distinct features such as craters and boulders. This can be performed routinely for features at the 100 m level and larger. However, even features at the several-meter level can be registered if a single LOLA spot probes the depth of a small crater while the other 4 spots are on the surrounding surface, or if one spot returns from the top of a small boulder seen by a NAC. The automatic registration of LOLA tracks with NAC stereo digital terrain models should provide even higher accuracy. Also, the LOLA pulse spread of the returned signal, which is sensitive to slopes and roughness, is an additional source of information to help match the LOLA tracks to the images. As the global coverage builds, LOLA will provide absolute coordinates in latitude, longitude, and radius of surface features with accuracy at the meter level or better. The NAC images will then be registered to the LOLA reference surface in the production of precision, controlled photomosaics having spatial resolutions as good as 0.5 m/pixel. For hundreds of strategic sites viewed in stereo, even higher precision and more complete surface coverage is possible for the production of digital terrain models and mosaics.
LRO, with LOLA and LROC, will improve the relative and absolute accuracy of lunar geodesy and cartography by orders of magnitude, ushering in a new era. Robinson, M., et al., Space Sci. Rev., DOI 10.1007/s11214-010-9634-2, 2010-02-23, in press. Smith, D., et al., Space Sci. Rev., DOI 10.1007/s11214-009-9512-y, published online 16 May 2009.

  6. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    PubMed Central

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388

  7. Auto-converging stereo cameras for 3D robotic tele-operation

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Aycock, Todd; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow for operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an Automatic Convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than adjustment of the vision system. The autoconvergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.

  8. A large-aperture low-cost hydrophone array for tracking whales from small boats.

    PubMed

    Miller, B; Dawson, S

    2009-11-01

    A passive sonar array designed for tracking diving sperm whales in three dimensions from a single small vessel is presented, and the advantages and limitations of operating this array from a 6 m boat are described. The system consists of four free floating buoys, each with a hydrophone, built-in recorder, and global positioning system receiver (GPS), and one vertical stereo hydrophone array deployed from the boat. Array recordings are post-processed onshore to obtain diving profiles of vocalizing sperm whales. Recordings are synchronized using a GPS timing pulse recorded onto each track. Sensitivity analysis based on hyperbolic localization methods is used to obtain probability distributions for the whale's three-dimensional location for vocalizations received by at least four hydrophones. These localizations are compared to those obtained via isodiachronic sequential bound estimation. Results from deployment of the system around a sperm whale in the Kaikoura Canyon in New Zealand are shown.
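
    The localization step is, at heart, a nonlinear least-squares solve on the range differences implied by the measured time differences of arrival (TDOAs). The sketch below, with an assumed sound speed and a made-up buoy geometry, recovers a source position from ideal TDOAs; note that a surface-only array leaves an up/down ambiguity, one reason the system adds a vertical stereo array from the boat.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    C = 1500.0   # nominal sound speed in seawater, m/s (assumed)

    def locate(hydrophones, tdoas, x0):
        """Hyperbolic (TDOA) localization relative to hydrophone 0."""
        def residuals(x):
            r = np.linalg.norm(hydrophones - x, axis=1)
            return (r[1:] - r[0]) - C * tdoas
        return least_squares(residuals, x0).x

    # four free-floating buoys and an assumed source, coordinates in metres
    hyd = np.array([[0, 0, 0], [800, 0, 0], [0, 900, 0], [750, 850, 0]], float)
    src = np.array([400.0, 300.0, -1200.0])
    r = np.linalg.norm(hyd - src, axis=1)
    print(locate(hyd, (r[1:] - r[0]) / C, x0=np.array([100.0, 100.0, -500.0])))
    ```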

  9. Making Tracks on Mars (left-eye)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    NASA's Mars Exploration Rover Spirit has been making tracks on Mars for seven months now, well beyond its original 90-day mission. The rover traveled more than 3 kilometers (2 miles) to reach the 'Columbia Hills' pictured here. In this 360-degree view of the rolling martian terrain, its wheel tracks can be seen approaching from the northwest (right side of image).

    Spirit's navigation camera took the images that make up this mosaic on sols 210 and 213 (Aug. 5 and Aug. 8, 2004). The rover is now conducting scientific studies of the local geology on the 'Clovis' outcrop of the 'West Spur' region of the 'Columbia Hills.' The view is presented in a cylindrical-perspective projection with geometrical seam correction. This is the left-eye view of a stereo pair. Scientists plan for Spirit to take a color panoramic image from this location.

  10. Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns

    NASA Astrophysics Data System (ADS)

    Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2012-07-01

    Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixels for 640×480 web cameras.
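
    The pairing of SIFT/RANSAC epipolar geometry with the chess-board nodes can be sketched with standard OpenCV calls. This is an illustrative reconstruction, not the authors' toolbox: the fundamental matrix is estimated from a textured scene, and a right-image node is then selected as the one nearest the epipolar line of a left-image node.

    ```python
    import cv2
    import numpy as np

    def fundamental_from_texture(img_left, img_right):
        """F from SIFT matches (Lowe ratio test) followed by RANSAC."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img_left, None)
        k2, d2 = sift.detectAndCompute(img_right, None)
        pts1, pts2 = [], []
        for m, n in cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2):
            if m.distance < 0.75 * n.distance:          # ratio test
                pts1.append(k1[m.queryIdx].pt)
                pts2.append(k2[m.trainIdx].pt)
        F, _ = cv2.findFundamentalMat(np.float32(pts1), np.float32(pts2),
                                      cv2.FM_RANSAC, 1.0, 0.999)
        return F

    def match_node(F, node_left, nodes_right):
        """Index of the right-image node (N x 2 array) closest to the epipolar
        line of a left-image node -- the seeding step for ordering matches."""
        l = F @ np.array([node_left[0], node_left[1], 1.0])
        dist = np.abs(nodes_right @ l[:2] + l[2]) / np.hypot(l[0], l[1])
        return int(np.argmin(dist))
    ```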

  11. The MVACS Robotic Arm Camera

    NASA Astrophysics Data System (ADS)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  12. Deep convolutional neural network processing of aerial stereo imagery to monitor vulnerable zones near power lines

    NASA Astrophysics Data System (ADS)

    Qayyum, Abdul; Saad, Naufal M.; Kamel, Nidal; Malik, Aamir Saeed

    2018-01-01

    The monitoring of vegetation near high-voltage transmission power lines and poles is tedious. Blackouts present a huge challenge to power distribution companies and often occur due to tree growth in hilly and rural areas. Existing methods of monitoring hazardous overgrowth are numerous but expensive and time-consuming. Accurate estimation of tree and vegetation heights near power poles can prevent the disruption of power transmission in vulnerable zones. This paper presents a cost-effective approach based on a convolutional neural network (CNN) algorithm to compute the heights (depth maps) of objects proximal to power poles and transmission lines. The proposed CNN extracts and classifies features by feeding convolutional and pooling layer outputs into fully connected layers that capture prominent features from stereo image patches. Unmanned aerial vehicle or satellite stereo image datasets can thus provide a feasible and cost-effective approach that identifies threat levels based on height and distance estimates of hazardous vegetation and other objects. Results were compared with existing disparity map estimation techniques, such as graph cut, dynamic programming, belief propagation, and area-based methods. The proposed method achieved an accuracy rate of 90%.
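
    For illustration, a toy two-branch CNN for scoring left/right stereo patch correspondence is sketched below in PyTorch. The architecture (shared convolution/pooling branches feeding fully connected layers) follows the description above only loosely; the layer sizes and match-score head are assumptions, not the paper's network.

    ```python
    import torch
    import torch.nn as nn

    class PatchMatchCNN(nn.Module):
        """Siamese-style CNN scoring whether two stereo patches match."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(      # branch shared by both patches
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(    # fully connected head
                nn.Flatten(),
                nn.Linear(2 * 64 * 8 * 8, 128), nn.ReLU(),
                nn.Linear(128, 1),              # match score (logit)
            )

        def forward(self, left, right):
            f = torch.cat([self.features(left), self.features(right)], dim=1)
            return self.classifier(f)

    net = PatchMatchCNN()
    left = torch.randn(4, 1, 32, 32)                    # 32x32 grayscale patches
    print(net(left, torch.randn(4, 1, 32, 32)).shape)   # torch.Size([4, 1])
    ```

    Sweeping such a match score over candidate disparities yields the disparity (and hence height) map used for the threat-level estimates.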

  13. Combining psychological and engineering approaches to utilizing social robots with children with autism.

    PubMed

    Dickstein-Fischer, Laurie; Fischer, Gregory S

    2014-01-01

    It is estimated that Autism Spectrum Disorder (ASD) affects 1 in 68 children. Early identification of an ASD is exceedingly important to the introduction of an intervention. We are developing a robot-assisted approach that will serve as an improved diagnostic and early intervention tool for children with autism. The robot, named PABI® (Penguin for Autism Behavioral Interventions), is a compact humanoid robot taking on an expressive cartoon-like embodiment. The robot is affordable, durable, and portable, so that it can be used in various settings including schools, clinics, and the home, thus enabling significantly enhanced and more readily available diagnosis and continuation of care. Through facial expressions, body motion, verbal cues, stereo vision-based tracking, and a tablet computer, the robot is capable of interacting meaningfully with an autistic child. Initial implementations of the robot, as part of a comprehensive treatment model (CTM), include Applied Behavioral Analysis (ABA) therapy, where the child interacts with a tablet computer wirelessly interfaced with the robot. At the same time, the robot makes meaningful expressions and utterances and uses the stereo cameras in its eyes to track the child, maintain eye contact, and collect data such as affect and gaze direction for charting of progress. In this paper we present the clinical justification, anticipated usage with corresponding requirements, prototype development of the robotic system, and demonstration of a sample application for robot-assisted ABA therapy.

  14. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

    In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to wear markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames needed for correction. This speeds up reconstruction by eliminating the unnecessary subdivision of frames.
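
    Once corresponding 3-D points on the head have been reconstructed by the depth cameras in two frames, the 6-degree-of-freedom pose change is the rigid transform that best aligns them. A standard SVD (Kabsch) solution is sketched below as a generic stand-in for the pose step; the point sets are synthetic.

    ```python
    import numpy as np

    def rigid_transform(P, Q):
        """Least-squares rigid (R, t) aligning point set P to Q via SVD."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        return R, cq - R @ cp

    # synthetic "facial points" at a reference pose and after a small motion
    rng = np.random.default_rng(4)
    P = rng.normal(size=(30, 3))
    a = np.deg2rad(5.0)
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
    Q = P @ R_true.T + np.array([0.002, -0.001, 0.003])
    R, t = rigid_transform(P, Q)
    print(np.allclose(R, R_true, atol=1e-6), t)
    ```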

  15. Evaluation of a video-based head motion tracking system for dedicated brain PET

    NASA Astrophysics Data System (ADS)

    Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.

    2015-03-01

    Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs are used to capture video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The presented work evaluates the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of applying motion tracking. The developed system is able to perform tracking with close-to-millimeter accuracy and can help preserve the resolution of brain PET images in the presence of movement.
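
    Once facial points are tracked in both cameras of a calibrated stereo pair, their 3D positions follow from triangulation. A minimal OpenCV sketch is shown below; the intrinsics, baseline, and point coordinates are placeholders rather than the system's actual calibration.

    ```python
    # Triangulating tracked facial points from one stereo pair; the camera
    # matrices and pixel coordinates are placeholders, not the system's
    # actual calibration.
    import numpy as np
    import cv2

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    # 3x4 projection matrices P = K [R | t]; right camera offset by a 10 cm baseline.
    P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_right = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

    # Matched facial points in each image, as 2 x N pixel coordinates.
    pts_left = np.array([[320.0, 410.0], [240.0, 260.0]])
    pts_right = np.array([[300.0, 391.0], [240.0, 259.0]])

    pts4d = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
    pts3d = (pts4d[:3] / pts4d[3]).T   # homogeneous -> Euclidean, N x 3 (meters)
    ```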

  16. Co-registration of Laser Altimeter Tracks with Digital Terrain Models and Applications in Planetary Science

    NASA Technical Reports Server (NTRS)

    Glaeser, P.; Haase, I.; Oberst, J.; Neumann, G. A.

    2013-01-01

    We have derived algorithms and techniques to precisely co-register laser altimeter profiles with gridded Digital Terrain Models (DTMs), typically derived from stereo images. The algorithm consists of an initial grid search followed by least-squares matching and yields the translation parameters, at sub-pixel level, needed to align the DTM and the laser profiles in 3D space. This software tool was primarily developed and tested for co-registration of laser profiles from the Lunar Orbiter Laser Altimeter (LOLA) with DTMs derived from Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) stereo images. Data sets can be co-registered with positional accuracy between 0.13 m and several meters, depending on the pixel resolution and the number of laser shots; rough surfaces typically result in more accurate co-registrations. Residual heights of the data sets are as small as 0.18 m. The software can be used to identify instrument misalignment, orbit errors, pointing jitter, or problems associated with the reference frames being used. Assessments of DTM effective resolutions can also be obtained. Given the correct relative position of the two data sets, comparisons of surface morphology and roughness can be made at laser footprint or DTM pixel level. The precise co-registration allows us to carry out joint analysis of the data sets and ultimately to derive merged high-quality data products. Examples of matching other planetary data sets, such as LOLA with LRO Wide Angle Camera (WAC) DTMs, Mars Orbiter Laser Altimeter (MOLA) with stereo models from the High Resolution Stereo Camera (HRSC), and Mercury Laser Altimeter (MLA) with Mercury Dual Imaging System (MDIS), are shown to demonstrate the broad science applications of the software tool.
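
    The initial grid-search stage of such a co-registration can be pictured as follows: shift the laser track horizontally over the DTM, sample the DTM at the shifted footprints, and keep the shift minimizing the RMS height residual. The toy NumPy/SciPy sketch below illustrates the idea only; the authors' software additionally performs least-squares refinement to sub-pixel level.

    ```python
    # Toy grid search co-registering a laser altimeter track to a DTM by
    # minimizing RMS height residuals; a sketch of the idea, not the
    # authors' software.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def rms_residual(dtm, track_xy, track_z, shift):
        """RMS of (laser height - DTM height) after shifting the track (pixels)."""
        x = track_xy[:, 0] + shift[0]
        y = track_xy[:, 1] + shift[1]
        dtm_z = map_coordinates(dtm, [y, x], order=1)   # bilinear sampling
        res = track_z - dtm_z
        res -= res.mean()                               # absorb a vertical offset
        return np.sqrt(np.mean(res ** 2))

    def grid_search(dtm, track_xy, track_z, max_shift=5.0, step=0.25):
        """Exhaustive search over horizontal shifts; returns (rms, dx, dy)."""
        shifts = np.arange(-max_shift, max_shift + step, step)
        return min((rms_residual(dtm, track_xy, track_z, (dx, dy)), dx, dy)
                   for dx in shifts for dy in shifts)
    ```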

  17. Stereovision-based integrated system for point cloud reconstruction and simulated brain shift validation.

    PubMed

    Yang, Xiaochen; Clements, Logan W; Luo, Ma; Narasimhan, Saramati; Thompson, Reid C; Dawant, Benoit M; Miga, Michael I

    2017-07-01

    Intraoperative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery navigation systems in neurosurgery. A computational model driven by sparse data has been proposed as a cost-effective method to compensate for cortical surface and volumetric displacements. We present a mock environment developed to acquire stereo images from a tracked operating microscope and to reconstruct three-dimensional point clouds from these images. A reconstruction error of 1 mm is estimated using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached rigid body, which allows a commercial optical tracking system to record the microscope's position as it moves during the procedure. Point clouds reconstructed under different microscope positions are registered into the same space to compute the feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. When comparing our tracked stereo-pair measurements of mock vessel displacements to measurements made with an independent optically tracked stylus, the displacement error was [Formula: see text] on average. These results demonstrate the practicality of using a tracked stereoscopic microscope as an alternative to laser range scanners for collecting sufficient intraoperative information for brain shift correction.

  18. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm, and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm was evaluated for its hand motion tracking capability. The evaluation assessed the position accuracy of the tracking trajectory in the x, y, and z directions in camera space and the time difference between image acquisition and image display. Three parameters were analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
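
    The simple mean-shift variant evaluated here iteratively moves a window toward the mode of a back-projected histogram. A compact OpenCV sketch follows; treating the range camera's amplitude output as an 8-bit image and the choice of initial hand window are our assumptions for illustration.

    ```python
    # Mean-shift tracking of a hand across frames, sketched with OpenCV;
    # the 8-bit amplitude images and the initial window are assumptions.
    import numpy as np
    import cv2

    def track(frames, init_window):
        """frames: list of 8-bit single-channel images; init_window: (x, y, w, h)."""
        x, y, w, h = init_window
        roi = frames[0][y:y + h, x:x + w]
        roi_hist = cv2.calcHist([roi], [0], None, [64], [0, 256])
        cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
        window, trajectory = init_window, []
        for frame in frames[1:]:
            backproj = cv2.calcBackProject([frame], [0], roi_hist, [0, 256], 1)
            _, window = cv2.meanShift(backproj, window, criteria)  # shift to mode
            trajectory.append(window)
        return trajectory
    ```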

  19. Parallax scanning methods for stereoscopic three-dimensional imaging

    NASA Astrophysics Data System (ADS)

    Mayhew, Christopher A.; Mayhew, Craig M.

    2012-03-01

    Under certain circumstances, conventional stereoscopic imagery is subject to being misinterpreted. Stereo perception created from two static, horizontally separated views can create a "cut-out" 2D appearance for objects at various planes of depth: the subject volume looks three-dimensional, but the objects themselves appear flat. This is especially true if the images are captured using small disparities. One potential explanation for this effect is that, although three-dimensional perception comes primarily from binocular vision, a person's gaze (the direction and orientation of the eyes with respect to the environment) and head motion also contribute additional sub-process information. The absence of this information may be the reason that certain stereoscopic imagery appears "odd" and unrealistic. Another contributing factor may be the absence of vertical disparity information in a traditional stereoscopic display. Recently, Parallax Scanning technologies have been introduced which (1) provide a scanning methodology, (2) incorporate vertical disparity, and (3) produce stereo images with substantially smaller disparities than the human interocular distance. To test whether these three features would improve the realism and reduce the cardboard cut-out effect of stereo images, we have applied Parallax Scanning (PS) technologies to commercial stereoscopic digital cinema productions and have tested the results with a panel of stereo experts. These informal experiments show that the addition of PS information into the left and right image capture improves the overall perception of three-dimensionality for most viewers. Parallax scanning significantly increases the set of tools available for 3D storytelling while at the same time presenting imagery that is easy and pleasant to view.

  20. Apollo 12 stereo view of lunar surface upon which astronaut had stepped

    NASA Image and Video Library

    1969-11-20

    AS12-57-8448 (19-20 Nov. 1969) --- An Apollo 12 stereo view showing a three-inch square of the lunar surface upon which an astronaut had stepped. Taken during extravehicular activity of astronauts Charles Conrad Jr. and Alan L. Bean, the exposure of the boot imprint was made with an Apollo 35mm stereo close-up camera. The camera was developed to get the highest possible resolution of a small area. The three-inch square is photographed with a flash illumination and at a fixed distance. The camera is mounted on a walking stick, and the astronauts use it by holding it up against the object to be photographed and pulling the trigger. While astronauts Conrad and Bean descended in their Apollo 12 Lunar Module to explore the lunar surface, astronaut Richard F. Gordon Jr. remained with the Command and Service Modules in lunar orbit.

  1. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the precision and efficiency of the process. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this step. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
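
    In OpenCV, the calibration pipeline described above maps onto a few library calls: detect checkerboard corners in both views, calibrate each camera (the default model includes radial and tangential distortion), then estimate the stereo extrinsics. A condensed sketch follows; the 8×6 corner layout matches the abstract's 48-corner board, while the square size and file names are assumptions, and error handling is omitted.

    ```python
    # Condensed OpenCV binocular calibration sketch. The 8x6 corner grid
    # follows the abstract's 48-corner board; square size and file names
    # are assumptions, and error handling is omitted.
    import glob
    import numpy as np
    import cv2

    pattern = (8, 6)                       # 48 inner corners
    square = 25.0                          # board square size in mm (assumed)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, left_pts, right_pts = [], [], []
    for lf, rf in zip(sorted(glob.glob('left*.png')), sorted(glob.glob('right*.png'))):
        gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
        gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
        okl, cl = cv2.findChessboardCorners(gl, pattern)
        okr, cr = cv2.findChessboardCorners(gr, pattern)
        if okl and okr:
            obj_pts.append(objp); left_pts.append(cl); right_pts.append(cr)
            size = gl.shape[::-1]

    # Per-camera intrinsics; the default model includes radial and
    # tangential (decentering) distortion coefficients.
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    # Stereo extrinsics: rotation R and translation T between the cameras.
    rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    print('stereo RMS reprojection error:', rms)
    ```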

  2. GPU-based real-time trinocular stereo vision

    NASA Astrophysics Data System (ADS)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which benefits applications like distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
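
    The winner-take-all fusion the abstract mentions can be pictured as accumulating matching costs from the two baselines of the camera array and picking, per pixel, the disparity with the lowest combined cost. The toy NumPy sketch below (absolute-difference costs on float grayscale images) is illustrative only; the paper's CUDA pipeline is far more elaborate.

    ```python
    # Toy winner-take-all disparity fusion for a trinocular array (reference
    # camera plus one horizontal and one vertical neighbor), using
    # absolute-difference costs on float grayscale images. Illustrative
    # only; not the paper's GPU implementation.
    import numpy as np

    def wta_disparity(ref, horiz, vert, max_disp=32):
        h, w = ref.shape
        cost = np.full((max_disp, h, w), np.inf)
        for d in range(max_disp):
            c_h = np.full((h, w), np.inf)
            c_h[:, d:] = np.abs(ref[:, d:] - horiz[:, :w - d])   # horizontal pair
            c_v = np.full((h, w), np.inf)
            c_v[d:, :] = np.abs(ref[d:, :] - vert[:h - d, :])    # vertical pair
            cost[d] = c_h + c_v              # fuse costs from both directions
        return cost.argmin(axis=0)           # winner-take-all over disparities
    ```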

  3. 3D structure and kinematics characteristics of EUV wave front

    NASA Astrophysics Data System (ADS)

    Podladchikova, T.; Veronig, A.; Dissauer, K.

    2017-12-01

    We present 3D reconstructions of EUV wave fronts using multi-point observations from the STEREO-A and STEREO-B spacecraft. EUV waves are large-scale disturbances in the solar corona that are initiated by coronal mass ejections, and are thought to be large-amplitude fast-mode MHD waves or shocks. The aim of our study is to investigate the dynamic evolution of the 3D structure and wave kinematics of EUV wave fronts. We study the events on December 7, 2007 and February 13, 2009 using data from the STEREO/EUVI-A and EUVI-B instruments in the 195 Å filter. The proposed approach is based on a complementary combination of epipolar geometry of stereo vision and perturbation profiles. We propose two different solutions to the matching problem of the wave crest on images from the two spacecraft. One solution is suitable for the early and maximum stage of event development when STEREO-A and STEREO-B see the different facets of the wave, and the wave crest is clearly outlined. The second one is applicable also at the later stage of event development when the wave front becomes diffuse and is faintly visible. This approach allows us to identify automatically the segments of the diffuse front on pairs of STEREO-A and STEREO-B images and to solve the problem of identification and matching of the objects. We find that the EUV wave observed on December 7, 2007 starts with a height of 30-50 Mm, sharply increases to a height of 100-120 Mm about 10 min later, and decreases to 10-20 Mm in the decay phase. Including the 3D evolution of the EUV wave front allowed us to correct the wave kinematics for projection and changing height effects. The velocity of the wave crest (V=215-266 km/s) is larger than the trailing part of the wave pulse (V=103-163 km/s). For the February 9, 2009 event, the upward movement of the wave crest shows an increase from 20 to 100 Mm over a period of 30 min. The velocity of wave crest reaches values of 208-211 km/s.

  4. Precision 3d Surface Reconstruction from Lro Nac Images Using Semi-Global Matching with Coupled Epipolar Rectification

    NASA Astrophysics Data System (ADS)

    Hu, H.; Wu, B.

    2017-07-01

    The Narrow-Angle Camera (NAC) on board the Lunar Reconnaissance Orbiter (LRO) comprises a pair of closely attached high-resolution push-broom sensors designed to improve the swath coverage. However, the two image sensors do not share the same lenses and cannot be modelled geometrically using a single physical model. Thus, previous works on dense matching of stereo pairs of NAC images would generally create two to four stereo models, each with an irregular and overlapping region of varying size. Semi-Global Matching (SGM) is a well-known dense matching method and has been widely used for image-based 3D surface reconstruction. SGM is a global matching algorithm relying on global inference in a larger context rather than on individual pixels to establish stable correspondences. The stereo configuration of LRO NAC images causes severe problems for image matching methods such as SGM, which emphasize a global matching strategy. Aiming at using SGM for matching LRO NAC stereo pairs for precision 3D surface reconstruction, this paper presents a coupled epipolar rectification method for LRO NAC stereo images, which merges the image pair in the disparity space so that only one stereo model needs to be estimated. For a stereo pair (four images) of NAC data, the method starts with boresight calibration by finding correspondences in the small overlapping strip between each pair of NAC images and bundle adjustment of the stereo pair, in order to remove the vertical disparities. Then, the dominant direction of the images is estimated by iteratively projecting the center of the coverage area onto the reference image and back-projecting it onto the bounding-box plane determined by the image orientation parameters. The dominant direction determines an affine model, by which the pair of NAC images is warped into object space with a given ground resolution, while a mask is produced indicating the owner of each pixel. SGM is then used to generate a disparity map for the stereo pair; each correspondence is transformed back to its owner, and 3D points are derived through photogrammetric space intersection. Experimental results reveal that the proposed method is able to reduce the gaps and inconsistencies caused by the inaccurate boresight offsets between the two NAC cameras and by the irregular overlapping regions, and to automatically generate precise and consistent 3D surface models from the NAC stereo images.
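
    While the coupled rectification is specific to NAC geometry, the SGM core it builds on is available off the shelf. A minimal OpenCV sketch of semi-global matching on an already rectified pair follows; the file names and parameter values are generic placeholders, not settings tuned for LRO NAC imagery.

    ```python
    # Minimal semi-global matching on an already rectified pair with
    # OpenCV's SGBM variant; file names and parameters are generic
    # placeholders, not values tuned for LRO NAC imagery.
    import cv2

    left = cv2.imread('rect_left.png', cv2.IMREAD_GRAYSCALE)
    right = cv2.imread('rect_right.png', cv2.IMREAD_GRAYSCALE)

    block = 5
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,          # search range; must be a multiple of 16
        blockSize=block,
        P1=8 * block * block,        # penalty for small disparity changes
        P2=32 * block * block,       # penalty for large disparity changes
        uniquenessRatio=10,
    )
    disparity = sgbm.compute(left, right).astype('float32') / 16.0  # fixed point -> px
    ```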

  5. Target Tracking Based Scene Analysis

    DTIC Science & Technology

    1984-08-01

    [Indexed record text consists of fragmentary, OCR-damaged references; the one clearly recoverable citation is S.T. Barnard and M.A. Fischler, "Computational Stereo," Computing Surveys 14, 1982, pp. 553-572.]

  6. Sojourner Rover Near The Dice

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lander image of rover near The Dice (three small rocks behind the rover) and Yogi on sol 22. Color (red, green, and blue filters at 6:1 compression) image shows dark rocks, bright red dust, dark red soil exposed in rover tracks, and dark (black) soil. The APXS is in view at the rear of the vehicle, and the forward stereo cameras and laser light stripers are in shadow just below the front edge of the solar panel.

    NOTE: original caption as published in Science Magazine

  7. A stereo vision-based obstacle detection system in vehicles

    NASA Astrophysics Data System (ADS)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. The system utilizes feature matching, the epipolar constraint, and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle, and a vehicle cutting into the lane, and the position parameters of the obstacles and leading vehicles can then be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
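
    The initial correspondence stage described above, feature matching pruned by the epipolar constraint, can be sketched with OpenCV primitives. In the sketch below, ORB features and RANSAC estimation of the fundamental matrix stand in for the paper's unspecified detector and aggregation steps.

    ```python
    # Stereo feature matching with epipolar-constraint pruning. ORB features
    # and a RANSAC fundamental-matrix fit stand in for the paper's
    # unspecified detector and aggregation steps.
    import numpy as np
    import cv2

    def matched_pairs(left, right):
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(left, None)
        kp2, des2 = orb.detectAndCompute(right, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        # Epipolar constraint: keep only inliers of the fundamental matrix.
        F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
        keep = inliers.ravel() == 1
        return pts1[keep], pts2[keep]
    ```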

  8. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    NASA Astrophysics Data System (ADS)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

    The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.

  9. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    PubMed

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-08

    We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors through the co-design of an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field, with the on-chip beam-splitter horizontally dividing rays according to incident angle and the inner meta-micro-lens collecting the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images is as low as 6% for a fabricated binocular image sensor and 7% for a quad-ocular image sensor. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision as a way to reduce the 3D fatigue of viewers.

  10. Quantitative evaluation of three advanced laparoscopic viewing technologies: a stereo endoscope, an image projection display, and a TFT display.

    PubMed

    Wentink, M; Jakimowicz, J J; Vos, L M; Meijer, D W; Wieringa, P A

    2002-08-01

    Compared to open surgery, minimally invasive surgery (MIS) relies heavily on advanced technology, such as endoscopic viewing systems and innovative instruments. The aim of the study was to objectively compare three technologically advanced laparoscopic viewing systems with the standard viewing system currently used in most Dutch hospitals. We evaluated the following advanced laparoscopic viewing systems: a Thin Film Transistor (TFT) display, a stereo endoscope, and an image projection display. The standard viewing system consisted of a monocular endoscope and a high-resolution monitor. Task completion time served as the measure of performance. Eight surgeons with laparoscopic experience participated in the experiment. The average task time was significantly greater (p < 0.05) with the stereo viewing system than with the standard viewing system. The average task times with the TFT display and the image projection display did not differ significantly from those with the standard viewing system. Although the stereo viewing system promises improved depth perception, and the TFT and image projection displays are supposed to improve hand-eye coordination, none of these systems provided better task performance than the standard viewing system in this pelvi-trainer experiment.

  11. Generation of High Resolution Global DSM from ALOS PRISM

    NASA Astrophysics Data System (ADS)

    Takaku, J.; Tadono, T.; Tsutsui, K.

    2014-04-01

    The Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM), one of the onboard sensors carried on the Advanced Land Observing Satellite (ALOS), was designed to generate worldwide topographic data through optical stereoscopic observation. The sensor consists of three independent panchromatic radiometers viewing forward, nadir, and backward at 2.5 m ground resolution, producing a triplet stereoscopic image along the track. The sensor acquired a huge volume of stereo images all over the world during the satellite's mission life from 2006 through 2011. We have semi-automatically processed Digital Surface Model (DSM) data from the image archives in some limited areas. The height accuracy of the dataset was estimated at less than 5 m (rms) from evaluation with ground control points (GCPs) or reference DSMs derived from Light Detection and Ranging (LiDAR). We then decided to process global DSM datasets from all available archives of PRISM stereo images by the end of March 2016. This paper briefly reports on the latest processing algorithms for the global DSM datasets as well as preliminary results on some test sites. The accuracies and error characteristics of the datasets are analyzed and discussed for various areas by comparison with existing global datasets such as Ice, Cloud, and land Elevation Satellite (ICESat) data and Shuttle Radar Topography Mission (SRTM) data, as well as with the GCPs and the reference airborne LiDAR DSMs.

  12. Slant Perception Under Stereomicroscopy.

    PubMed

    Horvath, Samantha; Macdonald, Kori; Galeotti, John; Klatzky, Roberta L

    2017-11-01

    Objective These studies used threshold and slant-matching tasks to assess and quantitatively measure human perception of 3-D planar images viewed through a stereomicroscope. The results are intended for use in developing augmented-reality surgical aids. Background Substantial research demonstrates that slant perception is performed with high accuracy from monocular and binocular cues, but less research concerns the effects of magnification. Viewing through a microscope affects the utility of monocular and stereo slant cues, but its impact is as yet unknown. Method Participants performed a threshold slant-detection task and matched the slant of a tool to a surface. Different stimuli and monocular versus binocular viewing conditions were implemented to isolate stereo cues alone, stereo with perspective cues, the accommodation cue alone, and cues intrinsic to optical-coherence-tomography images. Results At a magnification of 5x, slant thresholds with stimuli providing stereo cues approximated those reported for direct viewing, about 12°. Most participants (75%) who passed a stereoacuity pretest could match a tool to the slant of a surface viewed with stereo at 5x magnification, with a mean compressive error of about 20% for optimized surfaces. Slant matching to optical coherence tomography images of the cornea viewed under the microscope was also demonstrated. Conclusion Despite the distortions and cue loss introduced by viewing under the stereomicroscope, most participants were able to detect and interact with slanted surfaces. Application The experiments demonstrated sensitivity to surface slant that supports the development of augmented-reality systems to aid microscope-assisted surgery.

  13. A novel 360-degree shape measurement using a simple setup with two mirrors and a laser MEMS scanner

    NASA Astrophysics Data System (ADS)

    Jin, Rui; Zhou, Xiang; Yang, Tao; Li, Dong; Wang, Chao

    2017-09-01

    There is no denying that 360-degree shape measurement technology plays an important role in the field of three-dimensional optical metrology. Traditional optical 360-degree shape measurement methods are mainly of two kinds: the first places multiple scanners to achieve 360-degree measurements; the second uses a high-precision rotating device to obtain the 360-degree shape model. The former increases the number of scanners and is costly, while the latter's rotating devices make it time-consuming. This paper presents a low-cost and fast optical 360-degree shape measurement method, which has the advantages of being fully static, fast, and inexpensive. The measuring system consists of two mirrors at a certain angle, a laser projection system, a stereoscopic calibration block, and two cameras. Most importantly, the laser MEMS scanner can achieve precise movement of laser stripes without any movement mechanism, improving measurement accuracy and efficiency. Furthermore, a novel stereo calibration technology presented in this paper can achieve point cloud data registration and thereby yield 360-degree models of objects. A stereoscopic calibration block with special coded patterns on six sides is used in this novel stereo calibration method, through which the 360-degree models of objects can be obtained quickly.

  14. Automatic Large-Scale 3D Building Shape Refinement Using Conditional Generative Adversarial Networks

    NASA Astrophysics Data System (ADS)

    Bittner, K.; d'Angelo, P.; Körner, M.; Reinartz, P.

    2018-05-01

    Three-dimensional building reconstruction from remote sensing imagery is one of the most difficult and important 3D modeling problems for complex urban environments. In remote sensing, the main data sources providing a digital representation of the Earth's surface and of the related natural, cultural, and man-made objects of urban areas are digital surface models (DSMs). DSMs can be obtained by light detection and ranging (LIDAR), by SAR interferometry, or from stereo images. Our approach relies on automatic global 3D building shape refinement from stereo DSMs using deep learning techniques. This refinement is necessary because DSMs extracted from image-matching point clouds suffer from occlusions, outliers, and noise. Though most previous works have shown promising results for building modeling, this topic remains an open research area. We present a new methodology which not only generates images with continuous values representing the elevation models but at the same time enhances the 3D object shapes, buildings in our case. Specifically, we train a conditional generative adversarial network (cGAN) to generate accurate LIDAR-like DSM height images from noisy stereo DSM input. The obtained results demonstrate the strong potential of creating large-area remote sensing depth images in which buildings exhibit better-quality shapes and roof forms.

  15. Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions

    PubMed Central

    Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner

    2016-01-01

    In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time-consuming, subjective, and prone to human error. In recent years, research has therefore focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity, and reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage, and acquisition is used to visually capture a whole vine-row canopy with georeferenced RGB images. In the first post-processing step, these images are used within multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. In the second step, a classification algorithm automatically classifies the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects are gathered from the classification results, yielding the quantity of grape bunches, the number of berries, and the berry diameter. PMID:27983669

  16. Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions.

    PubMed

    Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner

    2016-12-15

    In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time-consuming, subjective, and prone to human error. In recent years, research has therefore focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity, and reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage, and acquisition is used to visually capture a whole vine-row canopy with georeferenced RGB images. In the first post-processing step, these images are used within multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. In the second step, a classification algorithm automatically classifies the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects are gathered from the classification results, yielding the quantity of grape bunches, the number of berries, and the berry diameter.

  17. Video-CRM: understanding customer behaviors in stores

    NASA Astrophysics Data System (ADS)

    Haritaoglu, Ismail; Flickner, Myron; Beymer, David

    2013-03-01

    This paper describes two real-time computer vision systems created 10 years ago that detect and track people in stores to obtain insights into customer behavior while shopping. The first system uses a single color camera to identify shopping groups in the checkout line. Shopping groups are identified by analyzing inter-body distances coupled with the cashier's activities to detect checkout transaction start and end times. The second system uses multiple overhead narrow-baseline stereo cameras to detect and track people and their body posture and parts in order to understand customer interactions with products, such as a customer picking a product from a shelf. In pilot studies, both systems demonstrated real-time performance and sufficient accuracy to enable a more detailed understanding of customer behavior and to extract actionable real-time retail analytics.

  18. The KLOE-2 Inner Tracker: Detector commissioning and operation

    NASA Astrophysics Data System (ADS)

    Balla, A.; Bencivenni, G.; Branchini, P.; Ciambrone, P.; Czerwinski, E.; De Lucia, E.; Cicco, A.; Di Domenici, D.; Felici, G.; Morello, G.

    2017-02-01

    The KLOE-2 experiment started its data taking campaign in November 2014 with an upgraded tracking system including an Inner Tracker built with the cylindrical GEM technology, operating together with the Drift Chamber to improve the apparatus tracking performance. The Inner Tracker is composed of four cylindrical triple-GEMs, each provided with an X-V strips-pads stereo readout and equipped with the GASTONE ASIC developed within the KLOE-2 collaboration. Although GEM detectors are already used in high-energy physics experiments, this device is considered a frontier detector due to its cylindrical geometry: KLOE-2 is the first experiment to use this novel solution. The results of the detector commissioning, detection efficiency evaluation, calibration studies, and alignment, with both dedicated cosmic-ray muon and Bhabha scattering events, will be reported.

  19. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" is used to show the virtual environment of Moon surface in immersive environment. Utilizing stereo 360-degree imagery from panoramic camera of Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, stereo 360-degree panorama stitched by 112 images is projected onto inside surface of sphere according to panorama orientation coordinates and camera parameters to build the virtual scene. Stars can be seen from the Moon at any time. So we render the sun, planets and stars according to time and rover's location based on Hipparcos catalogue as the background on the sphere. Immersing in the stereo virtual environment created by this imaged-based rendering technique, the operator can zoom, pan to interact with the virtual Moon scene and mark interesting objects. Hardware of the immersive virtual Moon system is made up of four high lumen projectors and a huge curve screen which is 31 meters long and 5.5 meters high. This system which take all panoramic camera data available and use it to create an immersive environment, enable operator to interact with the environment and mark interesting objects contributed heavily to establishment of science mission goals in Chang'E-3 mission. After Chang'E-3 mission, the lab with this system will be open to public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be showed to public on the huge screen in the lab. Based on the data of lunar exploration,we will made more immersive virtual moon scenes and animations to help the public understand more about the Moon in the future.

  20. Patient positioning in radiotherapy based on surface imaging using time of flight cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilles, M., E-mail: marlene.gilles@univ-brest.fr

    2016-08-15

    Purpose: To evaluate the patient positioning accuracy in radiotherapy using a stereo-time of flight (ToF)-camera system. Methods: A system using two ToF cameras was used to scan the surface of the patients in order to position them daily on the treatment couch. The obtained point clouds were registered to (a) detect translations applied to the table (intrafraction motion) and (b) predict the displacement to be applied in order to place the patient in its reference position (interfraction motion). The measures provided by this system were compared to the effectively applied translations. The authors analyzed 150 fractions including lung, pelvis/prostate, and head and neck cancer patients. Results: The authors obtained small absolute errors for displacement detection: 0.8 ± 0.7, 0.8 ± 0.7, and 0.7 ± 0.6 mm along the vertical, longitudinal, and lateral axes, respectively, and 0.8 ± 0.7 mm for the total norm displacement. Lung cancer patients presented the largest errors with a respective mean of 1.1 ± 0.9, 0.9 ± 0.9, and 0.8 ± 0.7 mm. Conclusions: The proposed stereo-ToF system allows for sufficient accuracy and faster patient repositioning in radiotherapy. Its capability to track the complete patient surface in real time could allow, in the future, not only for an accurate positioning but also a real time tracking of any patient intrafraction motion (translation, involuntary, and breathing).

  1. Patient positioning in radiotherapy based on surface imaging using time of flight cameras.

    PubMed

    Gilles, M; Fayad, H; Miglierini, P; Clement, J F; Scheib, S; Cozzi, L; Bert, J; Boussion, N; Schick, U; Pradier, O; Visvikis, D

    2016-08-01

    To evaluate the patient positioning accuracy in radiotherapy using a stereo-time of flight (ToF)-camera system. A system using two ToF cameras was used to scan the surface of the patients in order to position them daily on the treatment couch. The obtained point clouds were registered to (a) detect translations applied to the table (intrafraction motion) and (b) predict the displacement to be applied in order to place the patient in its reference position (interfraction motion). The measures provided by this system were compared to the effectively applied translations. The authors analyzed 150 fractions including lung, pelvis/prostate, and head and neck cancer patients. The authors obtained small absolute errors for displacement detection: 0.8 ± 0.7, 0.8 ± 0.7, and 0.7 ± 0.6 mm along the vertical, longitudinal, and lateral axes, respectively, and 0.8 ± 0.7 mm for the total norm displacement. Lung cancer patients presented the largest errors with a respective mean of 1.1 ± 0.9, 0.9 ± 0.9, and 0.8 ± 0.7 mm. The proposed stereo-ToF system allows for sufficient accuracy and faster patient repositioning in radiotherapy. Its capability to track the complete patient surface in real time could allow, in the future, not only for an accurate positioning but also a real time tracking of any patient intrafraction motion (translation, involuntary, and breathing).

  2. Three-dimensional tracking for efficient fire fighting in complex situations

    NASA Astrophysics Data System (ADS)

    Akhloufi, Moulay; Rossi, Lucile

    2009-05-01

    Each year, hundreds of millions of hectares of forest burn, causing human and economic losses. For efficient fire fighting, personnel on the ground need tools for predicting fire front propagation. In this work, we present a new technique for automatically tracking fire spread in three-dimensional space. The proposed approach uses a stereo system to extract a 3D shape from fire images. A new segmentation technique is proposed that permits the extraction of fire regions in complex unstructured scenes. It works in the visible spectrum and combines information extracted from the YUV and RGB color spaces. Unlike other techniques, our algorithm does not require previous knowledge about the scene. The resulting fire regions are classified into homogeneous zones using clustering techniques. Contours are then extracted, and a feature detection algorithm is used to detect interest points such as local maxima and corners. Points extracted from the stereo images are then used to compute the 3D shape of the fire front, and the resulting data permit building the fire volume. The final model is used to compute important spatial and temporal fire characteristics such as spread dynamics, local orientation, and heading direction. Tests conducted on the ground show the efficiency of the proposed scheme, which is being integrated with a mathematical fire spread model in order to predict and anticipate fire behaviour during fire fighting. Also of interest to fire-fighters is the proposed automatic segmentation technique, which can be used for early detection of fire in complex scenes.
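
    A rough sketch of the described color-space combination: candidate fire pixels must satisfy simple rules in both RGB (bright and red-dominant) and YUV (luminous, warm chroma) before clustering into homogeneous zones. All thresholds below are illustrative guesses, not the authors' values.

    ```python
    # Rough YUV+RGB fire-pixel rule in the spirit of the abstract; all
    # thresholds are illustrative guesses, not the authors' values.
    import numpy as np
    import cv2

    def fire_mask(bgr):
        """Candidate fire pixels from an 8-bit BGR frame."""
        b, g, r = [c.astype(np.int32) for c in cv2.split(bgr)]
        yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
        y, u, v = [c.astype(np.int32) for c in cv2.split(yuv)]
        rule_rgb = (r > 180) & (r >= g) & (g > b)     # bright and red-dominant
        rule_yuv = (y > y.mean()) & (v > u)           # luminous, warm chroma
        mask = (rule_rgb & rule_yuv).astype(np.uint8) * 255
        kernel = np.ones((3, 3), np.uint8)
        # Remove speckles before clustering pixels into homogeneous zones.
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    ```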

  3. The High Resolution Stereo Camera (HRSC) of Mars Express and its approach to science analysis and mapping for Mars and its satellites

    NASA Astrophysics Data System (ADS)

    Gwinner, K.; Jaumann, R.; Hauber, E.; Hoffmann, H.; Heipke, C.; Oberst, J.; Neukum, G.; Ansan, V.; Bostelmann, J.; Dumke, A.; Elgner, S.; Erkeling, G.; Fueten, F.; Hiesinger, H.; Hoekzema, N. M.; Kersten, E.; Loizeau, D.; Matz, K.-D.; McGuire, P. C.; Mertens, V.; Michael, G.; Pasewaldt, A.; Pinet, P.; Preusker, F.; Reiss, D.; Roatsch, T.; Schmidt, R.; Scholten, F.; Spiegel, M.; Stesky, R.; Tirsch, D.; van Gasselt, S.; Walter, S.; Wählisch, M.; Willner, K.

    2016-07-01

    The High Resolution Stereo Camera (HRSC) of ESA's Mars Express is designed to map and investigate the topography of Mars. The camera, in particular its Super Resolution Channel (SRC), also obtains images of Phobos and Deimos on a regular basis. As HRSC is a push broom scanning instrument with nine CCD line detectors mounted in parallel, its unique feature is the ability to obtain along-track stereo images and four colors during a single orbital pass. The sub-pixel accuracy of 3D points derived from stereo analysis allows producing DTMs with grid size of up to 50 m and height accuracy on the order of one image ground pixel and better, as well as corresponding orthoimages. Such data products have been produced systematically for approximately 40% of the surface of Mars so far, while global shape models and a near-global orthoimage mosaic could be produced for Phobos. HRSC is also unique because it bridges between laser altimetry and topography data derived from other stereo imaging instruments, and provides geodetic reference data and geological context to a variety of non-stereo datasets. This paper, in addition to an overview of the status and evolution of the experiment, provides a review of relevant methods applied for 3D reconstruction and mapping, and respective achievements. We will also review the methodology of specific approaches to science analysis based on joint analysis of DTM and orthoimage information, or benefitting from high accuracy of co-registration between multiple datasets, such as studies using multi-temporal or multi-angular observations, from the fields of geomorphology, structural geology, compositional mapping, and atmospheric science. Related exemplary results from analysis of HRSC data will be discussed. After 10 years of operation, HRSC covered about 70% of the surface by panchromatic images at 10-20 m/pixel, and about 97% at better than 100 m/pixel. As the areas with contiguous coverage by stereo data are increasingly abundant, we also present original data related to the analysis of image blocks and address methodology aspects of newly established procedures for the generation of multi-orbit DTMs and image mosaics. The current results suggest that multi-orbit DTMs with grid spacing of 50 m can be feasible for large parts of the surface, as well as brightness-adjusted image mosaics with co-registration accuracy of adjacent strips on the order of one pixel, and at the highest image resolution available. These characteristics are demonstrated by regional multi-orbit data products covering the MC-11 (East) quadrangle of Mars, representing the first prototype of a new HRSC data product level.

  4. A Phased Array of Widely Separated Antennas for Space Communication and Planetary Radar

    NASA Astrophysics Data System (ADS)

    Geldzahler, B.; Bershad, C.; Brown, R.; Cox, R.; Hoblitzell, R.; Kiriazes, J.; Ledford, B.; Miller, M.; Woods, G.; Cornish, T.; D'Addario, L.; Davarian, F.; Lee, D.; Morabito, D.; Tsao, P.; Soloff, J.; Church, K.; Deffenbaugh, P.; Abernethy, K.; Anderson, W.; Collier, J.; Wellen, G.

    NASA has successfully demonstrated coherent uplink arraying with real-time compensation for atmospheric phase fluctuations at 7.145-7.190 GHz (X-band) and is pursuing a similar demonstration at 30-31 GHz (Ka-band) using three 12 m diameter COTS antennas separated by 60 m at the Kennedy Space Center in Florida. In addition, we have performed the same demonstration with up to three 34 m antennas separated by 250 m at the Goldstone Deep Space Communication Complex in California at X-band (7.1 GHz). We have begun to infuse the Goldstone capability into the Deep Space Network to provide a quasi-operational system. Such a demonstration can enable NASA to design and establish a high-power (10 PW), high-resolution (<10 cm), 24/7-availability radar system for (a) tracking and characterizing Near Earth Objects (NEOs), (b) tracking, characterizing, and determining the statistics of small-scale (≤10 cm) orbital debris, (c) incorporating the capability into its space communication and navigation tracking stations for emergency spacecraft commanding in the Ka-band era which NASA is entering, and (d) fielding capabilities of interest to other US government agencies. We present herein the results of our demonstrations of phased-array uplink combining near 7.17 and 8.3 GHz using widely separated antennas, our moderately successful attempts to rescue the STEREO-B spacecraft (at a distance of 2 astronomical units, or about 185,000,000 miles), the first two attempts at imaging and ranging of near-Earth asteroids, progress in developing telescopes that are fully capable at radio and optical frequencies, and progress toward implementing our vision for a high-performance, low-lifecycle-cost multi-element radar array.

  5. Concealed object segmentation and three-dimensional localization with passive millimeter-wave imaging

    NASA Astrophysics Data System (ADS)

    Yeom, Seokwon

    2013-05-01

    Millimeter-wave imaging draws increasing attention in security applications for weapon detection under clothing. In this paper, concealed object segmentation and three-dimensional localization schemes are reviewed. A concealed object is segmented by the k-means algorithm. A feature-based stereo-matching method estimates the longitudinal distance of the concealed object: the distance is estimated from the discrepancy between the corresponding centers of the segmented objects. Experimental results are provided along with an analysis of the depth resolution.
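
    The two stages compose naturally: k-means separates the concealed object in each view, and the horizontal discrepancy between the segmented objects' centroids serves as the disparity from which distance follows. In the sketch below, the cluster count, the assumption that the object is the brightest cluster, and the camera constants are all illustrative.

    ```python
    # k-means segmentation plus centroid-disparity ranging for a passive
    # MMW stereo pair; the cluster count, brightest-cluster assumption,
    # and camera constants are illustrative.
    import numpy as np
    import cv2

    def object_centroid(img, k=3):
        """Cluster intensities and return the (x, y) centroid of the
        brightest cluster, assumed here to be the concealed object."""
        data = img.reshape(-1, 1).astype(np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 20, 1.0)
        _, labels, centers = cv2.kmeans(data, k, None, criteria, 5,
                                        cv2.KMEANS_PP_CENTERS)
        target = int(np.argmax(centers.ravel()))
        ys, xs = np.where(labels.reshape(img.shape) == target)
        return xs.mean(), ys.mean()

    def object_distance(left, right, focal_px=500.0, baseline_m=0.3):
        (xl, _), (xr, _) = object_centroid(left), object_centroid(right)
        disparity = abs(xl - xr)            # discrepancy of the centers, in px
        return focal_px * baseline_m / max(disparity, 1e-6)
    ```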

  6. An Approach to 3d Digital Modeling of Surfaces with Poor Texture by Range Imaging Techniques. `SHAPE from Stereo' VS. `SHAPE from Silhouette' in Digitizing Jorge Oteiza's Sculptures

    NASA Astrophysics Data System (ADS)

    García Fernández, J.; Álvaro Tordesillas, A.; Barba, S.

    2015-02-01

    Despite the eminent development of digital range imaging techniques, difficulties persist in the virtualization of objects with poor radiometric information, in other words, objects consisting of homogeneous colours (totally white, black, etc.), repetitive patterns, translucence, or materials with specular reflection. This is the case for much of Jorge Oteiza's work, particularly the sculpture collection of the Museo Fundación Jorge Oteiza (Navarra, Spain). The present study intends to analyse and assess the performance of two image-based digital 3D-modeling methods on cultural heritage in singular cases determined by the radiometric characteristics mentioned above: Shape from Silhouette and Shape from Stereo. The text also proposes the definition of a documentation workflow and presents the results of its application to the collection of sculptures created by Oteiza.

  7. On improving IED object detection by exploiting scene geometry using stereo processing

    NASA Astrophysics Data System (ADS)

    van de Wouw, Dennis W. J. M.; Dubbelman, Gijs; de With, Peter H. N.

    2015-03-01

    Detecting changes in the environment with respect to an earlier data acquisition is important for several applications, such as finding Improvised Explosive Devices (IEDs). We explore and evaluate the benefit of depth sensing in the context of automatic change detection, where an existing monocular system is extended with a second camera in a fixed stereo setup. We then propose an alternative frame registration that exploits scene geometry, in particular the ground plane. Furthermore, change characterization is applied to localized depth maps to distinguish between 3D physical changes and shadows, which solves one of the main challenges of a monocular system. The proposed system is evaluated on real-world acquisitions containing geo-tagged test objects of 18 × 18 × 9 cm up to a distance of 60 meters. The proposed extensions lead to a significant reduction of the false-alarm rate by a factor of 3, while simultaneously improving the detection score by 5%.

  8. Mass Balance of the Northern Antarctic Peninsula and its Ongoing Response to Ice Shelf Loss

    NASA Astrophysics Data System (ADS)

    Scambos, T. A.; Berthier, E.; Haran, T. M.; Shuman, C. A.; Cook, A. J.; Bohlander, J. A.

    2012-12-01

    An assessment of the most rapidly changing areas of the Antarctic Peninsula (north of 66°S) shows that ice mass loss for the region is dominated by areas affected by eastern-Peninsula ice shelf losses in the past 20 years. Little if any of the mass loss is compensated by increased snowfall in the northwestern or far northern areas. We combined satellite stereo-image DEM differencing and ICESat-derived along-track elevation changes to measure ice mass loss for the Antarctic Peninsula north of 66°S between 2001-2010, focusing on the ICESat-1 period of operation (2003-2009). This mapping includes all ice drainages affected by recent ice shelf loss in the northeastern Peninsula (Prince Gustav, Larsen Inlet, Larsen A, and Larsen B) as well as James Ross Island, Vega Island, Anvers Island, Brabant Island and the adjacent west-flowing glaciers. Polaris Glacier (feeding the Larsen Inlet, which collapsed in 1986) is an exception, and may have stabilized. Our method uses ASTER and SPOT-5 stereo-image DEMs to determine dh/dt for elevations below 800 m; at higher elevations ICESat along-track elevation differencing is used. To adjust along-track path offsets between its 2003-2009 campaigns, we use a recent DEM of the Peninsula to establish and correct for cross-track slope (Cook et al., 2012, doi:10.5194/essdd-5-365-2012; http://nsidc.org/data/nsidc-0516.html) . We reduce the effect of possible seasonal variations in elevation by using only integer-year repeats of the ICESat tracks for comparison. Mass losses are dominated by the major glaciers that had flowed into the Prince Gustav (Boydell, Sjorgren, Röhss), Larsen A (Edgeworth, Bombardier, Dinsmoor, Drygalski), and Larsen B (Hektoria, Jorum, and Crane) embayments. The pattern of mass loss emphasizes the significant and multi-decadal response to ice shelf loss. Areas with shelf losses occurring 30 to 100s of years ago seem to be relatively stable or losing mass only slowly (western glaciers, northernmost areas). The remnant of the Larsen B, Scar Inlet Ice Shelf, shows signs of imminent break-up, and its feeder glaciers (Flask and Leppard) are already increasing in speed as the ice shelf remnant decreases in area.

  9. Pulse X-ray device for stereo imaging and few-projection tomography of explosive and fast processes

    NASA Astrophysics Data System (ADS)

    Palchikov, E. I.; Dolgikh, A. V.; Klypin, V. V.; Krasnikov, I. Y.; Ryabchun, A. M.

    2017-10-01

    This paper describes the operation principles and design features of a device for single-pulse X-raying of explosive and high-speed processes, developed on the basis of a Tesla transformer with a lumped secondary capacitor bank. The circuit with the lumped capacitor bank allows transferring a greater amount of energy to the discharge circuit than a Marx surge generator, for more effective operation with remote X-ray tubes connected by coaxial cables. Equipped with multiple X-ray tubes, the device provides simultaneous X-raying of extended or spaced objects, stereo imaging, or few-projection tomography.

  10. 3D Observations techniques for the solar corona

    NASA Astrophysics Data System (ADS)

    Portier-Fozzani, F.; Papadopoulo, T.; Fermin, I.; Bijaoui, A.; Stereo/Secchi 3D Team; et al.

    In this talk, we present a review of the different 3D techniques for observations of the solar corona made by EUV imagers (such as SOHO/EIT and STEREO/SECCHI) and by coronagraphs (SOHO/LASCO and STEREO/SECCHI). Tomographic reconstructions need magnetic extrapolation to constrain the model (classical triangle mesh reconstruction, or the more evolved pixon method). The other approach to 3D reconstruction is stereovision. Stereoscopic techniques must be built in a specific way to take into account the optically thin medium of the solar corona, which makes most classical stereo methods not directly applicable. To improve such methods we need to consider how an image is described in computer vision: an image is not only a set of intensities; a description in terms of sub-objects is needed for structure extraction and matching. We describe optical flow methods to follow the structures, and decomposition into sub-areas depending on the solar cycle. After recalling results obtained with geometric loop reconstructions and their consequences for twist measurement and helicity evaluation, we describe how pixel-based and conceptual reconstruction can be combined for stereovision. We can then include epipolar geometry and the Multiscale Vision Model (MVM) to enhance the reconstruction. These concepts are under development for STEREO/SECCHI.

  11. A stereo-vision hazard-detection algorithm to increase planetary lander autonomy

    NASA Astrophysics Data System (ADS)

    Woicke, Svenja; Mooij, Erwin

    2016-05-01

    For future landings on any celestial body, increasing lander autonomy as well as decreasing risk are primary objectives. Both risk reduction and an increase in autonomy can be achieved by including hazard detection and avoidance in the guidance, navigation, and control loop. One of the main challenges in hazard detection and avoidance is the reconstruction of accurate elevation models, as well as slope and roughness maps. Multiple methods for acquiring the inputs for hazard maps are available. The main distinction can be made between active and passive methods. Passive methods (cameras) have budgetary advantages compared to active sensors (radar, light detection and ranging). However, it is necessary to prove that these methods deliver sufficiently good maps. Therefore, this paper discusses hazard detection using stereo vision. To facilitate a successful landing, no more than 1% wrong detections (hazards that are not identified) are allowed. Based on a sensitivity analysis, it was found that a stereo set-up with a baseline of ≤ 2 m is feasible at altitudes of ≤ 200 m, yielding false positives at a rate of less than 1%. It was thus shown that stereo-based hazard detection is an effective means of decreasing the landing risk and increasing lander autonomy. In conclusion, the proposed algorithm is a promising candidate for future landers.
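
    Once a stereo-derived elevation grid exists, the hazard decision reduces to thresholding local slope and roughness. The toy NumPy sketch below fits a plane per window and flags cells exceeding the limits; the window size and thresholds are illustrative, not the paper's values.

    ```python
    # Toy slope/roughness hazard map from a stereo-derived elevation grid;
    # window size and thresholds are illustrative, not the paper's values.
    import numpy as np

    def hazard_map(elev, cell=1.0, max_slope_deg=10.0, max_rough=0.2, win=5):
        h, w = elev.shape
        hazard = np.zeros((h, w), dtype=bool)
        half = win // 2
        ys, xs = np.mgrid[0:win, 0:win] * cell
        A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(win * win)])
        for i in range(half, h - half):
            for j in range(half, w - half):
                patch = elev[i - half:i + half + 1, j - half:j + half + 1]
                # Fit a plane z = a*x + b*y + c over the window.
                coef, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
                slope = np.degrees(np.arctan(np.hypot(coef[0], coef[1])))
                rough = np.abs(patch.ravel() - A @ coef).max()  # plane residual
                hazard[i, j] = (slope > max_slope_deg) or (rough > max_rough)
        return hazard
    ```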

  12. Stereo camera based virtual cane system with identifiable distance tactile feedback for the blind.

    PubMed

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-06-13

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and a 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. For the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is perfectly identifiable to the blind.
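
    For a rectified stereo pair, the range such a system reports reduces to the standard depth-from-disparity relation. The sketch below is a generic illustration of that relation, not the VIDA implementation; the function name and numbers are hypothetical.

    ```python
    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        """Range of an object from a rectified stereo pair: Z = f * B / d."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite range")
        return focal_px * baseline_m / disparity_px

    # Example: f = 700 px, B = 0.12 m, d = 35 px  ->  Z = 2.4 m
    print(depth_from_disparity(35, 700, 0.12))
    ```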

  13. Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

    PubMed Central

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-01-01

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and a 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. For the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is perfectly identifiable to the blind. PMID:24932864

  14. Effects of blurring and vertical misalignment on visual fatigue of stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Baek, Sangwook; Lee, Chulhee

    2015-03-01

    In this paper, we investigate two error issues in stereo images that may produce visual fatigue. When two cameras are used to produce 3D video sequences, vertical misalignment can be a problem. Although this problem may not occur in professionally produced 3D programs, it is still a major issue in many low-cost 3D programs. Recently, efforts have been made to produce 3D video programs using smartphones or tablets, which may introduce the vertical alignment problem. Also, in 2D-to-3D conversion techniques, the simulated frame may suffer blur effects, which can also introduce visual fatigue in 3D programs. To investigate the relationship between these two errors (vertical misalignment and blurring in one image), we performed a subjective test using simulated 3D video sequences with various degrees of vertical misalignment and blurring in one of the stereo images. We present some analyses along with objective models to predict the degree of visual fatigue from vertical misalignment and blurring.
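
    As a rough illustration of how such test stimuli could be generated, the sketch below applies a vertical shift and a Gaussian blur to the right view of a stereo pair. The magnitudes and the use of OpenCV are assumptions, not the authors' stimulus-generation protocol.

    ```python
    import cv2
    import numpy as np

    def degrade_right_view(right, v_shift_px=4, blur_sigma=2.0):
        """Simulate the two studied errors on the right-eye image.

        v_shift_px : vertical misalignment in pixels (wraps at the border)
        blur_sigma : Gaussian blur sigma, mimicking 2D-to-3D conversion blur
        """
        shifted = np.roll(right, v_shift_px, axis=0)
        return cv2.GaussianBlur(shifted, (0, 0), blur_sigma)
    ```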

  15. A hybrid multiview stereo algorithm for modeling urban scenes.

    PubMed

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep

    2013-01-01

    We present an original multiview stereo reconstruction algorithm which allows the 3D modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first of segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model, and second of sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded in an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.

  16. Stereo matching algorithm based on double components model

    NASA Astrophysics Data System (ADS)

    Zhou, Xiao; Ou, Kejun; Zhao, Jianxin; Mou, Xingang

    2018-03-01

    Tiny wires are a great threat to the safety of UAV flight: they occupy only a few pixels isolated far from the background, while most existing stereo matching methods require a support region of a certain area to improve robustness, or assume depth dependence between neighboring pixels to meet the requirements of global or semi-global optimization. As a result, there will be false alarms or even failures when images contain tiny wires. A new stereo matching algorithm based on a double-component model is proposed in this paper. According to texture type, the input image is decomposed into two independent component images: one contains only the sparse wire texture and the other contains all remaining parts. Different matching schemes are adopted for each pair of component images. Experiments proved that the algorithm can effectively compute the depth image of the complex scenes encountered by a patrol UAV, detecting tiny wires as well as large objects. Compared with current mainstream methods it has obvious advantages.
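
    The abstract does not state which operator performs the decomposition. As a plausible stand-in, a morphological top-hat keeps structures thinner than its kernel (such as wires a few pixels wide), and the remainder holds everything else; the sketch below, with an assumed 9x9 kernel, is illustrative only.

    ```python
    import cv2

    def decompose_wires(gray):
        """Split a grayscale image into a thin-structure component and the rest.

        The top-hat keeps *bright* thin structures; for dark wires against a
        bright sky, cv2.MORPH_BLACKHAT would be the analogous choice.
        """
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
        thin = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)  # sparse wires
        rest = cv2.subtract(gray, thin)                          # remaining parts
        return thin, rest
    ```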

  17. VERDEX: A virtual environment demonstrator for remote driving applications

    NASA Technical Reports Server (NTRS)

    Stone, Robert J.

    1991-01-01

    One of the key areas of the National Advanced Robotics Centre's enabling technologies research program is that of the human system interface, phase 1 of which started in July 1989 and is currently addressing the potential of virtual environments to permit intuitive and natural interactions between a human operator and a remote robotic vehicle. The aim of the first 12 months of this program (to September, 1990) is to develop a virtual human-interface demonstrator for use later as a test bed for human factors experimentation. This presentation will describe the current state of development of the test bed, and will outline some human factors issues and problems for more general discussion. In brief, the virtual telepresence system for remote driving has been designed to take the following form. The human operator will be provided with a helmet-mounted stereo display assembly, facilities for speech recognition and synthesis (using the Marconi Macrospeak system), and a VPL DataGlove Model 2 unit. The vehicle to be used for the purposes of remote driving is a Cybermotion Navmaster K2A system, which will be equipped with a stereo camera and microphone pair, mounted on a motorized high-speed pan-and-tilt head incorporating a closed-loop laser ranging sensor for camera convergence control (currently under contractual development). It will be possible to relay information to and from the vehicle and sensory system via an umbilical or RF link. The aim is to develop an interactive audio-visual display system capable of presenting combined stereo TV pictures and virtual graphics windows, the latter featuring control representations appropriate for vehicle driving and interaction using a graphical 'hand,' slaved to the flex and tracking sensors of the DataGlove and an additional helmet-mounted Polhemus IsoTrack sensor. Developments planned for the virtual environment test bed include transfer of operator control between remote driving and remote manipulation, dexterous end effector integration, virtual force and tactile sensing (also the focus of a current ARRL contract, initially employing a 14-pneumatic bladder glove attachment), and sensor-driven world modeling for total virtual environment generation and operator-assistance in remote scene interrogation.

  18. The Europa Imaging System (EIS): High-Resolution, 3-D Insight into Europa's Geology, Ice Shell, and Potential for Current Activity

    NASA Astrophysics Data System (ADS)

    Turtle, E. P.; McEwen, A. S.; Collins, G. C.; Fletcher, L. N.; Hansen, C. J.; Hayes, A.; Hurford, T., Jr.; Kirk, R. L.; Barr, A.; Nimmo, F.; Patterson, G.; Quick, L. C.; Soderblom, J. M.; Thomas, N.

    2015-12-01

    The Europa Imaging System will transform our understanding of Europa through global decameter-scale coverage, three-dimensional maps, and unprecedented meter-scale imaging. EIS combines narrow-angle and wide-angle cameras (NAC and WAC) designed to address high-priority Europa science and reconnaissance goals. It will: (A) Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar; (B) Constrain formation processes of surface features and the potential for current activity by characterizing endogenic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure, and by searching for evidence of recent activity, including potential plumes; and (C) Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. The NAC provides very high-resolution, stereo reconnaissance, generating 2-km-wide swaths at 0.5-m pixel scale from 50-km altitude, and uses a gimbal to enable independent targeting. NAC observations also include: near-global (>95%) mapping of Europa at ≤50-m pixel scale (to date, only ~14% of Europa has been imaged at ≤500 m/pixel, with best pixel scale 6 m); regional and high-resolution stereo imaging at <1-m/pixel; and high-phase-angle observations for plume searches. The WAC is designed to acquire pushbroom stereo swaths along flyby ground-tracks, generating digital topographic models with 32-m spatial scale and 4-m vertical precision from 50-km altitude. These data support characterization of cross-track clutter for radar sounding. The WAC also performs pushbroom color imaging with 6 broadband filters (350-1050 nm) to map surface units and correlations with geologic features and topography. EIS will provide comprehensive data sets essential to fulfilling the goal of exploring Europa to investigate its habitability and perform collaborative science with other investigations, including cartographic and geologic maps, regional and high-resolution digital topography, GIS products, color and photometric data products, a geodetic control network tied to radar altimetry, and a database of plume-search observations.

  19. Differential responses in dorsal visual cortex to motion and disparity depth cues

    PubMed Central

    Arnoldussen, David M.; Goossens, Jeroen; van den Berg, Albert V.

    2013-01-01

    We investigated how interactions between monocular motion parallax and binocular cues to depth vary in human motion areas for wide-field visual motion stimuli (110 × 100°). We used fMRI with an extensive 2 × 3 × 2 factorial blocked design in which we combined two types of self-motion (translational motion and translational + rotational motion), with three categories of motion inflicted by the degree of noise (self-motion, distorted self-motion, and multiple object-motion), and two different view modes of the flow patterns (stereo and synoptic viewing). Interactions between disparity and motion category revealed distinct contributions to self- and object-motion processing in 3D. For cortical areas V6 and CSv, but not the anterior part of MT+ with bilateral visual responsiveness (MT+/b), we found a disparity-dependent effect of rotational flow and noise: When self-motion perception was degraded by adding rotational flow and moderate levels of noise, the BOLD responses were reduced compared with translational self-motion alone, but this reduction was cancelled by adding stereo information which also rescued the subject's self-motion percept. At high noise levels, when the self-motion percept gave way to a swarm of moving objects, the BOLD signal strongly increased compared to self-motion in areas MT+/b and V6, but only for stereo in the latter. BOLD response did not increase for either view mode in CSv. These different response patterns indicate different contributions of areas V6, MT+/b, and CSv to the processing of self-motion perception and the processing of multiple independent motions. PMID:24339808

  20. Study of high-definition and stereoscopic head-aimed vision for improved teleoperation of an unmanned ground vehicle

    NASA Astrophysics Data System (ADS)

    Tyczka, Dale R.; Wright, Robert; Janiszewski, Brian; Chatten, Martha Jane; Bowen, Thomas A.; Skibba, Brian

    2012-06-01

    Nearly all explosive ordnance disposal robots in use today employ monoscopic standard-definition video cameras to relay live imagery from the robot to the operator. With this approach, operators must rely on shadows and other monoscopic depth cues in order to judge distances and object depths. Alternatively, they can contact an object with the robot's manipulator to determine its position, but that approach carries with it the risk of detonation from unintentionally disturbing the target or nearby objects. We recently completed a study in which high-definition (HD) and stereoscopic video cameras were used in addition to conventional standard-definition (SD) cameras in order to determine if higher resolutions and/or stereoscopic depth cues improve operators' overall performance of various unmanned ground vehicle (UGV) tasks. We also studied the effect that the different vision modes had on operator comfort. A total of six different head-aimed vision modes were used including normal-separation HD stereo, SD stereo, "micro" (reduced separation) SD stereo, HD mono, and SD mono (two types). In general, the study results support the expectation that higher resolution and stereoscopic vision aid UGV teleoperation, but the degree of improvement was found to depend on the specific task being performed; certain tasks derived notably more benefit from improved depth perception than others. This effort was sponsored by the Joint Ground Robotics Enterprise under Robotics Technology Consortium Agreement #69-200902 T01. Technical management was provided by the U.S. Air Force Research Laboratory's Robotics Research and Development Group at Tyndall AFB, Florida.

  1. System of Programmed Modules for Measuring Photographs with a Gamma-Telescope

    NASA Technical Reports Server (NTRS)

    Averin, S. A.; Veselova, G. V.; Navasardyan, G. V.

    1978-01-01

    Physical experiments using tracking cameras have produced hundreds of thousands of stereo photographs of events. To process such a large volume of information, automatic and semiautomatic measuring systems are required. At the Institute of Space Research of the Academy of Sciences of the USSR, a system for processing film information from the spark gamma-telescope was developed. The system is based on a BPS-75 projector working in line with the Elektronika 1001 minicomputer. This report describes the system and discusses the various computer programs available to the operators.

  2. Gravity Anomalies

    NASA Image and Video Library

    2015-04-15

    Analysis of radio tracking data has enabled maps of the gravity field of Mercury to be derived. In this image, Mercury's gravity anomalies are depicted in colors, overlain on a mosaic obtained by MESSENGER's Mercury Dual Imaging System and illuminated with a shape model determined from stereo-photoclinometry. Red tones indicate mass concentrations, centered on the Caloris basin (center) and the Sobkou region (right limb). Such large-scale gravitational anomalies are signatures of subsurface structure and evolution. The north pole is near the top of the sunlit area in this view. http://photojournal.jpl.nasa.gov/catalog/PIA19285

  3. A high resolution and high speed 3D imaging system and its application on ATR

    NASA Astrophysics Data System (ADS)

    Lu, Thomas T.; Chao, Tien-Hsin

    2006-04-01

    The paper presents an advanced 3D imaging system based on a combination of stereo vision and light projection methods. A single digital camera is used to take only one shot of the object and reconstruct its 3D model. The stereo vision is achieved by employing a prism-and-mirror setup to split the views and combine them side by side in the camera. The advantages of this setup are its simple system architecture, easy synchronization, fast 3D imaging speed, and high accuracy. The 3D imaging algorithms and potential applications are discussed. For ATR applications, it is critically important to extract maximum information about potential targets and to separate the targets from the background and clutter noise. The added dimension of a 3D model provides additional features such as the surface profile and range information of the target. It is capable of removing false shadows from camouflage and revealing the 3D profile of the object. It also provides arbitrary viewing angles and distances for training the filter bank for invariant ATR. The system architecture can be scaled to capture large objects and to perform area 3D modeling onboard a UAV.
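
    A minimal sketch of the processing such a prism-and-mirror setup implies: the single frame is split into left and right halves and fed to an off-the-shelf block matcher. The parameters and the use of OpenCV's StereoBM are assumptions for illustration, not the system described in the paper.

    ```python
    import cv2

    def disparity_from_single_frame(frame):
        """Treat the two halves of a prism-split frame as a stereo pair."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        half = gray.shape[1] // 2
        left, right = gray[:, :half], gray[:, half:2 * half]

        # Block matching; numDisparities must be a multiple of 16.
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        return matcher.compute(left, right)  # fixed-point disparity (x16)
    ```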

  4. Change detection on LOD 2 building models with very high resolution spaceborne stereo imagery

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun

    2014-10-01

    Due to the fast development of the urban environment, the need for efficient maintenance and updating of 3D building models is ever increasing. Change detection is an essential step to spot changed areas for data (map/3D model) updating and urban monitoring. Traditional methods based on 2D images are no longer suitable for change detection at building scale, owing to the increased spectral variability of building roofs and the larger perspective distortion of very high resolution (VHR) imagery. Change detection in 3D is increasingly being investigated using airborne laser scanning data or matched Digital Surface Models (DSM), but few studies have been conducted on change detection against 3D city models with VHR images, which is more informative but also more complicated. This is due to the fact that 3D models are abstracted geometric representations of the urban reality, while VHR images record everything. In this paper, a novel method is proposed to detect changes directly on LOD (Level of Detail) 2 building models with VHR spaceborne stereo images from a different date, with particular focus on addressing the special characteristics of the 3D models. In the first step, the 3D building models are projected onto a raster grid, encoded with building objects, terrain objects, and planar faces; the DSM is extracted from the stereo imagery by hierarchical semi-global matching (SGM). In the second step, a multi-channel change indicator is extracted between the 3D models and the stereo images, considering the inherent geometric consistency (IGC), height difference, and texture similarity for each planar face. Each channel of the indicator is then clustered with a Self-Organizing Map (SOM), with "change", "non-change" and "uncertain change" statuses labeled through a voting strategy. The "uncertain changes" are then resolved with a Markov Random Field (MRF) analysis considering the geometric relationships between faces. In the third step, buildings are extracted by combining the multispectral images and the DSM using morphological operators, and new buildings are determined by excluding the buildings verified as unchanged in the second step. Both a synthetic experiment with WorldView-2 stereo imagery and a real experiment with IKONOS stereo imagery are carried out to demonstrate the effectiveness of the proposed method. It is shown that the proposed method can be applied as an effective way to monitor building changes, as well as to update 3D models from one epoch to another.

  5. Multispectral Resource Sampler - An experimental satellite sensor for the mid-1980s

    NASA Technical Reports Server (NTRS)

    Schnetzler, C. C.; Thompson, L. L.

    1979-01-01

    An experimental pushbroom scan sensor, the Multispectral Resource Sampler (MRS), being developed by NASA for a future earth orbiting flight is presented. This sensor will provide new earth survey capabilities beyond those of current sensor systems, with a ground resolution of 15 m over a swath width of 15 km in four bands. The four arrays are aligned on a common focal surface requiring no beamsplitters, thus causing a spatial separation on the ground which requires computer processing to register the bands. Along track pointing permits stereo coverage at variable base/height ratios and atmospheric correction experiments, while across track pointing will provide repeat coverage, from a Landsat-type orbit, of every 1 to 3 days. The MRS can be used for experiments in crop discrimination and status, rock discrimination, land use classification, and forestry.

  6. Vision-based vehicle detection and tracking algorithm design

    NASA Astrophysics Data System (ADS)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.

  7. Dynamic edge warping - An experimental system for recovering disparity maps in weakly constrained systems

    NASA Technical Reports Server (NTRS)

    Boyer, K. L.; Wuescher, D. M.; Sarkar, S.

    1991-01-01

    Dynamic edge warping (DEW), a technique for recovering reasonably accurate disparity maps from uncalibrated stereo image pairs, is presented. No precise knowledge of the epipolar camera geometry is assumed. The technique is embedded in a system including structural stereopsis on the front end and robust estimation in digital photogrammetry on the other for the purpose of self-calibrating stereo image pairs. Once the relative camera orientation is known, the epipolar geometry is computed and the system can use this information to refine its representation of the object space. Such a system will find application in the autonomous extraction of terrain maps from stereo aerial photographs, for which camera position and orientation are unknown a priori, and for online autonomous calibration maintenance for robotic vision applications, in which the cameras are subject to vibration and other physical disturbances after calibration. This work thus forms a component of an intelligent system that begins with a pair of images and, having only vague knowledge of the conditions under which they were acquired, produces an accurate, dense, relative depth map. The resulting disparity map can also be used directly in some high-level applications involving qualitative scene analysis, spatial reasoning, and perceptual organization of the object space. The system as a whole substitutes high-level information and constraints for precise geometric knowledge in driving and constraining the early correspondence process.

  8. Digital stereo photogrammetry for grain-scale monitoring of fluvial surfaces: Error evaluation and workflow optimisation

    NASA Astrophysics Data System (ADS)

    Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy

    2015-03-01

    Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition; however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This is valid for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post-processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground-truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications, and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open-access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor for DEM accuracy. We show that a careful selection of the camera-to-object and baseline distances reduces errors in occluded areas and that realistic ground truths help to quantify those errors.

  9. SU-E-J-184: Stereo Time-Of-Flight System for Patient Positioning in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wentz, T; Gilles, M; Visvikis, D

    2014-06-01

    Purpose: The objective of this work is to test the advantage of using the surface acquired by two stereo time-of-flight (ToF) cameras, in comparison with the use of one camera only, for patient positioning in radiotherapy. Methods: A first step consisted of validating the use of a stereo ToF-camera system for positioning management of a phantom mounted on a linear actuator producing very accurate and repeatable displacements. The displacements between two positions were computed from the surface point cloud acquired by either one or two cameras using an iterative closest point algorithm. A second step consisted of determining the displacements on patient datasets, with two cameras fixed to the ceiling of the radiotherapy room. Measurements were done first on a voluntary subject with fixed translations, then on patients during the normal clinical radiotherapy routine. Results: The phantom tests showed a major improvement on the lateral and depth axes for motions above 10 mm when using the stereo system instead of a single camera (Fig. 1). Patient measurements validated these results, with mean differences between real and measured displacements in the depth direction of 1.5 mm when using one camera and 0.9 mm when using two cameras (Fig. 2). In the lateral direction, a mean difference of 1 mm was obtained by the stereo system instead of 3.2 mm. Along the longitudinal axis, mean differences of 5.4 and 3.4 mm with one and two cameras respectively were noticed, but these measurements were still inaccurate and globally underestimated in this direction, as in the literature. Similar results were also found for patient subjects, with mean difference reductions of 35%, 7%, and 25% for the lateral, depth, and longitudinal displacements with the stereo system. Conclusion: The addition of a second ToF camera to determine patient displacement strongly improved patient repositioning results and therefore ensures better radiation delivery.
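
    An iterative closest point (ICP) loop alternates correspondence search with a closed-form rigid alignment. The sketch below shows only that closed-form step (the SVD-based Kabsch solution) for point sets whose correspondences are already known; it is a generic illustration, not the authors' implementation.

    ```python
    import numpy as np

    def rigid_displacement(P, Q):
        """Best-fit rotation R and translation t mapping points P onto Q.

        P, Q : (N, 3) arrays of corresponding surface points from the
               two acquisitions. Returns (R, t) with Q ~ P @ R.T + t.
        """
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        # Guard against a reflection in the least-squares solution.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        return R, t
    ```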

  10. Autonomous Rover Traverse and Precise Arm Placement on Remotely Designated Targets

    NASA Technical Reports Server (NTRS)

    Nesnas, Issa A.; Pivtoraiko, Mihail N.; Kelly, Alonzo; Fleder, Michael

    2012-01-01

    This software controls a rover platform to traverse rocky terrain autonomously, plan paths, and avoid obstacles using its stereo hazard and navigation cameras. It does so while continuously tracking a target of interest selected from 10-20 m away. The rover drives and tracks the target until it reaches the vicinity of the target. The rover then positions itself to approach the target, deploys its robotic arm, and places the end-effector instrument on the designated target to within 2-3 cm of the originally selected target. This software features continuous navigation in a fairly rocky field in an outdoor environment and the ability to enable the rover to avoid large rocks and traverse over smaller ones. Using point-and-click mouse commands, a scientist designates targets in the initial imagery acquired from the rover's mast cameras. The navigation software uses stereo imaging, traversability analysis, path planning, trajectory generation, and trajectory execution. It also includes visual tracking of a designated target selected from 10 m away while continuously navigating the rocky terrain. Improvements in this design include steering while driving, which uses continuous-curvature paths. There are also several improvements to the traversability analyzer, including improved data fusion of traversability maps that result from pose-estimation uncertainties, dealing with boundary effects to enable tighter maneuvers, and handling a wider range of obstacles. This work advances what was previously developed and integrated on the Mars Exploration Rovers by using algorithms capable of traversing more rock-dense terrains, enabling tight, thread-the-needle maneuvers. These algorithms were integrated on the newly refurbished Athena Mars research rover and were fielded in the JPL Mars Yard. Forty-three runs were conducted with targets at distances ranging from 5 to 15 m, and a success rate of 93% was achieved for placement of the instrument within 2-3 cm of the target.

  11. Geometrical distortion calibration of the stereo camera for the BepiColombo mission to Mercury

    NASA Astrophysics Data System (ADS)

    Simioni, Emanuele; Da Deppo, Vania; Re, Cristina; Naletto, Giampiero; Martellato, Elena; Borrelli, Donato; Dami, Michele; Aroldi, Gianluca; Ficai Veltroni, Iacopo; Cremonese, Gabriele

    2016-07-01

    The ESA-JAXA mission BepiColombo, to be launched in 2018, is devoted to the observation of Mercury, the innermost planet of the Solar System. SIMBIOSYS is its remote sensing suite, which consists of three instruments: the High Resolution Imaging Channel (HRIC), the Visible and Infrared Hyperspectral Imager (VIHI), and the Stereo Imaging Channel (STC). The latter will provide the global three-dimensional reconstruction of the Mercury surface, and it represents the first push-frame stereo camera on board a space satellite. Based on a new telescope design, STC combines the advantages of a compact single-detector camera with the convenience of a double-direction acquisition system; this solution minimizes mass and volume while performing push-frame imaging acquisition. The shared camera sensor is divided into six portions: four are covered with suitable filters; the other two, one looking forward and one backward with respect to the nadir direction, are covered with a panchromatic filter, supplying stereo image pairs of the planet surface. The main STC scientific requirements are to reconstruct the Mercury surface in 3D with a vertical accuracy better than 80 m and to perform global imaging with a grid size of 65 m along-track at the periherm. The scope of this work is to present the on-ground geometric calibration pipeline for this original instrument. The selected STC off-axis configuration required the development of a new distortion-map model. Additional considerations concern the detector, a Si-PIN hybrid CMOS, which is characterized by high fixed-pattern noise. This had a great impact on the pre-calibration phases, requiring an uncommon approach to the definition of the spot centroids in the distortion calibration process. This work presents the results obtained during the calibration of STC concerning the distortion analysis at three different temperatures. These results are then used to define the corresponding distortion model of the camera.
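
    One common remedy for high fixed-pattern noise when locating calibration spots is to subtract a per-pixel offset (dark) frame before computing an intensity-weighted centroid. The sketch below assumes such a dark frame is available; the names and threshold are hypothetical, and this is not necessarily the approach used for STC.

    ```python
    import numpy as np

    def spot_centroid(frame, dark_frame, threshold=5.0):
        """Centroid of a single calibration spot after fixed-pattern-noise removal."""
        img = frame.astype(float) - dark_frame   # remove per-pixel offsets
        img[img < threshold] = 0.0               # suppress residual noise
        total = img.sum()
        if total == 0:
            raise ValueError("no spot above threshold")
        ys, xs = np.indices(img.shape)
        return (xs * img).sum() / total, (ys * img).sum() / total
    ```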

  12. AATSR Based Volcanic Ash Plume Top Height Estimation

    NASA Astrophysics Data System (ADS)

    Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Sundstrom, Anu-Maija; Rodriguez, Edith; de Leeuw, Gerrit

    2015-11-01

    The AATSR Correlation Method (ACM) height estimation algorithm is presented. The algorithm uses Advanced Along Track Scanning Radiometer (AATSR) satellite data to detect volcanic ash plumes and to estimate the plume top height. The height estimate is based on the stereo-viewing capability of the AATSR instrument, which allows determination of the parallax between the satellite's nadir and 55° forward views, and thus the corresponding height. AATSR provides an advantage compared to other stereo-view satellite instruments: with AATSR it is possible to detect ash plumes using the brightness temperature difference between thermal infrared (TIR) channels centered at 11 and 12 μm. The automatic ash detection makes the algorithm efficient in processing large quantities of data: the height estimate is calculated only for the ash-flagged pixels. Besides ash plumes, the algorithm can be applied to any elevated feature with sufficient contrast to the background, such as smoke and dust plumes and clouds. The ACM algorithm can also be applied to the Sea and Land Surface Temperature Radiometer (SLSTR), scheduled for launch at the end of 2015.
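
    To first order, the parallax-to-height conversion for a nadir/55° forward view pair reduces to the relation sketched below (a flat-ground approximation; the real geometry also involves Earth curvature and satellite motion during the along-track revisit).

    ```python
    import math

    def feature_height(parallax_m, forward_zenith_deg=55.0):
        """Height of an elevated feature from its along-track parallax.

        A feature at height h is displaced by h * tan(theta) in the
        oblique view relative to nadir, so h = parallax / tan(theta).
        """
        return parallax_m / math.tan(math.radians(forward_zenith_deg))

    # Example: a 12 km measured parallax -> roughly 8.4 km plume top.
    print(feature_height(12000.0))
    ```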

  13. Glacier Volume Change Estimation Using Time Series of Improved Aster Dems

    NASA Astrophysics Data System (ADS)

    Girod, Luc; Nuth, Christopher; Kääb, Andreas

    2016-06-01

    Volume change data is critical to the understanding of glacier response to climate change. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) system aboard the Terra (EOS AM-1) satellite has been a unique source of systematic stereoscopic images covering the whole globe at 15 m resolution and at a consistent quality for over 15 years. While satellite stereo sensors with significantly improved radiometric and spatial resolution are available today, the potential of ASTER data lies in its long, consistent time series, which is unrivaled, though not fully exploited for change analysis due to a lack of data accuracy and precision. Here, we developed an improved method for ASTER DEM generation and implemented it in the open-source photogrammetric library and software suite MicMac. The method relies on the computation of a rational polynomial coefficients (RPC) model and the detection and correction of cross-track sensor jitter in order to compute DEMs. ASTER data are strongly affected by attitude jitter, mainly at wavelengths of approximately 4 km and 30 km, and improving the generation of ASTER DEMs requires removal of this effect. Our sensor modeling does not require ground control points and thus potentially allows for the automatic processing of large data volumes. As a proof of concept, we chose a set of glaciers with reference DEMs available to assess the quality of our measurements. We use time series of ASTER scenes from which we extracted DEMs with a ground sampling distance of 15 m. Our method directly measures and accounts for the cross-track component of jitter, so the resulting DEMs are not contaminated by this process. Since the along-track component of jitter has the same direction as the stereo parallaxes, the two cannot be separated, and the extracted elevations are thus contaminated by along-track jitter. Initial tests reveal no clear relation between the cross-track and along-track components, so the latter does not seem to be easily modeled analytically from the former. We thus remove the remaining along-track jitter effects in the DEMs statistically through temporal DEM stacks to finally compute the glacier volume changes over time. Our method yields cleaner and spatially more complete elevation data, which also proved to be more in accordance with reference DEMs, compared to NASA's AST14DMO DEM standard products. The quality of the demonstrated measurements promises to further unlock the underused potential of ASTER DEMs for glacier volume change time series on a global scale. The data produced by our method will help to better understand the response of glaciers to climate change and their influence on runoff and sea level.
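
    As a generic illustration of the jitter-correction idea, a sinusoidal cross-track component of assumed wavelength can be fitted by linear least squares and subtracted. This simplified sketch is not the MicMac implementation; the function, the measurement inputs, and the 15 m ground sampling distance are assumptions.

    ```python
    import numpy as np

    def fit_jitter(rows, offsets, wavelength_m, gsd_m=15.0):
        """Least-squares fit of one sinusoidal cross-track jitter component.

        rows         : image row indices with a measured cross-track offset
        offsets      : measured cross-track disparities at those rows [pixels]
        wavelength_m : assumed jitter wavelength (e.g. ~4 km or ~30 km)
        Returns the modeled jitter at those rows, to be subtracted.
        """
        phase = 2 * np.pi * np.asarray(rows, float) * gsd_m / wavelength_m
        A = np.column_stack([np.sin(phase), np.cos(phase), np.ones_like(phase)])
        coeff, *_ = np.linalg.lstsq(A, np.asarray(offsets, float), rcond=None)
        return A @ coeff
    ```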

  14. Validation of a method for real time foot position and orientation tracking with Microsoft Kinect technology for use in virtual reality and treadmill based gait training programs.

    PubMed

    Paolini, Gabriele; Peruzzi, Agnese; Mirelman, Anat; Cereatti, Andrea; Gaukrodger, Stephen; Hausdorff, Jeffrey M; Della Croce, Ugo

    2014-09-01

    The use of virtual reality for the provision of motor-cognitive gait training has been shown to be effective for a variety of patient populations. The interaction between the user and the virtual environment is achieved by tracking the motion of the body parts and replicating it in the virtual environment in real time. In this paper, we present the validation of a novel method for tracking foot position and orientation in real time, based on the Microsoft Kinect technology, to be used for gait training combined with virtual reality. The validation of the motion tracking method was performed by comparing the tracking performance of the new system against a stereo-photogrammetric system used as gold standard. Foot position errors were in the order of a few millimeters (average RMSD from 4.9 to 12.1 mm in the medio-lateral and vertical directions, from 19.4 to 26.5 mm in the anterior-posterior direction); the foot orientation errors were also small (average %RMSD from 5.6% to 8.8% in the medio-lateral and vertical directions, from 15.5% to 18.6% in the anterior-posterior direction). The results suggest that the proposed method can be effectively used to track feet motion in virtual reality and treadmill-based gait training programs.

  15. A low-cost test-bed for real-time landmark tracking

    NASA Astrophysics Data System (ADS)

    Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher

    2007-04-01

    A low-cost vehicle test-bed system was developed to iteratively test, refine, and demonstrate navigation algorithms before attempting to transfer the algorithms to more advanced rover prototypes. The platform used here was a modified radio-controlled (RC) car. A microcontroller board and an onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars Yard, with the algorithms tracking rocks as waypoints. The generated coordinates were used to calculate relative motion and to visually servo toward science targets. A limitation of the current system is serial computing: each additional landmark is tracked in sequence. However, since each landmark is tracked independently, adding targets would not significantly diminish system speed if the system were transferred to appropriate parallel hardware.

  16. Tracking multiple objects is limited only by object spacing, not by speed, time, or capacity.

    PubMed

    Franconeri, S L; Jonathan, S V; Scimeca, J M

    2010-07-01

    In dealing with a dynamic world, people have the ability to maintain selective attention on a subset of moving objects in the environment. Performance in such multiple-object tracking is limited by three primary factors: the number of objects that one can track, the speed at which one can track them, and how close together they can be. We argue that this last limit, of object spacing, is the root cause of all performance constraints in multiple-object tracking. In two experiments, we found that as long as the distribution of object spacing is held constant, tracking performance is unaffected by large changes in object speed and tracking time. These results suggest that, barring object-spacing constraints, people could reliably track an unlimited number of objects as fast as they could track a single object.

  17. Integrated system for point cloud reconstruction and simulated brain shift validation using tracked surgical microscope

    NASA Astrophysics Data System (ADS)

    Yang, Xiaochen; Clements, Logan W.; Luo, Ma; Narasimhan, Saramati; Thompson, Reid C.; Dawant, Benoit M.; Miga, Michael I.

    2017-03-01

    Intra-operative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery (IGS) navigation systems in neurosurgery. A computational model driven by sparse data has been used as a cost-effective method to compensate for cortical surface and volumetric displacements. Stereoscopic microscopes and laser range scanners (LRS) are the two most investigated sparse intra-operative imaging modalities for driving these systems. However, integrating these devices into the clinical workflow to facilitate development and evaluation requires systems that easily permit data acquisition and processing. In this work we present a mock environment developed to acquire stereo images from a tracked operating microscope and to reconstruct 3D point clouds from these images. A reconstruction error of 1 mm is estimated using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached rigid body that facilitates recording of the microscope's position with a commercial optical tracking system as it moves during the procedure. Point clouds, reconstructed under different microscope positions, are registered into the same space in order to compute the feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. Our experimental results report approximately 2 mm average displacement error compared with the optical tracking system. These results demonstrate the practicality of using a tracked stereoscopic microscope as an alternative to LRS to collect sufficient intraoperative information for brain shift correction.

  18. On the use of orientation filters for 3D reconstruction in event-driven stereo vision

    PubMed Central

    Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, thereby increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
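
    A minimal sketch of the orientation cue: a small Gabor bank estimates the dominant edge orientation in a patch around an event, and matching can then be restricted to events with consistent orientations. The kernel parameters below are illustrative, not taken from the paper.

    ```python
    import cv2
    import numpy as np

    def edge_orientation(patch, n_orientations=8):
        """Dominant edge orientation (radians) of an image patch via Gabor filters."""
        thetas = np.arange(n_orientations) * np.pi / n_orientations
        energies = []
        for theta in thetas:
            kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                      lambd=10.0, gamma=0.5, psi=0.0)
            response = cv2.filter2D(patch.astype(np.float32), cv2.CV_32F, kern)
            energies.append(np.abs(response).sum())
        return thetas[int(np.argmax(energies))]
    ```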

  19. Satellite markers: a simple method for ground truth car pose on stereo video

    NASA Astrophysics Data System (ADS)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-04-01

    In the context of advanced safety systems, artificial prediction of the future location of other cars is a must. The remote estimation of car pose, and particularly its heading angle, is key to predicting its future location. Stereo vision systems make it possible to obtain the 3D information of a scene. Ground truth in this specific context is associated with referential information about the depth, shape, and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is a method to generate ground truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system while it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for the 3D ground truth generation. In our case study, we focus on accurate car heading angle estimation of a moving car under realistic imagery. As outcomes, our satellite-marker method provides accurate car pose at frame level, as well as the instantaneous spatial orientation of each camera at frame level.
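
    Once marker-to-image correspondences are available, per-frame car pose can be recovered with a standard perspective-n-point solver. The sketch below uses OpenCV's solvePnP with an assumed axis convention for the heading angle; it illustrates the idea, not the authors' exact pipeline.

    ```python
    import cv2
    import numpy as np

    def car_heading_deg(marker_pts_car, marker_pts_px, K):
        """Heading (yaw) of a car from one calibrated view of its markers.

        marker_pts_car : (N, 3) marker coordinates in the car frame [m], N >= 4
        marker_pts_px  : (N, 2) their pixel positions in the image
        K              : 3x3 camera intrinsic matrix
        """
        ok, rvec, tvec = cv2.solvePnP(marker_pts_car.astype(np.float64),
                                      marker_pts_px.astype(np.float64),
                                      K, None)  # None: no lens distortion
        R, _ = cv2.Rodrigues(rvec)
        fwd = R[:, 0]  # car forward axis (+X by assumption) in camera frame
        return np.degrees(np.arctan2(fwd[0], fwd[2]))  # yaw on the ground plane
    ```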

  20. Past and Future SOHO-Ulysses Quadratures

    NASA Technical Reports Server (NTRS)

    Suess, Steven; Poletto, G.

    2006-01-01

    With the launch of SOHO, it again became possible to carry out quadrature observations. In comparison with earlier observations, the new capabilities of coronal spectroscopy with UVCS and in situ ionization state and composition with Ulysses/SWICS enabled new types of studies. Results from two studies serve as examples: (i) The acceleration profile of wind from small coronal holes. (ii) A high-coronal reconnecting current sheet as the source of high ionization state Fe in a CME at Ulysses. Generally quadrature observations last only for a few days, when Ulysses is within ca. 5 degrees of the limb. This means luck is required for the phenomenon of interest to lie along the radial direction to Ulysses. However, when Ulysses is at high southern latitude in winter 2007 and high northern latitude in winter 2008, there will be unusually favorable configurations for quadrature observations with SOHO and corresponding bracketing limb observations from STEREO A/B. Specifically, Ulysses will be within 5 degrees of the limb from December 2006 to May 2007 and within 10 degrees of the limb from December 2007 to May 2008. These long-lasting quadratures and bracketing STEREO A/B observations overcome the limitations inherent in the short observation intervals of typical quadratures. Furthermore, ionization and charge state measurements like those on Ulysses will also be made on STEREO and these will be essential for identification of CME ejecta - one of the prime objectives for STEREO.

  1. The effect of visual and interaction fidelity on spatial cognition in immersive virtual environments.

    PubMed

    Mania, Katerina; Wooldridge, Dave; Coxon, Matthew; Robinson, Andrew

    2006-01-01

    Accuracy of memory performance per se is an imperfect reflection of the cognitive activity (awareness states) that underlies performance in memory tasks. The aim of this research is to investigate the effect of varied visual and interaction fidelity of immersive virtual environments on memory awareness states. A between groups experiment was carried out to explore the effect of rendering quality on location-based recognition memory for objects and associated states of awareness. The experimental space, consisting of two interconnected rooms, was rendered either flat-shaded or using radiosity rendering. The computer graphics simulations were displayed on a stereo head-tracked Head Mounted Display. Participants completed a recognition memory task after exposure to the experimental space and reported one of four states of awareness following object recognition. These reflected the level of visual mental imagery involved during retrieval, the familiarity of the recollection, and also included guesses. Experimental results revealed variations in the distribution of participants' awareness states across conditions while memory performance failed to reveal any. Interestingly, results revealed a higher proportion of recollections associated with mental imagery in the flat-shaded condition. These findings comply with similar effects revealed in two earlier studies summarized here, which demonstrated that the less "naturalistic" interaction interface or interface of low interaction fidelity provoked a higher proportion of recognitions based on visual mental images.

  2. Evaluation of web-based annotation of ophthalmic images for multicentric clinical trials.

    PubMed

    Chalam, K V; Jain, P; Shah, V A; Shah, Gaurav Y

    2006-06-01

    An Internet browser-based annotation system can be used to identify and describe features in digitalized retinal images, in multicentric clinical trials, in real time. In this web-based annotation system, the user employs a mouse to draw and create annotations on a transparent layer, that encapsulates the observations and interpretations of a specific image. Multiple annotation layers may be overlaid on a single image. These layers may correspond to annotations by different users on the same image or annotations of a temporal sequence of images of a disease process, over a period of time. In addition, geometrical properties of annotated figures may be computed and measured. The annotations are stored in a central repository database on a server, which can be retrieved by multiple users in real time. This system facilitates objective evaluation of digital images and comparison of double-blind readings of digital photographs, with an identifiable audit trail. Annotation of ophthalmic images allowed clinically feasible and useful interpretation to track properties of an area of fundus pathology. This provided an objective method to monitor properties of pathologies over time, an essential component of multicentric clinical trials. The annotation system also allowed users to view stereoscopic images that are stereo pairs. This web-based annotation system is useful and valuable in monitoring patient care, in multicentric clinical trials, telemedicine, teaching and routine clinical settings.

  3. Telerobot local-remote control architecture for space flight program applications

    NASA Technical Reports Server (NTRS)

    Zimmerman, Wayne; Backes, Paul; Steele, Robert; Long, Mark; Bon, Bruce; Beahan, John

    1993-01-01

    The JPL Supervisory Telerobotics (STELER) Laboratory has developed and demonstrated a unique local-remote robot control architecture which enables management of intermittent communication bus latencies and delays such as those expected for ground-remote operation of Space Station robotic systems via the Tracking and Data Relay Satellite System (TDRSS) communication platform. The current work at JPL in this area has focused on enhancing the technologies and transferring the control architecture to hardware and software environments which are more compatible with projected ground and space operational environments. At the local site, the operator updates the remote worksite model using stereo video and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. This capability runs on a single Silicon Graphics Inc. machine. The operator can employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the intended object. The remote site controller, called the Modular Telerobot Task Execution System (MOTES), runs in a multi-processor VME environment and performs the task sequencing, task execution, trajectory generation, closed loop force/torque control, task parameter monitoring, and reflex action. This paper describes the new STELER architecture implementation, and also documents the results of the recent autonomous docking task execution using the local site and MOTES.

  4. 3D road marking reconstruction from street-level calibrated stereo pairs

    NASA Astrophysics Data System (ADS)

    Soheilian, Bahman; Paparoditis, Nicolas; Boldo, Didier

    This paper presents an automatic approach to road marking reconstruction using stereo pairs acquired by a mobile mapping system in a dense urban area. Two types of road markings were studied: zebra crossings (crosswalks) and dashed lines. These two types of road markings consist of strips having known shape and size. These geometric specifications are used to constrain the recognition of strips. In both cases (i.e. zebra crossings and dashed lines), the reconstruction method consists of three main steps. The first step extracts edge points from the left and right images of a stereo pair and computes 3D linked edges using a matching process. The second step comprises a filtering process that uses the known geometric specifications of road marking objects. The goal is to preserve linked edges that can plausibly belong to road markings and to filter others out. The final step uses the remaining linked edges to fit a theoretical model to the data. The method developed has been used for processing a large number of images. Road markings are successfully and precisely reconstructed in dense urban areas under real traffic conditions.

  5. Study on portable optical 3D coordinate measuring system

    NASA Astrophysics Data System (ADS)

    Ren, Tongqun; Zhu, Jigui; Guo, Yinbiao

    2009-05-01

    A portable optical 3D coordinate measuring system based on digital close-range photogrammetry (CRP) technology and binocular stereo vision theory is researched. Three highly stable infrared LEDs are mounted on a hand-held target to provide measurement features and establish the target coordinate system. Field-oriented calibration based on ray intersection is performed for the intersecting binocular measurement system, composed of two cameras, using a reference ruler. The hand-held target, controlled via Bluetooth wireless communication, is moved freely to carry out contact measurements. The position of the ceramic contact ball is accurately pre-calibrated. The coordinates of the target feature points are obtained with the binocular stereo vision model from the stereo image pairs taken by the cameras. Combining radius compensation for the contact ball and residual error correction, the object point can be resolved by transfer of axes using the target coordinate system as an intermediary. This system is suitable for on-field large-scale measurement because of its excellent portability, high precision, wide measuring volume, great adaptability, and satisfying automation. Tests show that the measuring precision is close to ±0.1 mm/m.
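
    Given the calibrated projection matrices of the two cameras, the 3D coordinates of the target LEDs follow from standard two-view triangulation. This minimal sketch uses OpenCV's triangulatePoints as a generic illustration, not the system's own solver.

    ```python
    import cv2
    import numpy as np

    def triangulate(P1, P2, pts1, pts2):
        """3D positions of matched features from a calibrated stereo pair.

        P1, P2     : 3x4 camera projection matrices
        pts1, pts2 : (2, N) matched pixel coordinates of the LED centers
        Returns an (N, 3) array of metric 3D points.
        """
        Xh = cv2.triangulatePoints(P1, P2, pts1.astype(np.float64),
                                   pts2.astype(np.float64))  # homogeneous 4xN
        return (Xh[:3] / Xh[3]).T
    ```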

  6. Fast 3D shape measurements with reduced motion artifacts

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Zuo, Chao; Chen, Qian; Gu, Guohua

    2017-10-01

Fringe projection is an extensively used technique for high speed three-dimensional (3D) measurements of dynamic objects. However, motion often leads to artifacts in reconstructions due to the sequential recording of the set of patterns. In order to reduce the adverse impact of movement, we present a novel high speed 3D scanning technique combining fringe projection and stereo. Firstly, a promising measuring speed is achieved by modifying the traditional aperiodic sinusoidal patterns so that the fringe images can be cast at kilohertz rates with the widely used defocusing strategy. Next, a temporal intensity tracing algorithm is developed to further alleviate the influence of motion by accurately tracing the ideal intensity for stereo matching. Then, a combined cost measure is suggested to robustly estimate the cost for each pixel. In comparison with the traditional method, where the effect of motion is not considered, experimental results show that the reconstruction accuracy for dynamic objects can be improved by an order of magnitude with the proposed method.
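
    The abstract does not spell out the combined cost, but a typical construction blends an intensity term with a census (Hamming) term, in the spirit of AD-Census matching; the window size and weight below are assumptions for illustration.

    import numpy as np

    def census_5x5(img):
        """24-bit census signature per pixel: each neighbour in a 5x5
        window is compared against the centre pixel."""
        sig = np.zeros(img.shape, dtype=np.uint32)
        for dy in range(-2, 3):
            for dx in range(-2, 3):
                if dy == 0 and dx == 0:
                    continue
                shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                sig = (sig << 1) | (shifted < img).astype(np.uint32)
        return sig

    def combined_cost(patch_l, patch_r, cen_l, cen_r, alpha=0.4):
        """Blend mean absolute intensity difference with the Hamming
        distance of census signatures; alpha is a tuning weight."""
        ad = np.abs(patch_l.astype(np.float32)
                    - patch_r.astype(np.float32)).mean()
        hamming = bin(int(cen_l) ^ int(cen_r)).count("1")
        return alpha * ad + (1.0 - alpha) * hamming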

  7. Three-dimensional online surface reconstruction of augmented fluorescence lifetime maps using photometric stereo (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Unger, Jakob; Lagarto, Joao; Phipps, Jennifer; Ma, Dinglong; Bec, Julien; Sorger, Jonathan; Farwell, Gregory; Bold, Richard; Marcu, Laura

    2017-02-01

Multi-Spectral Time-Resolved Fluorescence Spectroscopy (ms-TRFS) can provide label-free real-time feedback on tissue composition and pathology during surgical procedures by resolving the fluorescence decay dynamics of the tissue. Recently, an ms-TRFS system has been developed in our group, allowing for either point-spectroscopy fluorescence lifetime measurements or dynamic raster tissue scanning by merging a 450 nm aiming beam with the pulsed fluorescence excitation light in a single collection fiber. In order to facilitate an augmented real-time display of fluorescence decay parameters, the lifetime values are back-projected onto the white light video. The goal of this study is to develop a 3D real-time surface reconstruction aiming for a comprehensive visualization of the decay parameters and providing enhanced navigation for the surgeon. Using a stereo camera setup, we use a combination of image feature matching and aiming beam stereo segmentation to establish a 3D surface model of the decay parameters. After camera calibration, texture-related features are extracted from both camera images and matched, providing a rough estimation of the surface. During the raster scanning, the rough estimation is successively refined in real time by tracking the aiming beam positions using an advanced segmentation algorithm. The method is evaluated on excised breast tissue specimens, showing high precision and running in real time at approximately 20 frames per second. The proposed method shows promising potential for intraoperative navigation, e.g., tumor margin assessment. Furthermore, it provides the basis for registering the fluorescence lifetime maps to the tissue surface, adapting to possible tissue deformations.

  8. Autocorrelation techniques for soft photogrammetry

    NASA Astrophysics Data System (ADS)

    Yao, Wu

In this thesis, research is carried out on image processing, image matching search strategies, feature types in image matching, and optimal window size in image matching. To make comparisons, the soft photogrammetry package SoftPlotter is used. Two aerial photographs from the Iowa State University campus high flight 94 are scanned into digital format. In order to create a stereo model from them, interior orientation, single-photograph rectification and stereo rectification are performed. Two new image matching methods, multi-method image matching (MMIM) and unsquare window image matching, are developed and compared. MMIM is used to determine the optimal window size in image matching. Twenty-four check points from four different types of ground features are used for checking the results from image matching. Comparison between these four types of ground feature shows that the methods developed here improve both the speed and the precision of image matching. A process called direct transformation is described and compared with the multiple steps in image processing. The results from image processing are consistent with those from SoftPlotter. A modified LAN image header is developed and used to store the information about the stereo model and image matching. A comparison is also made between cross correlation image matching (CCIM), least difference image matching (LDIM) and least squares image matching (LSIM). The quality of image matching in relation to ground features is compared using two methods developed in this study: the coefficient surface for CCIM and the difference surface for LDIM. To reduce the amount of computation in image matching, the best-track searching algorithm, developed in this research, is used instead of the whole-range searching algorithm.
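
    Of the matchers compared, cross correlation image matching (CCIM) is the most standard; a minimal normalized cross-correlation search along one epipolar strip might look like the following (the patch size and search strip are placeholders, not the thesis's parameters).

    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equally sized patches."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    def best_match(left_patch, right_strip):
        """Slide the left patch along a horizontal strip of the right
        image; return the column and score of the correlation peak."""
        h, w = left_patch.shape
        cols = right_strip.shape[1] - w + 1
        scores = [ncc(left_patch, right_strip[:, x:x + w])
                  for x in range(cols)]
        return int(np.argmax(scores)), max(scores)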

  9. Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions

    PubMed Central

    Musleh, Basam; García, Fernando; Otamendi, Javier; Armingol, José Mª; de la Escalera, Arturo

    2010-01-01

    The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle. PMID:22163639

  10. Real-time 3D motion tracking for small animal brain PET

    NASA Astrophysics Data System (ADS)

    Kyme, A. Z.; Zhou, V. W.; Meikle, S. R.; Fulton, R. R.

    2008-05-01

    High-resolution positron emission tomography (PET) imaging of conscious, unrestrained laboratory animals presents many challenges. Some form of motion correction will normally be necessary to avoid motion artefacts in the reconstruction. The aim of the current work was to develop and evaluate a motion tracking system potentially suitable for use in small animal PET. This system is based on the commercially available stereo-optical MicronTracker S60 which we have integrated with a Siemens Focus-220 microPET scanner. We present measured performance limits of the tracker and the technical details of our implementation, including calibration and synchronization of the system. A phantom study demonstrating motion tracking and correction was also performed. The system can be calibrated with sub-millimetre accuracy, and small lightweight markers can be constructed to provide accurate 3D motion data. A marked reduction in motion artefacts was demonstrated in the phantom study. The techniques and results described here represent a step towards a practical method for rigid-body motion correction in small animal PET. There is scope to achieve further improvements in the accuracy of synchronization and pose measurements in future work.

  11. Identifying and tracking pedestrians based on sensor fusion and motion stability predictions.

    PubMed

    Musleh, Basam; García, Fernando; Otamendi, Javier; Armingol, José Maria; de la Escalera, Arturo

    2010-01-01

    The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle.

  12. The Teton-Yellowstone Tornado of 21 July 1987

    NASA Technical Reports Server (NTRS)

    Fujita, T. Theodore

    1989-01-01

The Teton-Yellowstone Tornado, rated F4, crossed the Continental Divide at 3070 m, leaving behind a damage swath 39.2 km long and 2.5 km wide. A detailed damage analysis using stereo-pair and color photos revealed the existence of four spinup swirl marks and 72 microburst outflows inside the damage area. The tornado was spawned by a mesocyclone that formed at the intersection of a mesohigh boundary and a warm front. The parent cloud of the tornado, tracked on eight infrared-temperature maps from GOES East and West, moved at 25 m/s, and the number of cold temperature pixels below -60°C reached a distinct peak during the tornado time. Also identified and tracked are two warm spots enclosed inside the cold anvil cloud. On the basis of their identity and movement, an attempt was made to explain the cause of these spots as being stratospheric cirrus clouds.

  13. The FINUDA straw tube detector

    NASA Astrophysics Data System (ADS)

    Zia, A.; Benussi, L.; Bertani, M.; Bianco, S.; Fabbri, F. L.; Gianotti, P.; Giardoni, M.; Lucherini, V.; Mecozzi, A.; Pace, E.; Passamonti, L.; Qaiser, N.; Russo, V.; Tomassini, S.; Sarwar, S.; Serdyouk, V.

    2001-04-01

An array of 2424 2.6-m-long, 15-mm-diameter Mylar straw tubes, arranged in two axial and four stereo layers, has been assembled at the National Laboratories of Frascati of INFN for the FINUDA experiment. The array covers a cylindrical tracking surface of 18 m² and provides coordinate measurements in the drift direction and along the wire with resolutions of the order of 100 and 300 μm, respectively. The array has finished the commissioning phase and tests with cosmic rays are underway. The status of the straw tube array and very preliminary results from the cosmic-ray tests are summarized in this work.

  14. STEREO's View

    NASA Image and Video Library

    2017-12-08

STEREO witnessed the March 5, 2013, CME from the side of the sun – Earth is far to the left of this picture. While the SOHO images show a halo CME, STEREO shows the CME clearly moving away from Earth. Credit: NASA/STEREO --- CME WEEK: What To See in CME Images Two main types of explosions occur on the sun: solar flares and coronal mass ejections. Unlike the energy and x-rays produced in a solar flare – which can reach Earth at the speed of light in eight minutes – coronal mass ejections are giant, expanding clouds of solar material that take one to three days to reach Earth. Once at Earth, these ejections, also called CMEs, can impact satellites in space or interfere with radio communications. During CME WEEK from Sept. 22 to 26, 2014, we explore different aspects of these giant eruptions that surge out from the star we live with. When a coronal mass ejection blasts off the sun, scientists rely on instruments called coronagraphs to track their progress. Coronagraphs block out the bright light of the sun, so that the much fainter material in the solar atmosphere -- including CMEs -- can be seen in the surrounding space. CMEs appear in these images as expanding shells of material from the sun's atmosphere -- sometimes a core of colder, solar material (called a filament) from near the sun's surface moves in the center. But mapping out such three-dimensional components from a two-dimensional image isn't easy. Watch the slideshow to find out how scientists interpret what they see in CME pictures. The images in the slideshow are from the three sets of coronagraphs NASA currently has in space. One is on the joint European Space Agency and NASA Solar and Heliospheric Observatory, or SOHO. SOHO launched in 1995, and sits between Earth and the sun about a million miles away from Earth. The other two coronagraphs are on the two spacecraft of the NASA Solar Terrestrial Relations Observatory, or STEREO, mission, which launched in 2006. The two STEREO spacecraft are both currently viewing the far side of the sun. Together these instruments help scientists create a three-dimensional model of any CME as its journey unfolds through interplanetary space. Such information can show why a given characteristic of a CME close to the sun might lead to a given effect near Earth, or any other planet in the solar system.

  15. Self-motion impairs multiple-object tracking.

    PubMed

    Thomas, Laura E; Seiffert, Adriane E

    2010-10-01

Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement impairs the ability to keep track of other moving objects. Participants attempted to track multiple targets while either moving around the tracking area or remaining in a fixed location. Participants' tracking performance was impaired when they moved to a new location during tracking, even when they were passively moved and when they did not see a shift in viewpoint. Self-motion impaired multiple-object tracking in both an immersive virtual environment and a real-world analog, but did not interfere with a difficult non-spatial tracking task. These results suggest that people use a common mechanism to keep track both of the locations of moving objects around them and of their own location. Copyright 2010 Elsevier B.V. All rights reserved.

  16. Using remote data collection to identify bridges and culverts susceptible to blockage during flooding events : final report.

    DOT National Transportation Integrated Search

    2016-12-14

    The objectives of this project were to pilot test the use of an unmanned aerial vehicle (UAV) to gather stereo imagery of streambeds upstream of crossing structures, and develop a process of rapidly transmitting actionable information about potential...

  17. Evolution of the Varrier autostereoscopic VR display: 2001-2007

    NASA Astrophysics Data System (ADS)

    Peterka, Tom; Kooima, Robert L.; Girado, Javier I.; Ge, Jinghua; Sandin, Daniel J.; DeFanti, Thomas A.

    2007-02-01

    Autostereoscopy (AS) is an increasingly valuable virtual reality (VR) display technology; indeed, the IS&T / SPIE Electronic Imaging Conference has seen rapid growth in the number and scope of AS papers in recent years. The first Varrier paper appeared at SPIE in 2001, and much has changed since then. What began as a single-panel prototype has grown to a full scale VR autostereo display system, with a variety of form factors, features, and options. Varrier is a barrier strip AS display system that qualifies as a true VR display, offering a head-tracked ortho-stereo first person interactive VR experience without the need for glasses or other gear to be worn by the user. Since Varrier's inception, new algorithmic and systemic developments have produced performance and quality improvements. Visual acuity has increased by a factor of 1.4X with new fine-resolution barrier strip linescreens and computational algorithms that support variable sub-pixel resolutions. Performance has improved by a factor of 3X using a new GPU shader-based sub-pixel algorithm that accomplishes in one pass what previously required three passes. The Varrier modulation algorithm that began as a computationally expensive task is now no more costly than conventional stereoscopic rendering. Interactive rendering rates of 60 Hz are now possible in Varrier for complex scene geometry on the order of 100K vertices, and performance is GPU bound, hence it is expected to continue improving with graphics card enhancements. Head tracking is accomplished with a neural network camera-based tracking system developed at EVL for Varrier. Multiple cameras capture subjects at 120 Hz and the neural network recognizes known faces from a database and tracks them in 3D space. New faces are trained and added to the database in a matter of minutes, and accuracy is comparable to commercially available tracking systems. Varrier supports a variety of VR applications, including visualization of polygonal, ray traced, and volume rendered data. Both AS movie playback of pre-rendered stereo frames and interactive manipulation of 3D models are supported. Local as well as distributed computation is employed in various applications. Long-distance collaboration has been demonstrated with AS teleconferencing in Varrier. A variety of application domains such as art, medicine, and science have been exhibited, and Varrier exists in a variety of form factors from large tiled installations to smaller desktop forms to fit a variety of space and budget constraints. Newest developments include the use of a dynamic parallax barrier that affords features that were inconceivable with a static barrier.

  18. Article Screening System

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth R. (Inventor)

    2004-01-01

During the last ten years, patents directed to luggage scanning apparatus began to appear in the patent art. Absent from the variety of approaches in the art is stereoscopic imaging, which entails exposing two or more images of the same object, each taken from a slightly different perspective. If the perspectives are too different, that is, if there is too much separation of the X-ray exposures, the image will look flat. Yet with a slight separation, a stereo separation, interference occurs. Herein a system is provided for the production of stereo pairs. One perspective, a left or a right perspective angle, is first established. Next, the other perspective angle is computed. Using these left and right perspectives, the X-ray sources can then be spaced away from each other.

  19. Photogrammetry of a Hypersonic Inflatable Aerodynamic Decelerator

    NASA Technical Reports Server (NTRS)

    Kushner, Laura Kathryn; Littell, Justin D.; Cassell, Alan M.

    2013-01-01

In 2012, two large-scale models of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD) were tested in the National Full-Scale Aerodynamic Complex (NFAC) at NASA Ames Research Center. One of the objectives of this test was to measure model deflections under aerodynamic loading that approximated expected flight conditions. The measurements were acquired using stereo photogrammetry. Four pairs of stereo cameras were mounted inside the NFAC test section, each imaging a particular section of the HIAD. The views were then stitched together post-test to create a surface deformation profile. The data from the photogrammetry system will largely be used for comparisons to and refinement of Fluid Structure Interaction models. This paper describes how a commercial photogrammetry system was adapted to make the measurements and presents some preliminary results.

  20. A stereo-compound hybrid microscope for combined intracellular and optical recording of invertebrate neural network activity.

    PubMed

    Frost, William N; Wang, Jean; Brandon, Christopher J

    2007-05-15

    Optical recording studies of invertebrate neural networks with voltage-sensitive dyes seldom employ conventional intracellular electrodes. This may in part be due to the traditional reliance on compound microscopes for such work. While such microscopes have high light-gathering power, they do not provide depth of field, making working with sharp electrodes difficult. Here we describe a hybrid microscope design, with switchable compound and stereo objectives, that eases the use of conventional intracellular electrodes in optical recording experiments. We use it, in combination with a voltage-sensitive dye and photodiode array, to identify neurons participating in the swim motor program of the marine mollusk Tritonia. This microscope design should be applicable to optical recording studies in many preparations.

  1. Constraint-based stereo matching

    NASA Technical Reports Server (NTRS)

    Kuan, D. T.

    1987-01-01

    The major difficulty in stereo vision is the correspondence problem that requires matching features in two stereo images. Researchers describe a constraint-based stereo matching technique using local geometric constraints among edge segments to limit the search space and to resolve matching ambiguity. Edge segments are used as image features for stereo matching. Epipolar constraint and individual edge properties are used to determine possible initial matches between edge segments in a stereo image pair. Local edge geometric attributes such as continuity, junction structure, and edge neighborhood relations are used as constraints to guide the stereo matching process. The result is a locally consistent set of edge segment correspondences between stereo images. These locally consistent matches are used to generate higher-level hypotheses on extended edge segments and junctions to form more global contexts to achieve global consistency.
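
    The epipolar pre-filter described here can be sketched as follows, assuming a known fundamental matrix F and edge segments represented by their two endpoints; the pixel tolerance is a placeholder.

    import numpy as np

    def epipolar_distance(x_left, x_right, F):
        """Distance from a right-image point to the epipolar line of a
        left-image point (both given as 2D pixel coordinates)."""
        l = F @ np.append(x_left, 1.0)        # epipolar line a*x + b*y + c = 0
        return abs(np.append(x_right, 1.0) @ l) / np.hypot(l[0], l[1])

    def initial_matches(seg_left, segs_right, F, tol=1.5):
        """Keep right-image segments whose endpoints both lie within
        tol pixels of the epipolar lines of the left endpoints."""
        keep = []
        for seg in segs_right:
            d = max(epipolar_distance(seg_left[0], seg[0], F),
                    epipolar_distance(seg_left[1], seg[1], F))
            if d < tol:
                keep.append(seg)
        return keep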

  2. Track Everything: Limiting Prior Knowledge in Online Multi-Object Recognition.

    PubMed

    Wong, Sebastien C; Stamatescu, Victor; Gatt, Adam; Kearney, David; Lee, Ivan; McDonnell, Mark D

    2017-10-01

This paper addresses the problem of online tracking and classification of multiple objects in an image sequence. Our proposed solution is to first track all objects in the scene without relying on object-specific prior knowledge, which in other systems can take the form of hand-crafted features or user-based track initialization. We then classify the tracked objects with a fast-learning image classifier that is based on a shallow convolutional neural network architecture, and demonstrate that object recognition improves when this is combined with object state information from the tracking algorithm. We argue that by transferring the use of prior knowledge from the detection and tracking stages to the classification stage, we can design a robust, general-purpose object recognition system with the ability to detect and track a variety of object types. We describe our biologically inspired implementation, which adaptively learns the shape and motion of tracked objects, and apply it to the Neovision2 Tower benchmark data set, which contains multiple object types. An experimental evaluation demonstrates that our approach is competitive with state-of-the-art video object recognition systems that do make use of object-specific prior knowledge in detection and tracking, while providing additional practical advantages by virtue of its generality.

  3. Review On Applications Of Neural Network To Computer Vision

    NASA Astrophysics Data System (ADS)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, multilayer back-propagation perceptrons, self-stabilized adaptive resonance networks, the hierarchically structured neocognitron, high-order correlators, networks with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers; some have been implemented with special hardware. Some systems use image features, such as edges and profiles, as the input data form, while other systems use raw data as input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of the methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of human vision models and neural network models are analyzed.

  4. Vision-based localization of the center of mass of large space debris via statistical shape analysis

    NASA Astrophysics Data System (ADS)

    Biondi, G.; Mauro, S.; Pastorelli, S.

    2017-08-01

The current overpopulation of artificial objects orbiting the Earth has increased the interest of the space agencies in planning missions to de-orbit the largest inoperative satellites. Since this kind of operation involves the capture of the debris, accurate knowledge of the position of its center of mass is a fundamental safety requirement. As ground observations are not sufficient to reach the required accuracy level, this information should be acquired in situ just before any contact between the chaser and the target. Some estimation methods in the literature rely on the usage of stereo cameras for tracking several features of the target surface. The actual positions of these features are estimated together with the location of the center of mass by state observers. The principal drawback of these methods is related to possible sudden disappearances of one or more features from the field of view of the cameras. An alternative method based on 3D kinematic registration is presented in this paper. The method, which does not suffer from the mentioned drawback, applies a preliminary reduction of the feature-detection inaccuracies through statistical shape analysis.
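
    One standard way to realise a 3D kinematic registration step is the Kabsch/Umeyama solution for the rigid transform between two point sets; the sketch below is generic, not the authors' estimator.

    import numpy as np

    def rigid_register(P, Q):
        """Least-squares rotation R and translation t mapping point set
        P onto Q (both (N, 3)), via SVD of the cross-covariance."""
        mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
        H = (P - mu_p).T @ (Q - mu_q)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                 # proper rotation, det(R) = +1
        t = mu_q - R @ mu_p
        return R, t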

  5. The Role of Visual Working Memory in Attentive Tracking of Unique Objects

    ERIC Educational Resources Information Center

    Makovski, Tal; Jiang, Yuhong V.

    2009-01-01

When tracking moving objects in space, humans usually attend to the objects' spatial locations and update this information over time. To what extent do surface features assist attentive tracking? In this study, we asked participants to track identical or uniquely colored objects. Tracking was enhanced when objects were unique in color. The benefit…

  6. Visual attention is required for multiple object tracking.

    PubMed

    Tran, Annie; Hoffman, James E

    2016-12-01

    In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. 3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events

    NASA Technical Reports Server (NTRS)

    Brown, Richard; Navard, Andrew; Spruce, Joseph

    2010-01-01

An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch in reference to the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually nonreferenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can provide strong visual indicators that immediately portray depth in damaged areas, or movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show. Multiple sequential image pairs can be used to allow forensic review of temporal phenomena between pairs. The observer, wearing linear polarized glasses, is able to review image pairs in passive 3D stereo.

  8. Super-resolution imaging applied to moving object tracking

    NASA Astrophysics Data System (ADS)

    Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi

    2017-10-01

Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. High visual quality and precise localization of the tracked target are desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise; the reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step applies super-resolution imaging to the frame sequence; this is done by cropping several frames or all of the frames. The second step tracks the result of the super-resolution images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution images; in this research, a single-frame super-resolution technique is proposed for the tracking approach because it has the advantage of fast computation. The method used for tracking is CamShift, whose advantage is a simple calculation based on an HSV color histogram that remains robust when the color of the object varies. The computational complexity and large memory requirements for the implementation of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, with shape changes of the object, and in good lighting conditions.
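
    Because the tracker named is CamShift, the second step maps directly onto OpenCV's implementation; in this sketch the video path, the initial window, and the point where a super-resolution step would be inserted are placeholders.

    import cv2

    cap = cv2.VideoCapture("input.avi")          # hypothetical input video
    ok, frame = cap.read()
    x, y, w, h = 200, 150, 40, 40                # assumed initial window
    hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])  # hue histogram
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # A super-resolution step would upscale `frame` here before
        # back-projection, as in the two-step method above.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, (x, y, w, h) = cv2.CamShift(back, (x, y, w, h), term)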

  9. Apollo 11 stereo view showing lump of surface powder with glassy material

    NASA Image and Video Library

    1969-07-20

    AS11-45-6704 (20 July 1969) --- An Apollo stereo view showing a close-up of a small lump of lunar surface powder about a half inch across, with a splash of a glassy material over it. It seems that a drop of molten material fell on it, splashed and froze. The exposure was made by the Apollo 11 35mm stereo close-up camera. The camera was specially developed to get the highest possible resolution of a small area. A three-inch square area is photographed with a flash illumination and at a fixed distance. The camera is mounted on a walking stick, and the astronauts use it by holding it up against the object to be photographed and pulling the trigger. The pictures are in color and give a stereo view, enabling the fine detail to be seen very clearly. The project is under the direction of Professor T. Gold of Cornell University and Dr. F. Pearce of NASA. The camera was designed and built by Eastman Kodak. Professor E. Purcell of Harvard University and Dr. E. Land of the Polaroid Corporation have contributed to the project. The pictures brought back from the moon by the Apollo 11 crew are of excellent quality and allow fine detail of the undisturbed lunar surface to be seen. Scientists hope to be able to deduce from them some of the processes that have taken place that have shaped and modified the surface.

  10. Stanford automatic photogrammetry research

    NASA Technical Reports Server (NTRS)

    Quam, L. H.; Hannah, M. J.

    1974-01-01

    A feasibility study on the problem of computer automated aerial/orbital photogrammetry is documented. The techniques investigated were based on correlation matching of small areas in digitized pairs of stereo images taken from high altitude or planetary orbit, with the objective of deriving a 3-dimensional model for the surface of a planet.

  11. Modeling vegetation heights from high resolution stereo aerial photography: an application for broad-scale rangeland monitoring

    USDA-ARS?s Scientific Manuscript database

    Vertical vegetation structure in rangeland ecosystems can be a valuable indicator for monitoring rangeland health or progress toward management objectives because of its importance for assessing riparian areas, post-fire recovery, wind erosion, and wildlife habitat. Federal land management agencies ...

  12. Stereo Orthogonal Axonometric Perspective for the Teaching of Descriptive Geometry

    ERIC Educational Resources Information Center

    Méxas, José Geraldo Franco; Bastos Guedes, Karla; da Silva Tavares, Ronaldo

    2014-01-01

The representation of figures in Mongean projection (the double orthographic projection system used in the study of Descriptive Geometry), especially when placed in a particular situation in relation to the projection planes, possesses the quality that, through them, the actual dimensions of represented spatial objects can be found directly…

  13. Automated Ground-based Time-lapse Camera Monitoring of West Greenland ice sheet outlet Glaciers: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.

    2008-12-01

Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science community for decades, and time-series analysis of sensor data has provided important information on the variability of glacier flow by detecting speed and thickness changes, tracking features, and acquiring model input. Thanks to advances in commercial digital camera technology and increased solid-state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlets and collected data at one-hour intervals, continuous for more than one year at some (but not all) sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono- and stereo-photogrammetry can provide the theoretical and practical fundamentals for data processing, along with digital image processing techniques. Time-lapse images over these periods in west Greenland reveal various phenomena. Problems include rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, foxes chewing instrument cables, and ravens pecking the plastic window. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that non-metric digital cameras contain large distortions that must be compensated for precise photogrammetric use. Further, a massive number of images needs to be processed in a way that is sufficiently computationally efficient. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability. We experiment with mono and stereo photogrammetric techniques with the aid of automatic correlation matching to efficiently handle the enormous data volumes.

  14. Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission

    NASA Technical Reports Server (NTRS)

    Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.

    2004-01-01

In January, 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a downlooking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.

  15. MAGNETIC FLUX TRANSPORT AND THE LONG-TERM EVOLUTION OF SOLAR ACTIVE REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ugarte-Urra, Ignacio; Upton, Lisa; Warren, Harry P.

    2015-12-20

With multiple vantage points around the Sun, Solar Terrestrial Relations Observatory (STEREO) and Solar Dynamics Observatory imaging observations provide a unique opportunity to view the solar surface continuously. We use He ii 304 Å data from these observatories to isolate and track ten active regions and study their long-term evolution. We find that active regions typically follow a standard pattern of emergence over several days followed by a slower decay that is proportional in time to the peak intensity in the region. Since STEREO does not make direct observations of the magnetic field, we employ a flux-luminosity relationship to infer the total unsigned magnetic flux evolution. To investigate this magnetic flux decay over several rotations we use a surface flux transport model, the Advective Flux Transport model, that simulates convective flows using a time-varying velocity field and find that the model provides realistic predictions when information about the active region's magnetic field strength and distribution at peak flux is available. Finally, we illustrate how 304 Å images can be used as a proxy for magnetic flux measurements when magnetic field data is not accessible.

  16. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide a reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the augmented reality marker is obtained by a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.

  17. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
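
    The per-pair step described here is available off the shelf: OpenCV's findEssentialMat implements the 5-point method inside RANSAC, and recoverPose resolves the decomposition ambiguity by cheirality. A minimal sketch, assuming matched pixel coordinates and a known intrinsic matrix K; note the recovered translation is only defined up to scale.

    import cv2
    import numpy as np

    def relative_pose(pts1, pts2, K):
        """Relative rotation R and (unit-scale) translation t between
        two intrinsically calibrated cameras from point matches."""
        E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                          method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t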

  18. Generalized parallel-perspective stereo mosaics from airborne video.

    PubMed

    Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M

    2004-02-01

    In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.
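
    The contrast with perspective stereo follows from the standard error model (a textbook derivation, not taken from the paper). With focal length f, baseline B and disparity d,

    Z = \frac{fB}{d}
    \qquad\Rightarrow\qquad
    \sigma_Z \approx \left|\frac{\partial Z}{\partial d}\right|\sigma_d
            = \frac{Z^{2}}{fB}\,\sigma_d ,

    so a fixed-baseline perspective rig degrades quadratically with depth, whereas an adaptive baseline that grows in proportion to Z cancels one factor of Z and leaves the linear dependence reported above.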

  19. Adaptive object tracking via both positive and negative models matching

    NASA Astrophysics Data System (ADS)

    Li, Shaomei; Gao, Chao; Wang, Yawen

    2015-03-01

To address the tracking drift that often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, the object is tracked frame by frame via particle filtering. Thirdly, tracking reliability is validated by matching against both positive and negative models. Finally, when drift occurs, the object is relocated based on SIFT feature matching and voting, and the object appearance model is updated at the same time. The algorithm can not only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences.

  20. How Many Objects are You Worth? Quantification of the Self-Motion Load on Multiple Object Tracking

    PubMed Central

    Thomas, Laura E.; Seiffert, Adriane E.

    2011-01-01

    Perhaps walking and chewing gum is effortless, but walking and tracking moving objects is not. Multiple object tracking is impaired by walking from one location to another, suggesting that updating location of the self puts demands on object tracking processes. Here, we quantified the cost of self-motion in terms of the tracking load. Participants in a virtual environment tracked a variable number of targets (1–5) among distractors while either staying in one place or moving along a path that was similar to the objects’ motion. At the end of each trial, participants decided whether a probed dot was a target or distractor. As in our previous work, self-motion significantly impaired performance in tracking multiple targets. Quantifying tracking capacity for each individual under move versus stay conditions further revealed that self-motion during tracking produced a cost to capacity of about 0.8 (±0.2) objects. Tracking your own motion is worth about one object, suggesting that updating the location of the self is similar, but perhaps slightly easier, than updating locations of objects. PMID:21991259

  1. Relationship Between the Expansion Speed and Radial Speed of CMEs Confirmed Using Quadrature Observations from SOHO and STEREO

    NASA Technical Reports Server (NTRS)

    Gopalswamy, Nat; Makela, Pertti; Yashiro, Seiji

    2011-01-01

It is difficult to measure the true speed of Earth-directed CMEs from a coronagraph along the Sun-Earth line because of the occulting disk. However, the expansion speed (the speed with which the CME appears to spread in the sky plane) can be measured by such a coronagraph. In order to convert the expansion speed to the radial speed (which is important for space weather applications) one can use an empirical relationship between the two that assumes an average width for all CMEs. If we have the width information from quadrature observations, we can confirm the relationship between expansion and radial speeds derived by Gopalswamy et al. (2009, CEAB, 33, 115). The STEREO spacecraft were in quadrature with SOHO (STEREO-A ahead of Earth by 87 degrees and STEREO-B 94 degrees behind Earth) on 2011 February 15, when a fast Earth-directed CME occurred. The CME was observed as a halo by the Large-Angle and Spectrometric Coronagraph (LASCO) on board SOHO. The sky-plane speed was measured by SOHO/LASCO as the expansion speed, while the radial speed was measured by STEREO-A and STEREO-B. In addition, STEREO-A and STEREO-B images measured the width of the CME, which is unknown from the Earth view. From the SOHO and STEREO measurements, we confirm the relationship between the expansion speed (Vexp) and radial speed (Vrad) derived previously from geometrical considerations (Gopalswamy et al. 2009): Vrad = 1/2 (1 + cot w) Vexp, where w is the half width of the CME. From STEREO-B images of the CME, we found that the CME had a full width of 75 degrees, so w = 37.5 degrees. This gives the relation as Vrad = 1.15 Vexp. From LASCO observations, we measured Vexp = 897 km/s, so we get the radial speed as 1033 km/s. Direct measurement of the radial speed from STEREO gives 945 km/s (STEREO-A) and 1057 km/s (STEREO-B). These numbers differ only by 8.5% and 2.3% (for STEREO-A and STEREO-B, respectively) from the computed value.
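
    The quoted conversion is easy to verify numerically (values copied from the abstract):

    import math

    w = math.radians(37.5)        # half width from the 75-degree full width
    v_exp = 897.0                 # LASCO expansion speed, km/s
    factor = 0.5 * (1.0 + 1.0 / math.tan(w))
    print(round(factor, 2))       # -> 1.15
    print(round(factor * v_exp))  # -> 1033 km/s radial speed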

  2. Wide Swath Stereo Mapping from Gaofen-1 Wide-Field-View (WFV) Images Using Calibration

    PubMed Central

    Chen, Shoubin; Liu, Jingbin; Huang, Wenchao

    2018-01-01

The development of Earth observation systems has changed the nature of survey and mapping products, as well as the methods for updating maps. Among optical satellite mapping methods, the multiline array stereo and agile stereo modes are the most common methods for acquiring stereo images. However, differences in temporal resolution and spatial coverage limit their application. To address this issue, our study takes advantage of the wide spatial coverage and high revisit frequency of wide swath images and aims at verifying the feasibility of stereo mapping in the wide swath stereo mode and at reaching a reliable stereo accuracy level through calibration. In contrast with classic stereo modes, the wide swath stereo mode is characterized by both wide spatial coverage and high temporal resolution and is capable of obtaining a wide range of stereo images over a short period. In this study, Gaofen-1 (GF-1) wide-field-view (WFV) images, with total imaging widths of 800 km, multispectral resolutions of 16 m and revisit periods of four days, are used for wide swath stereo mapping. To acquire a high-accuracy digital surface model (DSM), the nonlinear system distortion in the GF-1 WFV images is detected and compensated for in advance. With proper calibration, the elevation accuracy of the wide swath stereo mode of the GF-1 WFV images can be improved from 103 m to 30 m for a DSM, meeting the demands of 1:250,000 scale mapping and rapid topographic map updates and demonstrating improved efficacy for satellite imaging.

  3. Stereo–SCIDAR System for Improvement of Adaptive Optics Space Debris-tracking Activities

    NASA Astrophysics Data System (ADS)

    Thorn, E.; Korkiakoski, V.; Grosse, D.; Bennet, F.; Rigaut, F.; d'Orgeville, C.; Munro, J.; Smith, C.

The Research School of Astronomy and Astrophysics (RSAA), in conjunction with the Space Environment Research Center (SERC), has developed a single-detector stereo-SCIDAR (SCIntillation Detection And Ranging) system to characterise atmospheric turbulence. We present the mechanical and optical design, as well as some preliminary results. SERC has a vested interest in space situational awareness (SSA), with a focus on space debris. RSAA is developing adaptive optics (AO) systems to aid in the detection of, ranging to, and orbit propagation of such debris. These AO systems will be directly improved by measurements from the stereo-SCIDAR system developed here. SCIDAR is a triangulation technique that utilises a detector to take short exposures of the scintillation pupil patterns of a double star. There is an altitude at which light propagating from these stars passes through the same "patch" of turbulence in Earth's atmosphere: this patch induces wavefront aberrations that are projected onto different regions of the scintillation pupil patterns. An auto-correlation function is employed to extract the height at which the turbulence was introduced into the wavefronts. Unlike stereo-SCIDAR systems developed by other organisations - which utilise a dedicated detector for each of the pupil images - our system will use a pupil-separating prism and a single detector to image both pupils. Using one detector reduces cost as well as design and optical complexity. The system has been installed (in generalised SCIDAR form, with a stereo-SCIDAR upgrade scheduled for next year), tested and operated on the EOS Space Systems 1.8 m debris-ranging telescope at Mount Stromlo, Canberra. Specifically, it was designed to observe double stars separated by 5 to 25 arcseconds, with a greater magnitude-difference tolerance than conventional SCIDAR (the conventional tolerance being roughly 2.5 magnitudes). We anticipate taking measurements of turbulent layers up to 15 km in altitude with a resolution of approximately 1 km. Our system will also be sensitive to ground-layer atmospheric turbulence. Here we present details of the optical and mechanical design in addition to preliminary results.

  4. STEREO Education and Public Outreach Efforts

    NASA Technical Reports Server (NTRS)

    Kucera, Therese

    2007-01-01

    STEREO has had a big year this year with its launch and the start of data collection. STEREO has mostly focused on informal educational venues, most notably with STEREO 3D images made available to museums through the NASA Museum Alliance. Other activities have involved making STEREO imagery available though the AMNH network and Viewspace, continued partnership with the Christa McAuliffe Planetarium, data sonification projects, preservice teacher training, and learning activity development.

  5. Probabilistic fusion of stereo with color and contrast for bilayer segmentation.

    PubMed

    Kolmogorov, Vladimir; Criminisi, Antonio; Blake, Andrew; Cross, Geoffrey; Rother, Carsten

    2006-09-01

This paper describes models and algorithms for the real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from color/contrast or from stereo alone is known to be error-prone. Here, color, contrast, and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, Layered Dynamic Programming (LDP), solves stereo in an extended six-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive color model that is learned on-the-fly and stereo disparities are obtained by dynamic programming. The second algorithm, Layered Graph Cut (LGC), does not directly solve stereo. Instead, the stereo match likelihood is marginalized over disparities to evaluate foreground and background hypotheses and then fused with a contrast-sensitive color model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than either stereo or color/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output.

  6. Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm.

    PubMed

    Tombu, Michael; Seiffert, Adriane E

    2011-04-01

    People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target-distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking--one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone.
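
    The planet-and-moon geometry is easy to reconstruct: each object circles a local centre ("planet") that itself circles the centre of the screen. A small sketch of that trajectory (our reconstruction of the stimulus geometry, not the authors' code):

      import numpy as np

      def planet_moon_position(t, screen_centre, orbit_radius, pair_radius,
                               global_speed, local_speed, local_phase=0.0):
          """Object position at time t: a local centre rotates about the screen
          centre while the object rotates about that local centre. Speeds are
          in radians per second; all parameter names are illustrative."""
          cx, cy = screen_centre
          lx = cx + orbit_radius * np.cos(global_speed * t)
          ly = cy + orbit_radius * np.sin(global_speed * t)
          x = lx + pair_radius * np.cos(local_speed * t + local_phase)
          y = ly + pair_radius * np.sin(local_speed * t + local_phase)
          return x, y

    The paired distractor is the same call with local_phase shifted by np.pi, so the target-distractor separation stays fixed at 2 * pair_radius while object speed is set independently through the rotation rates, which is how the paradigm decouples speed from proximity.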

  7. A stereo-compound hybrid microscope for combined intracellular and optical recording of invertebrate neural network activity

    PubMed Central

    Frost, William N.; Wang, Jean; Brandon, Christopher J.

    2007-01-01

    Optical recording studies of invertebrate neural networks with voltage-sensitive dyes seldom employ conventional intracellular electrodes. This may in part be due to the traditional reliance on compound microscopes for such work. While such microscopes have high light-gathering power, they do not provide depth of field, making working with sharp electrodes difficult. Here we describe a hybrid microscope design, with switchable compound and stereo objectives, that eases the use of conventional intracellular electrodes in optical recording experiments. We use it, in combination with a voltage-sensitive dye and photodiode array, to identify neurons participating in the swim motor program of the marine mollusk Tritonia. This microscope design should be applicable to optical recording studies in many preparations. PMID:17306887

  8. 3D Reconstruction of a Rotating Erupting Prominence

    NASA Technical Reports Server (NTRS)

    Thompson, W. T.; Kliem, B.; Torok, T.

    2011-01-01

    A bright prominence associated with a coronal mass ejection (CME) was seen erupting from the Sun on 9 April 2008. This prominence was tracked by both the Solar Terrestrial Relations Observatory (STEREO) EUVI and COR1 telescopes, and was seen to rotate about the line of sight as it erupted; therefore, the event has been nicknamed the "Cartwheel CME." The threads of the prominence in the core of the CME quite clearly indicate the structure of a weakly to moderately twisted flux rope throughout the field of view, up to heliocentric heights of 4 solar radii. Although the STEREO separation was 48 deg, it was possible to match some sharp features in the later part of the eruption as seen in the 304 Angstrom line in EUVI and in the H alpha-sensitive bandpass of COR1 by both STEREO Ahead and Behind. These features could then be traced out in three-dimensional space, and reprojected into a view in which the eruption is directed towards the observer. The reconstructed view shows that the alignment of the prominence to the vertical axis rotates as it rises up to a leading-edge height of approximately 2.5 solar radii, and then remains approximately constant. The alignment at 2.5 solar radii differs by about 115 deg. from the original filament orientation inferred from H alpha and EUV data, and the height profile of the rotation, obtained here for the first time, shows that two-thirds of the total rotation is reached within approximately 0.5 solar radii above the photosphere. These features are well reproduced by numerical simulations of an unstable moderately twisted flux rope embedded in external flux with a relatively strong shear field component.

  9. The Beagle 2 Stereo Camera System: Scientific Objectives and Design Characteristics

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Josset, J.; Paar, G.; Sims, M.

    2003-04-01

    The Stereo Camera System (SCS) will provide wide-angle (48 degree) multi-spectral stereo imaging of the Beagle 2 landing site in Isidis Planitia with an angular resolution of 0.75 milliradians. Based on the SpaceX Modular Micro-Imager, the SCS is composed of twin cameras (each with a 1024 by 1024 pixel frame-transfer CCD) and twin filter wheel units (with a combined total of 24 filters). The primary mission objective is to construct a digital elevation model of the area within reach of the lander's robot arm. The SCS specifications and the following baseline studies are described: panoramic RGB colour imaging of the landing site, and panoramic multi-spectral imaging at 12 distinct wavelengths to study the mineralogy of the landing site; solar observations to measure water vapour absorption and the atmospheric dust optical density. Also envisaged are multi-spectral observations of Phobos & Deimos (observations of the moons relative to background stars will be used to determine the lander's location and orientation relative to the Martian surface), monitoring of the landing site to detect temporal changes, observation of the actions and effects of the other PAW experiments (including rock texture studies with a close-up lens) and collaborative observations with the Mars Express orbiter instrument teams. Due to be launched in May of this year, the total system mass is 360 g, the required volume envelope is 747 cm^3 and the average power consumption is 1.8 W. A 10 Mbit/s RS422 bus connects each camera to the lander common electronics.

  10. Upside-down: Perceived space affects object-based attention.

    PubMed

    Papenmeier, Frank; Meyerhoff, Hauke S; Brockhoff, Alisa; Jahn, Georg; Huff, Markus

    2017-07-01

    Object-based attention influences the subjective metrics of surrounding space. However, does perceived space influence object-based attention, as well? We used an attentive tracking task that required sustained object-based attention while objects moved within a tracking space. We manipulated perceived space through the availability of depth cues and varied the orientation of the tracking space. When rich depth cues were available (appearance of a voluminous tracking space), the upside-down orientation of the tracking space (objects appeared to move high on a ceiling) caused a pronounced impairment of tracking performance compared with an upright orientation of the tracking space (objects appeared to move on a floor plane). In contrast, this was not the case when reduced depth cues were available (appearance of a flat tracking space). With a preregistered second experiment, we showed that those effects were driven by scene-based depth cues and not object-based depth cues. We conclude that perceived space affects object-based attention and that object-based attention and perceived space are closely interlinked. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. Single and multiple object tracking using log-euclidean Riemannian subspace and block-division appearance model.

    PubMed

    Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei

    2012-12-01

    Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
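
    The mapping into the vector space referred to above can be written compactly: the matrix logarithm takes a symmetric positive definite covariance matrix to a symmetric matrix, which is then vectorized. A minimal sketch of that embedding (the incremental subspace learning and the particle filtering are not shown; names are ours):

      import numpy as np
      from scipy.linalg import logm

      def log_euclidean_vector(cov):
          """Map an SPD covariance matrix into the log-Euclidean vector space:
          take the matrix logarithm, then vectorize the upper triangle with
          off-diagonal terms scaled by sqrt(2), so the Euclidean norm of the
          vector equals the Frobenius norm of log(cov)."""
          L = np.real(logm(cov))
          d = cov.shape[0]
          iu = np.triu_indices(d, k=1)
          return np.concatenate([np.diag(L), np.sqrt(2.0) * L[iu]])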

  12. A search for Ganymede stereo images and 3D mapping opportunities

    NASA Astrophysics Data System (ADS)

    Zubarev, A.; Nadezhdina, I.; Brusnikin, E.; Giese, B.; Oberst, J.

    2017-10-01

    We used 126 Voyager-1 and -2 as well as 87 Galileo images of Ganymede and searched for stereo images suitable for digital 3D stereo analysis. Specifically, we consider image resolutions, stereo angles, as well as matching illumination conditions of respective stereo pairs. Lists of regions and local areas with stereo coverage are compiled. We present anaglyphs and we selected areas, not previously discussed, for which we constructed Digital Elevation Models and associated visualizations. The terrain characteristics in the models are in agreement with our previous notion of Ganymede morphology, represented by families of lineaments and craters of various sizes and degradation stages. The identified areas of stereo coverage may serve as important reference targets for the Ganymede Laser Altimeter (GALA) experiment on the future JUICE (Jupiter Icy Moons Explorer) mission.

  13. Solar Eclipse Video Captured by STEREO-B

    NASA Technical Reports Server (NTRS)

    2007-01-01

    No human has ever witnessed a solar eclipse quite like the one captured on this video. The NASA STEREO-B spacecraft, managed by the Goddard Space Flight Center, was about a million miles from Earth on February 25, 2007, when it photographed the Moon passing in front of the Sun. The resulting movie looks like it came from an alien solar system. The fantastically colored star is our own Sun as STEREO sees it in four wavelengths of extreme ultraviolet light. The black disk is the Moon. When we observe a lunar transit from Earth, the Moon appears to be the same size as the Sun, a coincidence that produces intoxicatingly beautiful solar eclipses. The silhouette STEREO-B saw, on the other hand, covered only a fraction of the Sun. The Moon seems small because of the STEREO-B location. The spacecraft circles the Sun in an Earth-like orbit, but it lags behind Earth by one million miles. This means STEREO-B is 4.4 times farther from the Moon than we are, and so the Moon looks 4.4 times smaller. This version of the STEREO-B eclipse movie is a composite of data from the coronagraph and extreme ultraviolet imager of the spacecraft. STEREO-B has a sister ship named STEREO-A. Both are on a mission to study the Sun. While STEREO-B lags behind Earth, STEREO-A orbits one million miles ahead ('B' for behind, 'A' for ahead). The gap is deliberate, as it allows the two spacecraft to capture offset views of the Sun. Researchers can then combine the images to produce 3D stereo movies of solar storms. The two spacecraft were launched in Oct. 2006 and reached their stations on either side of Earth in January 2007.

  14. The Role of Fixation and Visual Attention in Object Recognition.

    DTIC Science & Technology

    1995-01-01

    computers", Technical Report, Aritificial Intelligence Lab, M.I. T., AI-Memo-915, June 1986. [29] D.P. Huttenlocher and S.Ullman, "Object Recognition Using...attention", Technical Report, Aritificial Intelligence Lab, M.I. T., AI-memo-770, Jan 1984. [35] E.Krotkov, K. Henriksen and R. Kories, "Stereo...MIT Artificial Intelligence Laboratory [ PCTBTBimON STATEMENT X \\ Afipioved tor puciic reieo*«* \\ »?*•;.., jDi*tiibutK» U»lisut»d* 19951004

  15. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  16. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    PubMed

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-04-20

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  17. Multi-Object Tracking with Correlation Filter for Autonomous Vehicle.

    PubMed

    Zhao, Dawei; Fu, Hao; Xiao, Liang; Wu, Tao; Dai, Bin

    2018-06-22

    Multi-object tracking is a crucial problem for autonomous vehicles. Most state-of-the-art approaches adopt the tracking-by-detection strategy, a two-step procedure consisting of a detection module and a tracking module. In this paper, we improve both steps. We improve the detection module by incorporating temporal information, which is beneficial for detecting small objects. For the tracking module, we propose a novel compressed deep Convolutional Neural Network (CNN) feature based Correlation Filter tracker. By carefully integrating these two modules, the proposed multi-object tracking approach is capable of re-identification (ReID) once a tracked object gets lost. Extensive experiments were performed on the KITTI and MOT2015 tracking benchmarks. Results indicate that our approach outperforms most state-of-the-art tracking approaches.
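
    The correlation-filter mechanics behind such trackers fit in a few lines. This is a single-channel, MOSSE-style sketch for illustration only; the paper's tracker operates on compressed deep CNN features rather than raw pixels, and all names are ours:

      import numpy as np

      def train_filter(patch, target_response, lam=1e-2):
          """Closed-form correlation filter in the Fourier domain:
          H* = (G . conj(F)) / (F . conj(F) + lambda), where G is the desired
          response (e.g. a Gaussian peaked on the target) and F the patch."""
          F = np.fft.fft2(patch)
          G = np.fft.fft2(target_response)
          return (G * np.conj(F)) / (F * np.conj(F) + lam)

      def detect(H_star, new_patch):
          """Correlate the learned filter with a new patch; the location of the
          response peak gives the object's translation."""
          response = np.real(np.fft.ifft2(np.fft.fft2(new_patch) * H_star))
          return np.unravel_index(np.argmax(response), response.shape)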

  18. In Brief: NASA's Phoenix spacecraft lands on Mars

    NASA Astrophysics Data System (ADS)

    Showstack, Randy; Kumar, Mohi

    2008-06-01

    After a 9.5-month, 679-million-kilometer flight from Florida, NASA's Phoenix spacecraft made a soft landing in Vastitas Borealis in Mars's northern polar region on 25 May. The lander, whose camera already has returned some spectacular images, is on a 3-month mission to examine the area, dig into the soil of this site, chosen for its likelihood of having frozen water near the surface, and analyze samples. In addition to a robotic arm and robotic arm camera, the lander's instruments include a surface stereo imager; thermal and evolved-gas analyzer; microscopy, electrochemistry, and conductivity analyzer; and a meteorological station that is tracking daily weather and seasonal changes.

  19. Implementation of Headtracking and 3D Stereo with Unity and VRPN for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Noyes, Matthew A.

    2013-01-01

    This paper explores low-cost hardware and software methods to provide depth cues traditionally absent in monocular displays. The use of a VRPN server in conjunction with a Microsoft Kinect and/or Nintendo Wiimote to provide head tracking information to a Unity application, and NVIDIA 3D Vision for retinal disparity support, is discussed. Methods are suggested to implement this technology with NASA's EDGE simulation graphics package, along with potential caveats. Finally, future applications of this technology to astronaut crew training, particularly when combined with an omnidirectional treadmill for virtual locomotion and NASA's ARGOS system for reduced gravity simulation, are discussed.

  20. STEREO Space Weather and the Space Weather Beacon

    NASA Technical Reports Server (NTRS)

    Biesecker, D. A.; Webb, D. F.; St. Cyr, O. C.

    2007-01-01

    The Solar Terrestrial Relations Observatory (STEREO) is first and foremost a solar and interplanetary research mission, with one of the natural applications being in the area of space weather. The obvious potential for space weather applications is so great that NOAA has worked to incorporate the real-time data into their forecast center as much as possible. A subset of the STEREO data will be continuously downlinked in a real-time broadcast mode, called the Space Weather Beacon. Within the research community there has been considerable interest in conducting space weather related research with STEREO. Some of this research is geared towards making an immediate impact while other work is still very much in the research domain. There are many areas where STEREO might contribute, and we cannot predict where all the successes will come from. Here we discuss how STEREO will contribute to space weather and many of the specific research projects proposed to address STEREO space weather issues. We also discuss some specific uses of the STEREO data in the NOAA Space Environment Center.

  1. Tracker Toolkit

    NASA Technical Reports Server (NTRS)

    Lewis, Steven J.; Palacios, David M.

    2013-01-01

    This software can track multiple moving objects within a video stream simultaneously, use visual features to aid in the tracking, and initiate tracks based on object detection in a subregion. A simple programmatic interface allows plugging into larger image chain modeling suites. It extracts unique visual features for aid in tracking and later analysis, and includes sub-functionality for extracting visual features about an object identified within an image frame. Tracker Toolkit utilizes a feature extraction algorithm to tag each object with metadata features about its size, shape, color, and movement. Its functionality is independent of the scale of objects within a scene. The only assumption made on the tracked objects is that they move. There are no constraints on size within the scene, shape, or type of movement. The Tracker Toolkit is also capable of following an arbitrary number of objects in the same scene, identifying and propagating the track of each object from frame to frame. Target objects may be specified for tracking beforehand, or may be dynamically discovered within a tripwire region. Initialization of the Tracker Toolkit algorithm includes two steps: Initializing the data structures for tracked target objects, including targets preselected for tracking; and initializing the tripwire region. If no tripwire region is desired, this step is skipped. The tripwire region is an area within the frames that is always checked for new objects, and all new objects discovered within the region will be tracked until lost (by leaving the frame, stopping, or blending in to the background).
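
    A minimal sketch of the tripwire behaviour described above, using greedy nearest-neighbour track extension for brevity; class and method names are illustrative, not the actual Tracker Toolkit interface:

      from dataclasses import dataclass, field

      @dataclass
      class Track:
          track_id: int
          positions: list = field(default_factory=list)

      class TripwireTracker:
          def __init__(self, tripwire_box, preselected=()):
              self.tripwire = tripwire_box  # (x0, y0, x1, y1) region to watch
              self.tracks = [Track(i, [p]) for i, p in enumerate(preselected)]
              self.next_id = len(self.tracks)

          def in_tripwire(self, x, y):
              x0, y0, x1, y1 = self.tripwire
              return x0 <= x <= x1 and y0 <= y <= y1

          def update(self, detections):
              """Extend each existing track with its nearest unclaimed detection,
              then start new tracks for detections inside the tripwire region."""
              unclaimed = list(detections)
              for track in self.tracks:
                  if not unclaimed:
                      break
                  lx, ly = track.positions[-1]
                  nearest = min(unclaimed,
                                key=lambda d: (d[0] - lx) ** 2 + (d[1] - ly) ** 2)
                  track.positions.append(nearest)
                  unclaimed.remove(nearest)
              for x, y in unclaimed:
                  if self.in_tripwire(x, y):
                      self.tracks.append(Track(self.next_id, [(x, y)]))
                      self.next_id += 1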

  2. Ergonomic approaches to designing educational materials for immersive multi-projection system

    NASA Astrophysics Data System (ADS)

    Shibata, Takashi; Lee, JaeLin; Inoue, Tetsuri

    2014-02-01

    Rapid advances in computer and display technologies have made it possible to present high quality virtual reality (VR) environment. To use such virtual environments effectively, research should be performed into how users perceive and react to virtual environment in view of particular human factors. We created a VR simulation of sea fish for science education, and we conducted an experiment to examine how observers perceive the size and depth of an object within their reach and evaluated their visual fatigue. We chose a multi-projection system for presenting the educational VR simulation, because this system can provide actual-size objects and produce stereo images located close to the observer. The results of the experiment show that estimation of size and depth was relatively accurate when subjects used physical actions to assess them. Presenting images within the observer's reach is suggested to be useful for education in VR environment. Evaluation of visual fatigue shows that the level of symptoms from viewing stereo images with a large disparity in VR environment was low in a short time.

  3. Human-like object tracking and gaze estimation with PKD android

    PubMed Central

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2018-01-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193

  4. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  5. A data set for evaluating the performance of multi-class multi-object video tracking

    NASA Astrophysics Data System (ADS)

    Chakraborty, Avishek; Stamatescu, Victor; Wong, Sebastien C.; Wigley, Grant; Kearney, David

    2017-05-01

    One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publicly available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow both the evaluation of tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publicly.

  6. Real-time object detection, tracking and occlusion reasoning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Divakaran, Ajay; Yu, Qian; Tamrakar, Amir

    A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.

  7. Self-Motion Impairs Multiple-Object Tracking

    ERIC Educational Resources Information Center

    Thomas, Laura E.; Seiffert, Adriane E.

    2010-01-01

    Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement…

  8. An Evaluation of the Effectiveness of Stereo Slides in Teaching Geomorphology.

    ERIC Educational Resources Information Center

    Giardino, John R.; Thornhill, Ashton G.

    1984-01-01

    Provides information about producing stereo slides and their use in the classroom. Describes an evaluation of the teaching effectiveness of stereo slides using two groups of 30 randomly selected students from introductory geomorphology. Results from a pretest/posttest measure show that stereo slides significantly improved understanding. (JM)

  9. Hearing symptoms in users of personal stereos

    PubMed Central

    da Luz, Tiara Santos; Borja, Ana Lúcia Vieira de Freitas

    2012-01-01

    Summary Introduction: Practical and portable, personal stereos have become almost indispensable everyday accessories. Studies show that portable music players can cause long-term hearing damage in people who listen to music at high volume for prolonged periods. Objective: To determine the prevalence of auditory symptoms in users of personal stereos and to characterize their usage habits. Method: Prospective observational cross-sectional study carried out in three educational institutions in the city of Salvador, BA, two from the public system and one private. 400 students of both sexes, aged between 14 and 30 years, who reported the habit of using personal stereos answered the questionnaire. Results: The most prevalent symptoms were hyperacusis (43.5%), aural fullness (30.5%) and tinnitus (27.5%), with tinnitus being most common in the youngest participants. Regarding daily habits: 62.3% reported frequent use, 57% listened at high intensities, and 34% listened for prolonged periods. An inverse relationship was found between exposure time and age (p = 0.000), and a direct relationship with the prevalence of tinnitus. Conclusion: Although the respondents admit to knowing the damage that exposure to high-intensity sound can cause to hearing, the daily habits of these young people show inappropriate use of portable stereos, characterized by long periods of exposure, high intensities, frequent use, and a preference for insert earphones. The high prevalence of symptoms after use suggests an elevated hearing risk for these young people. PMID:25991931

  10. Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system

    NASA Astrophysics Data System (ADS)

    Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng

    2009-02-01

    This paper describes a novel embedded system capable of estimating 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow H10x8.5) and (2) implementation on an embedded system. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs complex computational tasks, and the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors that rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images that view a common chess board calibration pattern for a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational-cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs. Surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings, and (3) stereo reconstruction results for several free-form objects.
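
    Two steps of this pipeline lend themselves to short sketches: interpolating camera parameters between calibrated zoom positions, and classical triangulation of matched points. The linear interpolation below is an assumption (the abstract does not state the interpolant used), and all names are ours:

      import numpy as np

      def focal_length_at_zoom(zoom, calibrated_zooms, calibrated_focals):
          """Interpolate the focal length for an arbitrary zoom setting from
          the pre-calibrated zoom positions (linear by assumption)."""
          return np.interp(zoom, calibrated_zooms, calibrated_focals)

      def triangulate(P1, P2, x1, x2):
          """Classical linear (DLT) triangulation of one matched image point
          pair (x1, x2) from two 3x4 camera projection matrices P1, P2."""
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, Vt = np.linalg.svd(A)      # solution is the null vector of A
          X = Vt[-1]
          return X[:3] / X[3]              # de-homogenize to a 3D point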

  11. Visual object recognition and tracking

    NASA Technical Reports Server (NTRS)

    Chang, Chu-Yin (Inventor); English, James D. (Inventor); Tardella, Neil M. (Inventor)

    2010-01-01

    This invention describes a method for identifying and tracking an object from two-dimensional data pictorially representing said object by an object-tracking system through processing said two-dimensional data using at least one tracker-identifier belonging to the object-tracking system for providing an output signal containing: a) a type of the object, and/or b) a position or an orientation of the object in three-dimensions, and/or c) an articulation or a shape change of said object in said three dimensions.

  12. Structure preserving clustering-object tracking via subgroup motion pattern segmentation

    NASA Astrophysics Data System (ADS)

    Fan, Zheyi; Zhu, Yixuan; Jiang, Jiao; Weng, Shuqin; Liu, Zhiwen

    2018-01-01

    Tracking clustering objects with similar appearances simultaneously in collective scenes is a challenging task in the field of collective motion analysis. Recent work on clustering-object tracking often suffers from poor tracking accuracy and terrible real-time performance due to the neglect or the misjudgment of the motion differences among objects. To address this problem, we propose a subgroup motion pattern segmentation framework based on a multilayer clustering structure and establish spatial constraints only among objects in the same subgroup, which entails having consistent motion direction and close spatial position. In addition, the subgroup segmentation results are updated dynamically because crowd motion patterns are changeable and affected by objects' destinations and scene structures. The spatial structure information combined with the appearance similarity information is used in the structure preserving object tracking framework to track objects. Extensive experiments conducted on several datasets containing multiple real-world crowd scenes validate the accuracy and the robustness of the presented algorithm for tracking objects in collective scenes.

  13. A Comparison Between Active and Passive Techniques for Underwater 3D Applications

    NASA Astrophysics Data System (ADS)

    Bianco, G.; Gallo, A.; Bruno, F.; Muzzupappa, M.

    2011-09-01

    In the field of 3D scanning, there is an increasing need for more accurate technologies to acquire 3D models of close range objects. Underwater exploration, for example, is very hard to perform due to the hostile conditions and the poor visibility of the environment. Some application fields, like underwater archaeology, require recovering three-dimensional data of objects that cannot be moved from their site or touched, in order to avoid possible damage. Photogrammetry is widely used for underwater 3D acquisition, because it requires just one or two digital still or video cameras to acquire a sequence of images taken from different viewpoints. Stereo systems composed of a pair of cameras are often employed on underwater robots (i.e. ROVs, Remotely Operated Vehicles) and used by scuba divers, in order to survey archaeological sites, reconstruct complex 3D structures in aquatic environments, estimate in situ the length of marine organisms, etc. The stereo 3D reconstruction is based on the triangulation of corresponding points on the two views. This requires finding common points in both images and matching them (the correspondence problem), which determines a plane that contains the 3D point on the object. Another 3D technique, frequently used for in-air acquisition, solves this point-matching problem by projecting structured lighting patterns to codify the acquired scene. The corresponding points are identified by associating a binary code in both images. In this work we have tested and compared two whole-field 3D imaging techniques (active and passive) based on stereo vision, in an underwater environment. A 3D system has been designed, composed of a digital projector and two still cameras mounted in waterproof housings, so that it can perform the various acquisitions without changing the configuration of the optical devices. The tests were conducted in a water tank under different turbidity conditions, on objects with different surface properties. In order to simulate a typical seafloor, we used various concentrations of clay. The performances of the two techniques are described and discussed. In particular, the point clouds obtained are compared in terms of number of acquired 3D points and geometric deviation.

  14. Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision

    NASA Astrophysics Data System (ADS)

    Gai, Qiyang

    2018-01-01

    Stereo matching is one of the key steps of 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper adopts a combination of the epipolar constraint and an ant colony algorithm. The epipolar line constraint is used to reduce the search range, and an ant colony algorithm is then used to optimize the stereo matching feature search function within the reduced range. Through the establishment of a stereo matching optimization process analysis model of the ant colony algorithm, a globally optimized solution of stereo matching in 3D reconstruction based on binocular vision is realized. The simulation results show that, by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo matching range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo matching process are improved.
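
    The epipolar constraint reduces the matching search from the whole image to a band around a line. A minimal sketch of that filtering step, assuming a known fundamental matrix F; the ant colony optimization that searches within the reduced range is not shown, and the names are ours:

      import numpy as np

      def epipolar_candidates(x_left, right_points, F, max_dist=1.5):
          """Keep only right-image points lying within max_dist pixels of the
          epipolar line l = F @ x of a left-image point, shrinking the search
          space that the subsequent optimization explores."""
          l = F @ np.array([x_left[0], x_left[1], 1.0])   # line (a, b, c)
          norm = np.hypot(l[0], l[1])
          keep = []
          for p in right_points:
              dist = abs(l[0] * p[0] + l[1] * p[1] + l[2]) / norm
              if dist <= max_dist:
                  keep.append(p)
          return keep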

  15. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    NASA Astrophysics Data System (ADS)

    Schenk, A.; Csatho, B. M.; Nagarajan, S.

    2010-12-01

    The polar regions play an important role in Earth’s climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features, such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, a GPS and inertial navigation system, and an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a rate of about one frame per second. While digital images and videos have been used for quite some time for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that only became available recently. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric, that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes, due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed. Geo-referencing the images is performed by the Applanix navigation system. Our new method enables a 3D reconstruction of the ice sheet surface with high accuracy and unprecedented detail, as demonstrated by examples from the Antarctic Peninsula acquired by the IceBridge mission. Repeat digital imaging also provides data for determining surface elevation changes and velocities, which are critical parameters for ice sheet models. Although these methods work well, there are known problems with satellite images and traditional area-based matching, especially over rapidly changing outlet glaciers. To take full advantage of the high-resolution, repeat stereo imaging, we have developed a new method. The processing starts with the generation of a DEM from geo-referenced stereo images of the first time epoch. The next step is concerned with extracting and matching interest points in object space. Since an interest point moves its spatial position between two time epochs, such points are only radiometrically conjugate but not geometrically. In fact, the geometric displacement of two identical points, together with the time difference, renders velocities. We computed the evolution of the velocity field and surface topography on the floating tongue of the Jakobshavn glacier from historical stereo aerial photographs to illustrate the approach.

  16. [Virtual reality in ophthalmological education].

    PubMed

    Wagner, C; Schill, M; Hennen, M; Männer, R; Jendritza, B; Knorz, M C; Bender, H J

    2001-04-01

    We present a computer-based medical training workstation for the simulation of intraocular eye surgery. The surgeon manipulates two original instruments inside a mechanical model of the eye. The instrument positions are tracked by CCD cameras and monitored by a PC which renders the scenery using a computer-graphic model of the eye and the instruments. The simulator incorporates a model of the operation table, a mechanical eye, three CCD cameras for the position tracking, the stereo display, and a computer. The three cameras are mounted under the operation table from where they can observe the interior of the mechanical eye. Using small markers the cameras recognize the instruments and the eye. Their position and orientation in space is determined by stereoscopic back projection. The simulation runs with more than 20 frames per second and provides a realistic impression of the surgery. It includes the cold light source which can be moved inside the eye and the shadow of the instruments on the retina which is important for navigational purposes.

  17. Multiple object tracking with non-unique data-to-object association via generalized hypothesis testing. [tracking several aircraft near each other or ships at sea

    NASA Technical Reports Server (NTRS)

    Porter, D. W.; Lefler, R. M.

    1979-01-01

    A generalized hypothesis testing approach is applied to the problem of tracking several objects when several different associations of data with objects are possible. Such problems occur, for instance, when attempting to distinctly track several aircraft maneuvering near each other or when tracking ships at sea. Conceptually, the problem is solved by first associating data with objects in a statistically reasonable fashion and then tracking with a bank of Kalman filters. The objects are assumed to have motion characterized by a fixed but unknown deterministic portion plus a random-process portion modeled by a shaping filter. For example, an object might be assumed to have a mean straight-line path about which it maneuvers in a random manner. Several hypothesized associations of data with objects are possible because of ambiguity as to which object the data comes from, false alarm/detection errors, and possible uncertainty in the number of objects being tracked. The statistical likelihood function is computed for each possible hypothesized association of data with objects. Then the generalized likelihood is computed by maximizing the likelihood over parameters that define the deterministic motion of the object.
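
    A toy version of the hypothesis-scoring idea: each hypothesized data-to-object association is scored by the Gaussian log-likelihood of the Kalman innovations it implies, and the best-scoring hypothesis is kept. This brute-force enumeration assumes at least as many measurements as objects and ignores false alarms for brevity; it is illustrative only, with names of our choosing:

      import numpy as np
      from itertools import permutations

      def best_association(predicted, covariances, measurements):
          """Score every assignment of measurements to objects by the Gaussian
          innovation log-likelihood supplied by each object's Kalman filter
          (predicted measurement and innovation covariance S)."""
          def log_lik(z, z_pred, S):
              v = z - z_pred                      # innovation
              return -0.5 * (v @ np.linalg.solve(S, v)
                             + np.log(np.linalg.det(2 * np.pi * S)))
          best, best_score = None, -np.inf
          for assign in permutations(range(len(measurements)), len(predicted)):
              score = sum(log_lik(measurements[j], predicted[i], covariances[i])
                          for i, j in enumerate(assign))
              if score > best_score:
                  best, best_score = assign, score
          return best, best_score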

  18. The Relationship Between the Expansion Speed and Radial Speed of CMEs Confirmed Using Quadrature Observations of the 2011 February 15 CME

    NASA Astrophysics Data System (ADS)

    Gopalswamy, N.; Makela, P.; Yashiro, S.; Davila, J. M.

    2012-08-01

    It is difficult to measure the true speed of Earth-directed CMEs from a coronagraph along the Sun-Earth line because of the occulting disk. However, the expansion speed (the speed with which the CME appears to spread in the sky plane) can be measured by such a coronagraph. In order to convert the expansion speed to radial speed (which is important for space weather applications) one can use an empirical relationship between the two that assumes an average width for all CMEs. If we have the width information from quadrature observations, we can confirm the relationship between expansion and radial speeds derived by Gopalswamy et al. (2009a). The STEREO spacecraft were in quadrature with SOHO (STEREO-A ahead of Earth by 87° and STEREO-B 94° behind Earth) on 2011 February 15, when a fast Earth-directed CME occurred. The CME was observed as a halo by the Large-Angle and Spectrometric Coronagraph (LASCO) on board SOHO. The sky-plane speed was measured by SOHO/LASCO as the expansion speed, while the radial speed was measured by STEREO-A and STEREO-B. In addition, STEREO-A and STEREO-B images measured the width of the CME, which is unknown from the Earth view. From the SOHO and STEREO measurements, we confirm the relationship between the expansion speed (Vexp) and radial speed (Vrad) derived previously from geometrical considerations (Gopalswamy et al. 2009a): Vrad = 1/2 (1 + cot w) Vexp, where w is the half width of the CME. From STEREO-B images of the CME, we found that the CME had a full width of 76°, so w = 38°. This gives the relation as Vrad = 1.14 Vexp. From LASCO observations, we measured Vexp = 897 km/s, so we get the radial speed as 1023 km/s. Direct measurement of radial speed yields 945 km/s (STEREO-A) and 1058 km/s (STEREO-B). These numbers differ by only 7.6% and 3.4% (for STEREO-A and STEREO-B, respectively) from the computed value.
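
    The quoted relation is easy to verify numerically with the values given above:

      import math

      def radial_speed(v_exp, half_width_deg):
          # Vrad = 0.5 * (1 + cot(w)) * Vexp  (Gopalswamy et al. 2009a, as cited)
          w = math.radians(half_width_deg)
          return 0.5 * (1.0 + 1.0 / math.tan(w)) * v_exp

      print(round(radial_speed(897.0, 38.0)))  # -> 1023 km/s, as quoted above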

  19. Tracking of multiple targets using online learning for reference model adaptation.

    PubMed

    Pernkopf, Franz

    2008-12-01

    Recently, much work has been done in multiple object tracking on the one hand and on reference model adaptation for single-object trackers on the other. In this paper, we do both: tracking of multiple objects (faces of people) in a meeting scenario and online learning to incrementally update the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Many methods, unlike our approach, assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our implemented tracker based on particle filters and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference model learning during tracking of indoor and outdoor scenes, which supports a more robust tracking. Therefore, we report the average of the standard deviation of the trajectories over numerous tracking runs depending on the learning rate.
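
    The particle-filter core referred to above fits in one predict-weight-resample cycle. A minimal sketch with a generic appearance-similarity function standing in for the online-adapted reference model (all names are illustrative):

      import numpy as np

      def particle_filter_step(particles, weights, measure_sim, motion_std=5.0,
                               rng=np.random.default_rng()):
          """One cycle of a particle filter tracker. `measure_sim(p)` scores how
          well the (possibly online-adapted) appearance model matches the image
          at particle state p, e.g. a face-color likelihood."""
          # Predict: diffuse particles with random-walk motion noise
          particles = particles + rng.normal(0.0, motion_std, particles.shape)
          # Weight: evaluate the reference model at each particle
          weights = np.array([measure_sim(p) for p in particles])
          weights /= weights.sum()
          # Resample: draw particles in proportion to their weights
          idx = rng.choice(len(particles), size=len(particles), p=weights)
          return particles[idx], np.full(len(particles), 1.0 / len(particles))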

  20. Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm

    PubMed Central

    Tombu, Michael

    2014-01-01

    People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target–distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking—one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone. PMID:21264704

  1. Computer-aided target tracking in motion analysis studies

    NASA Astrophysics Data System (ADS)

    Burdick, Dominic C.; Marcuse, M. L.; Mislan, J. D.

    1990-08-01

    Motion analysis studies require the precise tracking of reference objects in sequential scenes. In a typical situation, events of interest are captured at high frame rates using special cameras, and selected objects or targets are tracked on a frame by frame basis to provide necessary data for motion reconstruction. Tracking is usually done using manual methods which are slow and prone to error. A computer based image analysis system has been developed that performs tracking automatically. The objective of this work was to eliminate the bottleneck due to manual methods in high volume tracking applications such as the analysis of crash test films for the automotive industry. The system has proven to be successful in tracking standard fiducial targets and other objects in crash test scenes. Over 95 percent of target positions which could be located using manual methods can be tracked by the system, with a significant improvement in throughput over manual methods. Future work will focus on the tracking of clusters of targets and on tracking deformable objects such as airbags.

  2. Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging

    NASA Astrophysics Data System (ADS)

    Lin, Bingxiong; Sun, Yu; Qian, Xiaoning

    2013-03-01

    Robust feature point matching for images with large view angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that can improve feature matching performance by exploiting the inherent geometric properties of the organ surfaces. Recently, intensity-based template image tracking using a Thin Plate Spline (TPS) model has been extended to 3D surface tracking with stereo cameras. The intensity-based tracking is also used here for 3D reconstruction of internal organ surfaces. First, to overcome the small-displacement requirement of intensity-based tracking, feature point correspondences are used for proper initialization of the nonlinear optimization in the intensity-based method. Second, we generate simulated images from the reconstructed 3D surfaces under all potential view positions and orientations, and then extract feature points from these simulated images. The obtained feature points are then filtered and re-projected to the common reference image. The descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicone phantoms and in vivo images. The experimental results show that our method is much more robust with respect to view angle changes than other state-of-the-art methods.
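
    For reference, the TPS model underlying the tracking can be fitted in closed form: the radial kernel U(r) = r^2 log r^2 plus an affine part are solved from one linear system over the control points. A compact sketch of that standard construction, not the authors' implementation:

      import numpy as np

      def fit_tps(src, dst, reg=0.0):
          """Fit a 2D thin plate spline mapping src control points (n x 2) to
          dst (n x 2): solve [[K P], [P^T 0]] [w; a] = [dst; 0] for the kernel
          weights w and the affine coefficients a."""
          n = len(src)
          d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
          K = np.where(d2 > 0, d2 * np.log(np.maximum(d2, 1e-12)), 0.0)
          P = np.hstack([np.ones((n, 1)), src])
          A = np.zeros((n + 3, n + 3))
          A[:n, :n] = K + reg * np.eye(n)
          A[:n, n:] = P
          A[n:, :n] = P.T
          b = np.vstack([dst, np.zeros((3, 2))])
          sol = np.linalg.solve(A, b)
          return sol[:n], sol[n:]          # nonlinear weights, affine part

      def tps_apply(pts, src, weights, affine):
          """Evaluate the fitted spline at query points (m x 2)."""
          d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
          U = np.where(d2 > 0, d2 * np.log(np.maximum(d2, 1e-12)), 0.0)
          return np.hstack([np.ones((len(pts), 1)), pts]) @ affine + U @ weights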

  3. Multiple-object tracking while driving: the multiple-vehicle tracking task.

    PubMed

    Lochner, Martin J; Trick, Lana M

    2014-11-01

    Many contend that driving an automobile involves multiple-object tracking. At this point, no one has tested this idea, and it is unclear how multiple-object tracking would coordinate with the other activities involved in driving. To address some of the initial and most basic questions about multiple-object tracking while driving, we modified the tracking task for use in a driving simulator, creating the multiple-vehicle tracking task. In Experiment 1, we employed a dual-task methodology to determine whether there was interference between tracking and driving. Findings suggest that although it is possible to track multiple vehicles while driving, driving reduces tracking performance, and tracking compromises headway and lane position maintenance while driving. Modified change-detection paradigms were used to assess whether there were change localization advantages for tracked targets in multiple-vehicle tracking. When changes occurred during a blanking interval, drivers were more accurate (Experiment 2a) and ~250 ms faster (Experiment 2b) at locating the vehicle that changed when it was a target rather than a distractor in tracking. In a more realistic driving task where drivers had to brake in response to the sudden onset of brake lights in one of the lead vehicles, drivers were more accurate at localizing the vehicle that braked if it was a tracking target, although there was no advantage in terms of braking response time. Overall, results suggest that multiple-object tracking is possible while driving and perhaps even advantageous in some situations, but further research is required to determine whether multiple-object tracking is actually used in day-to-day driving.

  4. Learned filters for object detection in multi-object visual tracking

    NASA Astrophysics Data System (ADS)

    Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David

    2016-05-01

    We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, where tracking guides the detection process. The object detection provides a probabilistic input image calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.

  5. The what-where trade-off in multiple-identity tracking.

    PubMed

    Cohen, Michael A; Pinto, Yair; Howe, Piers D L; Horowitz, Todd S

    2011-07-01

    Observers are poor at reporting the identities of objects that they have successfully tracked (Pylyshyn, Visual Cognition, 11, 801-822, 2004; Scholl & Pylyshyn, Cognitive Psychology, 38, 259-290, 1999). Consequently, it has been claimed that objects are tracked in a manner that does not encode their identities (Pylyshyn, 2004). Here, we present evidence that disputes this claim. In a series of experiments, we show that attempting to track the identities of objects can decrease an observer's ability to track the objects' locations. This indicates that the mechanisms that track, respectively, the locations and identities of objects draw upon a common resource. Furthermore, we show that this common resource can be voluntarily distributed between the two mechanisms. This is clear evidence that the location- and identity-tracking mechanisms are not entirely dissociable.

  6. STEREO In-situ Data Analysis

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Davis, A. J.; Russell, C. T.

    2006-12-01

    STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long duration, detailed observations of 1 AU magnetic field structures, plasma and suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The PLASTIC instrument takes plasma ion composition measurements completing STEREO's comprehensive in-situ perspective. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICME) and solar wind structures to CMEs and coronal holes observed at the Sun. The uniqueness of the STEREO mission requires novel data analysis tools and techniques to take advantage of the mission's full scientific potential. An interactive browser with the ability to create publication-quality plots has been developed which integrates STEREO's in-situ data with data from a variety of other missions including WIND and ACE. Also, an application program interface (API) is provided allowing users to create custom software that ties directly into STEREO's data set. The API allows for more advanced forms of data mining than currently available through most web-based data services. A variety of data access techniques and the development of cross-spacecraft data analysis tools allow the larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.

  7. Solar Eclipse, STEREO Style

    NASA Technical Reports Server (NTRS)

    2007-01-01

    There was a transit of the Moon across the face of the Sun - but it could not be seen from Earth. This sight was visible only from the STEREO-B spacecraft in its orbit about the Sun, trailing behind the Earth. NASA's STEREO mission consists of two spacecraft launched in October 2006 to study solar storms. The transit started at 1:56 am EST and continued for 12 hours until 1:57 pm EST. STEREO-B is currently about 1 million miles from the Earth, 4.4 times farther away from the Moon than we are on Earth. As a result, the Moon appeared 4.4 times smaller than what we are used to. This is still, however, much larger than, say, the planet Venus appeared when it transited the Sun as seen from Earth in 2004. This alignment of STEREO-B and the Moon is not just due to luck: it was arranged with a small tweak to STEREO-B's orbit last December. The transit is quite useful to STEREO scientists for measuring the focus and the amount of scattered light in the STEREO imagers and for determining the pointing of the STEREO coronagraphs. The Sun as it appears in these images and in each frame of the movie is a composite of nearly simultaneous images in four different wavelengths of extreme ultraviolet light that were separated into color channels and then recombined with some level of transparency for each.

  8. Re-engineering the stereoscope for the 21st Century

    NASA Astrophysics Data System (ADS)

    Kollin, Joel S.; Hollander, Ari J.

    2007-02-01

    While discussing the current state of stereo head-mounted and 3D projection displays, the authors came to the realization that flat-panel LCD displays offer higher resolution than projection for stereo display at a low (and continually dropping) cost. More specifically, where head-mounted displays of moderate resolution and field-of-view cost tens of thousands of dollars, we can achieve an angular resolution approaching that of the human eye with a field-of-view (FOV) greater than 90° for less than $1500. For many immersive applications head tracking is unnecessary and sometimes even undesirable, and a low cost/high quality wide FOV display may significantly increase the application space for 3D display. After outlining the problem and potential of this solution we describe the initial construction of a simple Wheatstone stereoscope using 24" LCD displays and then show engineering improvements that increase the FOV and usability of the system. The applicability of a high-immersion, high-resolution display for art, entertainment, and simulation is presented along with a content production system that utilizes the capabilities of the system. We then discuss the potential use of the system for VR pain control therapy, treatment of post-traumatic stress disorders and other serious games applications.

  9. Intraoperative on-the-fly organ-mosaicking for laparoscopic surgery

    PubMed Central

    Reichard, Daniel; Bodenstedt, Sebastian; Suwelack, Stefan; Mayer, Benjamin; Preukschas, Anas; Wagner, Martin; Kenngott, Hannes; Müller-Stich, Beat; Dillmann, Rüdiger; Speidel, Stefanie

    2015-01-01

    The goal of computer-assisted surgery is to provide the surgeon with guidance during an intervention, e.g., using augmented reality. To display preoperative data, soft tissue deformations that occur during surgery have to be taken into consideration. Laparoscopic sensors, such as stereo endoscopes, can be used to create a three-dimensional reconstruction of stereo frames for registration. Due to the small field of view and the homogeneous structure of tissue, reconstructing just one frame, in general, will not provide enough detail to register preoperative data, since every frame only contains a part of an organ surface. A correct assignment to the preoperative model is possible only if the patch geometry can be unambiguously matched to a part of the preoperative surface. We propose and evaluate a system that combines multiple smaller reconstructions from different viewpoints to segment and reconstruct a large model of an organ. Using graphics processing unit-based methods, we achieved four frames per second. We evaluated the system with in silico, phantom, ex vivo, and in vivo (porcine) data, using different methods for estimating the camera pose (optical tracking, iterative closest point, and a combination; a simplified ICP sketch follows below). The results indicate that the proposed method is promising for on-the-fly organ reconstruction and registration. PMID:26693166
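
    Of the camera-pose options mentioned, iterative closest point is the most self-contained. A bare-bones point-to-point ICP in NumPy/SciPy, as a simplified stand-in for illustration (not the authors' GPU implementation):

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid(A, B):
            # Least-squares rotation/translation mapping points A onto B (Kabsch).
            ca, cb = A.mean(0), B.mean(0)
            H = (A - ca).T @ (B - cb)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T          # D guards against a reflection solution
            return R, cb - R @ ca

        def icp(src, dst, iters=30):
            # src, dst: (N, 3) and (M, 3) point clouds; returns src aligned to dst.
            tree = cKDTree(dst)
            cur = src.copy()
            for _ in range(iters):
                _, idx = tree.query(cur)           # nearest-neighbor correspondences
                R, t = best_rigid(cur, dst[idx])   # best rigid fit to current matches
                cur = cur @ R.T + t
            return cur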

  10. An MHD 3-D solution to the evolution of a CME observed by the STEREO mission in May 2007

    NASA Astrophysics Data System (ADS)

    Berdichevsky, D. B.; Stenborg, G. A.

    2009-12-01

    Nature offers a variety of examples of the dynamics of matter trapped in electromagnetic fields. In particular, sudden ejections of large amounts of solar mass embedded in magnetic field structures develop in the heliosphere, their evolution being affected by the background solar wind. Their plasma and magnetic field values can be obtained by in-situ instruments onboard existing space missions. A particular example of such a process is the passage of a magnetic flux-tube-like structure (~0.1 AU in cross section) exhibiting a flux-rope topology, observed in May 2007 by the in-situ instruments of the Venus Express and MESSENGER missions. STEREO remote observations obtained with the SECCHI instruments allowed the tracking of this quite weak event from its origins at the Sun to approximately the orbit of Mercury. In this work, we i) discuss the dynamic evolution of the event as described by the force-free magnetohydrodynamic solution proposed in [1], and ii) generalize that solution to add curvature. The analytical magnetohydrodynamic solution obtained allows us to make quantitative estimates of the size of the flux tube just after the ejection, the magnetic field intensity, and the mass density. [1] Berdichevsky, D. B., R. P. Lepping, and C. J. Farrugia, Phys. Rev. E, 67(3), 036405, 2003.

  11. Immersive viewing engine

    NASA Astrophysics Data System (ADS)

    Schonlau, William J.

    2006-05-01

    An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation, and virtual reality applications all benefit from the use of head-mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera-equipped teleoperated vehicle. The conventional approach, where imagery from a narrow-field camera onboard the vehicle is presented to the user on a small rectangular screen, is contrasted with an immersive viewing system, where a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to sensed user head orientation, and presented via wide-field eyewear display approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, due to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image, to attain distortion-free viewing of the region appropriate to the user's current head pose, is presented (a sketch of the resampling step follows below), and consideration is given to providing the user with stereo viewing generated from depth-map information derived using stereo-from-motion algorithms.
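
    A simplified sketch of that resampling step, assuming an equirectangular panorama and a pinhole output view (function and parameter names are our own, not the paper's model):

        import numpy as np

        def render_view(pano, yaw, pitch, fov_deg=90.0, out_w=320, out_h=240):
            # Sample a flat view from an equirectangular panorama for a head pose.
            H, W = pano.shape[:2]
            f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)   # focal length, pixels
            xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2,
                                 np.arange(out_h) - out_h / 2)
            # Ray directions in the camera frame, rotated by pitch (x) then yaw (y).
            d = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
            cp, sp = np.cos(pitch), np.sin(pitch)
            cy, sy = np.cos(yaw), np.sin(yaw)
            Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
            Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
            d = d @ (Ry @ Rx).T
            lon = np.arctan2(d[..., 0], d[..., 2])               # longitude in [-pi, pi]
            lat = np.arcsin(d[..., 1] / np.linalg.norm(d, axis=-1))
            u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W  # wrap horizontally
            v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
            return pano[v, u]                                     # nearest-neighbor sample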

  12. JAVA Stereo Display Toolkit

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2008-01-01

    This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware through the same API (application-program interface), and has the ability to simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image (see the sketch below). This is a low-level toolkit that accomplishes simply the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, a 3D cursor, or overlays, all of which can be built using this toolkit.
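
    The band-combination trick behind color anaglyph is easy to illustrate; a NumPy sketch of the idea (the toolkit itself does this in Java/OpenGL):

        import numpy as np

        def color_anaglyph(left_rgb, right_rgb):
            # left_rgb, right_rgb: HxWx3 uint8 arrays of the same size.
            out = right_rgb.copy()           # green/blue channels from the right eye
            out[..., 0] = left_rgb[..., 0]   # red channel from the left eye
            return out                       # view with red/cyan glasses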

  13. Improved stereo matching applied to digitization of greenhouse plants

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Xu, Lihong; Li, Dawei; Gu, Xiaomeng

    2015-03-01

    The digitization of greenhouse plants is an important aspect of digital agriculture. Its ultimate aim is to reconstruct a visible and interoperable virtual plant model on the computer by using state-of-the-art image processing and computer graphics technologies. The most prominent difficulties in the digitization of greenhouse plants are how to acquire the three-dimensional shape data of the plants and how to carry out a realistic stereo reconstruction. To address these issues, an effective method for the digitization of greenhouse plants using a binocular stereo vision system is proposed in this paper. Stereo vision is a technique aimed at inferring depth information from two or more cameras; it consists of four parts: calibration of the cameras, stereo rectification, stereo correspondence search, and triangulation (the pipeline is sketched below). Through the final triangulation procedure, the 3D point cloud of the plant can be obtained. The proposed stereo vision system can facilitate further segmentation of plant organs such as stems and leaves; moreover, it can provide reliable digital samples for the visualization of greenhouse tomato plants.
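
    A compact sketch of those four stages using OpenCV; the calibration inputs (K1, D1, K2, D2, R, T) are assumed to come from a prior cv2.stereoCalibrate run, and the matcher parameters are illustrative:

        import cv2
        import numpy as np

        def stereo_to_points(imgL, imgR, K1, D1, K2, D2, R, T):
            size = imgL.shape[1], imgL.shape[0]
            # Rectification: correspondences then lie on the same scanline.
            R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
            m1 = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
            m2 = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
            rectL = cv2.remap(imgL, m1[0], m1[1], cv2.INTER_LINEAR)
            rectR = cv2.remap(imgR, m2[0], m2[1], cv2.INTER_LINEAR)
            # Correspondence search (semi-global block matching).
            sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                         blockSize=5)
            disp = sgbm.compute(rectL, rectR).astype(np.float32) / 16.0
            # Triangulation of every pixel into a 3-D point cloud.
            return cv2.reprojectImageTo3D(disp, Q)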

  14. Real-Time Visualization Tool Integrating STEREO, ACE, SOHO and the SDO

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Marchant, W.

    2011-12-01

    The STEREO/IMPACT team has developed a new web-based visualization tool for near real-time data from the STEREO instruments, ACE and SOHO as well as relevant models of solar activity. This site integrates images, solar energetic particle, solar wind plasma and magnetic field measurements in an intuitive way using near real-time products from NOAA and other sources to give an overview of recent space weather events. This site enhances the browse tools already available at UC Berkeley, UCLA and Caltech which allow users to visualize similar data from the start of the STEREO mission. Our new near real-time tool utilizes publicly available real-time data products from a number of missions and instruments, including SOHO LASCO C2 images from the SOHO team's NASA site, SDO AIA images from the SDO team's NASA site, STEREO IMPACT SEP data plots and ACE EPAM data plots from the NOAA Space Weather Prediction Center and STEREO spacecraft positions from the STEREO Science Center.

  15. Stereoscopy and the Human Visual System

    PubMed Central

    Banks, Martin S.; Read, Jenny C. A.; Allison, Robert S.; Watt, Simon J.

    2012-01-01

    Stereoscopic displays have become important for many applications, including operation of remote devices, medical imaging, surgery, scientific visualization, and computer-assisted design. But the most significant and exciting development is the incorporation of stereo technology into entertainment: specifically, cinema, television, and video games. In these applications for stereo, three-dimensional (3D) imagery should create a faithful impression of the 3D structure of the scene being portrayed. In addition, the viewer should be comfortable and not leave the experience with eye fatigue or a headache. Finally, the presentation of the stereo images should not create temporal artifacts like flicker or motion judder. This paper reviews current research on stereo human vision and how it informs us about how best to create and present stereo 3D imagery. The paper is divided into four parts: (1) getting the geometry right, (2) depth cue interactions in stereo 3D media, (3) focusing and fixating on stereo images, and (4) how temporal presentation protocols affect flicker, motion artifacts, and depth distortion. PMID:23144596

  16. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romps, David; Oktem, Rusen

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory, covering the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain a 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.

  17. Development of a stereo 3-D pictorial primary flight display

    NASA Technical Reports Server (NTRS)

    Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille

    1989-01-01

    Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth perceptions that might otherwise be lacking. In addition, the third dimension can serve as an extra axis along which information can be encoded. Historically, stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste. In the last example, the source of the stereo images has generally been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described, and the applicability of stereo 3-D displays for aerospace crew stations to meet anticipated needs in the 2000 to 2020 time frame is investigated. Although the actual equipment that could be used in an aerospace vehicle is not currently available, laboratory research is necessary to determine where stereo 3-D enhances the display of information and how such displays should be formatted.

  18. Can we track holes?

    PubMed Central

    Horowitz, Todd S.; Kuzmova, Yoana

    2011-01-01

    The evidence is mixed as to whether the visual system treats objects and holes differently. We used a multiple object tracking task to test the hypothesis that figural objects are easier to track than holes. Observers tracked four of eight items (holes or objects). We used an adaptive algorithm to estimate the speed allowing 75% tracking accuracy. In Experiments 1–5, the distinction between holes and figures was accomplished by pictorial cues, while red-cyan anaglyphs were used to provide the illusion of depth in Experiment 6. We variously used Gaussian pixel noise, photographic scenes, or synthetic textures as backgrounds. Tracking was more difficult when a complex background was visible, as opposed to a blank background. Tracking was easier when disks carried fixed, unique markings. When these factors were controlled for, tracking holes was no more difficult than tracking figures, suggesting that they are equivalent stimuli for tracking purposes. PMID:21334361

  19. Cortical Circuit for Binding Object Identity and Location During Multiple-Object Tracking

    PubMed Central

    Nummenmaa, Lauri; Oksama, Lauri; Glerean, Erico; Hyönä, Jukka

    2017-01-01

    Sustained multifocal attention for moving targets requires binding object identities with their locations. The brain mechanisms of identity-location binding during attentive tracking have remained unresolved. In 2 functional magnetic resonance imaging experiments, we measured participants’ hemodynamic activity during attentive tracking of multiple objects with equivalent (multiple-object tracking) versus distinct (multiple identity tracking, MIT) identities. Task load was manipulated parametrically. Both tasks activated large frontoparietal circuits. MIT led to significantly increased activity in frontoparietal and temporal systems subserving object recognition and working memory. These effects were replicated when eye movements were prohibited. MIT was associated with significantly increased functional connectivity between lateral temporal and frontal and parietal regions. We propose that coordinated activity of this network subserves identity-location binding during attentive tracking. PMID:27913430

  20. Virtual-stereo fringe reflection technique for specular free-form surface testing

    NASA Astrophysics Data System (ADS)

    Ma, Suodong; Li, Bo

    2016-11-01

    Due to their excellent ability to improve the performance of optical systems, free-form optics have attracted extensive interest in many fields, e.g. the optical design of astronomical telescopes, laser beam expanders, spectral imagers, etc. However, compared with traditional simple surfaces, testing such optics is usually more complex and difficult, which has long been a major barrier to the manufacture and application of these optics. Fortunately, owing to the rapid development of electronic devices and computer vision technology, the fringe reflection technique (FRT), with the advantages of a simple system structure, high measurement accuracy, and large dynamic range, is becoming a powerful tool for specular free-form surface testing. In order to obtain absolute surface shape distributions of test objects, two or more cameras are often required in the conventional FRT, which makes the system structure more complex and the measurement cost much higher. Furthermore, high-precision synchronization between the cameras is also a troublesome issue. To overcome these drawbacks, a virtual-stereo FRT for specular free-form surface testing is put forward in this paper. It is able to recover absolute profiles with only a single biprism and one camera, while avoiding the problems of stereo FRT based on binocular or multi-ocular cameras. Preliminary experimental results demonstrate the feasibility of the proposed technique.

  1. Image-size differences worsen stereopsis independent of eye position

    PubMed Central

    Vlaskamp, Björn N. S.; Filippini, Heather R.; Banks, Martin S.

    2010-01-01

    With the eyes in forward gaze, stereo performance worsens when one eye’s image is larger than the other’s. Near, eccentric objects naturally create retinal images of different sizes. Does this mean that stereopsis exhibits deficits for such stimuli? Or does the visual system compensate for the predictable image-size differences? To answer this, we measured discrimination of a disparity-defined shape for different relative image sizes. We did so for different gaze directions, some compatible with the image-size difference and some not. Magnifications of 10–15% caused a clear worsening of stereo performance. The worsening was determined only by relative image size and not by eye position. This shows that no neural compensation for image-size differences accompanies eye-position changes, at least prior to disparity estimation. We also found that a local cross-correlation model for disparity estimation performs like humans in the same task, suggesting that the decrease in stereo performance due to image-size differences is a byproduct of the disparity-estimation method. Finally, we looked for compensation in an observer who has constantly different image sizes due to differing eye lengths. She performed best when the presented images were roughly the same size, indicating that she has compensated for the persistent image-size difference. PMID:19271927
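
    For intuition, a bare-bones local cross-correlation disparity estimator of the general kind referenced above (a simplified NumPy sketch, not the authors' model; bounds checking is omitted for brevity):

        import numpy as np

        def ncc(a, b):
            # Normalized cross-correlation between two equal-sized patches.
            a = a - a.mean()
            b = b - b.mean()
            return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9)

        def disparity_at(left, right, y, x, win=7, max_d=32):
            # Disparity at one pixel of a rectified pair via windowed NCC;
            # assumes x is at least max_d + win//2 from the image border.
            h = win // 2
            patch = left[y - h:y + h + 1, x - h:x + h + 1]
            scores = [ncc(patch, right[y - h:y + h + 1, x - d - h:x - d + h + 1])
                      for d in range(max_d)]
            return int(np.argmax(scores))   # best-matching horizontal shift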

  2. Long-term scale adaptive tracking with kernel correlation filters

    NASA Astrophysics Data System (ADS)

    Wang, Yueren; Zhang, Hong; Zhang, Lei; Yang, Yifan; Sun, Mingui

    2018-04-01

    Object tracking in video sequences has broad applications in both military and civilian domains. However, as the length of the input video sequence increases, a number of problems arise, such as severe object occlusion, object appearance variation, and the object moving out of view (some portion or the entire object leaves the image space). To deal with these problems and identify the object being tracked against a cluttered background, we present a robust appearance model using Speeded Up Robust Features (SURF) and advanced integrated features consisting of Felzenszwalb's Histogram of Oriented Gradients (FHOG) and color attributes. Since re-detection is essential in long-term tracking, we develop an effective object re-detection strategy based on moving-area detection. We employ the popular kernel correlation filters in our algorithm design, which facilitates high-speed object tracking. Our evaluation using the CVPR2013 Object Tracking Benchmark (OTB2013) dataset illustrates that the proposed algorithm outperforms reference state-of-the-art trackers in various challenging scenarios.
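
    The correlation-filter core of such trackers is compact. Below is a linear, MOSSE-style sketch of the idea (the paper's tracker is kernelized and uses FHOG/color features; the preprocessing and windowing a real tracker needs are omitted here):

        import numpy as np

        def gaussian_label(h, w, sigma=2.0):
            # Desired response: a Gaussian peak at the patch center.
            ys, xs = np.mgrid[:h, :w]
            return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

        def train_filter(patch, label, lam=1e-2):
            # Ridge-regularized filter learned in the Fourier domain.
            F = np.fft.fft2(patch)
            Y = np.fft.fft2(label)
            return Y * np.conj(F) / (F * np.conj(F) + lam)

        def locate(filt, patch):
            # Correlate the filter with a new patch; the response peak
            # gives the target's displacement within the search window.
            resp = np.real(np.fft.ifft2(filt * np.fft.fft2(patch)))
            return np.unravel_index(resp.argmax(), resp.shape)

    In practice the filter is updated with a running average each frame, which is what lets correlation-filter trackers run at hundreds of frames per second.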

  3. Mobile robot sense net

    NASA Astrophysics Data System (ADS)

    Konolige, Kurt G.; Gutmann, Steffen; Guzzoni, Didier; Ficklin, Robert W.; Nicewarner, Keith E.

    1999-08-01

    Mobile robot hardware and software are developing to the point where interesting applications for groups of such robots can be contemplated. We envision a set of mobots acting to map and perform surveillance or other tasks within an indoor environment (the Sense Net). A typical application of the Sense Net would be to detect survivors in buildings damaged by earthquake or other disaster, where human searchers would be put at risk. As a team, the Sense Net could reconnoiter a set of buildings faster, more reliably, and more comprehensively than an individual mobot. The team, for example, could dynamically form subteams to perform tasks that cannot be done by individual robots, such as measuring the range to a distant object by forming a long-baseline stereo sensor from a pair of mobots (see the sketch below). In addition, the team could automatically reconfigure itself to handle contingencies such as disabled mobots. This paper is a report of our current progress in developing the Sense Net, after the first year of a two-year project. In our approach, each mobot has sufficient autonomy to perform several tasks, such as mapping unknown areas, navigating to specific positions, and detecting, tracking, characterizing, and classifying human and vehicular activity. We detail how some of these tasks are accomplished, and how the mobot group is tasked.
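
    The payoff of a two-mobot long baseline follows directly from the stereo range equation Z = fB/d: for a fixed disparity error, range error shrinks as the baseline B grows. A quick illustration with made-up numbers (not the Sense Net hardware):

        f = 500.0   # focal length, pixels (illustrative)
        Z = 50.0    # true range to target, meters
        for B in (0.1, 5.0):                    # single-robot vs two-robot baseline, meters
            d = f * B / Z                       # expected disparity, pixels
            dZ = (Z ** 2 / (f * B)) * 0.5       # range error for a 0.5 px disparity error
            print(f"B={B:3.1f} m: disparity={d:5.1f} px, range error ~{dZ:5.2f} m")

    With B = 0.1 m the disparity is a single pixel and the range error is tens of meters; with a 5 m inter-robot baseline the same pixel-level matching accuracy yields sub-meter range error.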

  4. Population of SOHO/STEREO Kreutz sungrazers and the arrival of comet C/2011 W3 (Lovejoy)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sekanina, Zdenek; Kracht, Rainer, E-mail: Zdenek.Sekanina@jpl.nasa.gov, E-mail: r.kracht@t-online.de

    2013-11-20

    We examine properties of the population of SOHO/STEREO (dwarf) Kreutz sungrazing comets from 2004 to 2013, including the arrival rates, peculiar gaps, and a potential relationship to the spectacular comet C/2011 W3 (Lovejoy). Selection effects, influencing the observed distribution, are largely absent among bright dwarf sungrazers, whose temporal sequence implies the presence of a swarm, with objects brighter at maximum than an apparent magnitude of 3 arriving at a peak rate of ∼4.6 yr⁻¹ in late 2010, while those brighter than magnitude 2 arrived at a peak rate of ∼4.3 yr⁻¹ in early 2011, both a few times the pre-swarm rate. The entire population of SOHO/STEREO Kreutz sungrazers also peaked about one year before the appearance of C/2011 W3. Orbital data show, however, that a great majority of bright dwarf sungrazers moved in paths similar to that of comet C/1843 D1, deviating 10° or more from the orbit of C/2011 W3 in the angular elements. The evidence from the swarm and the overall elevated arrival rates suggests the existence of a fragmented sizable sungrazer that shortly preceded C/2011 W3 but was independent of it. On the other hand, these findings represent another warning signal that the expected 21st-century cluster of spectacular Kreutz comets is on its way to perihelion, to arrive during the coming decades. It is only in this sense that we find a parallel link between C/2011 W3 and the spikes in the population of SOHO/STEREO Kreutz sungrazers.

  5. Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach

    PubMed Central

    Tian, Yuan; Guan, Tao; Wang, Cheng

    2010-01-01

    To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278

  6. A z-Vertex Trigger for Belle II

    NASA Astrophysics Data System (ADS)

    Skambraks, S.; Abudinén, F.; Chen, Y.; Feindt, M.; Frühwirth, R.; Heck, M.; Kiesling, C.; Knoll, A.; Neuhaus, S.; Paul, S.; Schieck, J.

    2015-08-01

    The Belle II experiment will go into operation at the upgraded SuperKEKB collider in 2016. SuperKEKB is designed to deliver an instantaneous luminosity L = 8 × 10³⁵ cm⁻² s⁻¹. The experiment will therefore have to cope with a much larger machine background than its predecessor Belle, in particular from events outside of the interaction region. We present the concept of a track trigger, based on a neural network approach, that is able to suppress a large fraction of this background by reconstructing the z (longitudinal) position of the event vertex within the latency of the first level trigger. The trigger uses the hit information from the Central Drift Chamber (CDC) of Belle II within narrow cones in polar and azimuthal angle as well as in transverse momentum (“sectors”), and estimates the z-vertex without explicit track reconstruction. The preprocessing for the track trigger is based on the track information provided by the standard CDC trigger. It takes input from the 2D track finder, adds information from the stereo wires of the CDC, and finds the appropriate sectors in the CDC for each track. Within the sector, the z-vertex is estimated by a specialized neural network, with the drift times from the CDC as input and a continuous output corresponding to the scaled z-vertex (see the sketch below). The neural algorithm will be implemented in programmable hardware. To this end a Virtex 7 FPGA board will be used, which provides at present the most promising solution for a fully parallelized implementation of neural networks or alternative multivariate methods. A high speed interface for external memory will be integrated into the platform, to be able to store the O(10⁹) parameters required. The contribution presents the results of our feasibility studies and discusses the details of the envisaged hardware solution.
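
    To make the idea concrete, here is a toy forward pass of such a network: drift times in, scaled z-vertex out. Layer sizes and weights are illustrative placeholders, not the Belle II configuration:

        import numpy as np

        rng = np.random.default_rng(1)
        n_wires = 27                  # hypothetical number of drift-time inputs per sector
        W1, b1 = rng.standard_normal((81, n_wires)) * 0.1, np.zeros(81)
        W2, b2 = rng.standard_normal((1, 81)) * 0.1, np.zeros(1)

        def z_vertex(drift_times):
            # One hidden layer; the output is the z-vertex scaled to (-1, 1).
            h = np.tanh(W1 @ drift_times + b1)
            return np.tanh(W2 @ h + b2)[0]

        print(z_vertex(rng.random(n_wires)))

    The appeal for a hardware trigger is that this is a fixed sequence of multiply-accumulates, which maps naturally onto a fully parallel FPGA pipeline.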

  7. Studying visual attention using the multiple object tracking paradigm: A tutorial review.

    PubMed

    Meyerhoff, Hauke S; Papenmeier, Frank; Huff, Markus

    2017-07-01

    Human observers are capable of tracking multiple objects among identical distractors based only on their spatiotemporal information. Since the first report of this ability in the seminal work of Pylyshyn and Storm (1988, Spatial Vision, 3, 179-197), multiple object tracking has attracted many researchers. A reason for this is that the attentional processes studied with the multiple object tracking paradigm are commonly argued to match the attentional processing during real-world tasks such as driving or team sports. We argue that multiple object tracking provides a good means to study the broader topic of continuous and dynamic visual attention. Indeed, several (partially contradicting) theories of attentive tracking have been proposed in the almost 30 years since its first report, and a large body of research has been conducted to test these theories. Given the richness and diversity of this literature, the aim of this tutorial review is to provide researchers who are new to the field of multiple object tracking with an overview of the paradigm, its basic manipulations, and its links to other paradigms investigating visual attention and working memory. Further, we aim to review current theories of tracking as well as their empirical evidence. Finally, we review the state of the art in the most prominent research fields of multiple object tracking and how this research has helped to understand visual attention in dynamic settings.

  8. Object tracking using plenoptic image sequences

    NASA Astrophysics Data System (ADS)

    Kim, Jae Woo; Bae, Seong-Joon; Park, Seongjin; Kim, Do Hyung

    2017-05-01

    Object tracking is a very important problem in computer vision research. Among the difficulties of object tracking, the partial occlusion problem is one of the most serious and challenging. To address it, we propose novel approaches to object tracking on plenoptic image sequences that take advantage of the refocusing capability plenoptic images provide, taking as input sequences of focal stacks constructed from the plenoptic image sequences. The proposed image selection algorithms select, from each focal stack, the optimal image that maximizes tracking accuracy. A focus measure approach and a confidence measure approach were proposed for image selection (a sketch of the focus-measure idea follows below), and both were validated by experiments using thirteen plenoptic image sequences that include heavily occluded target objects. The experimental results showed that the proposed approaches performed favorably compared with conventional 2D object tracking algorithms.
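
    A generic sketch of a focus-measure selector, using the common variance-of-Laplacian sharpness score (an assumption for illustration, not necessarily the authors' measure):

        import cv2

        def sharpest(focal_stack, bbox):
            # focal_stack: list of grayscale images refocused at different depths;
            # bbox: (x, y, w, h) of the tracked target. Returns the image in
            # which the target region is sharpest, i.e. likely in focus.
            x, y, w, h = bbox
            def focus(img):
                roi = img[y:y + h, x:x + w]
                return cv2.Laplacian(roi, cv2.CV_64F).var()
            return max(focal_stack, key=focus)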

  9. Mastcam Stereo Analysis and Mosaics (MSAM)

    NASA Astrophysics Data System (ADS)

    Deen, R. G.; Maki, J. N.; Algermissen, S. S.; Abarca, H. E.; Ruoff, N. A.

    2017-06-01

    Describes a new PDART task that will generate stereo analysis products (XYZ, slope, etc.), terrain meshes, and mosaics (stereo, ortho, and Mast/Nav combos) for all MSL Mastcam images and deliver the results to PDS.

  10. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
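
    The band-pass pyramid at the heart of such systems can be sketched in a few lines with OpenCV (a generic Laplacian-pyramid construction, not the patented implementation):

        import cv2

        def laplacian_pyramid(img, levels=4):
            # Returns band-pass levels plus the final low-pass residual;
            # each level halves in resolution.
            cur = img.astype("float32")
            pyr = []
            for _ in range(levels):
                down = cv2.pyrDown(cur)
                up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
                pyr.append(cur - up)      # band-pass detail at this scale
                cur = down
            pyr.append(cur)               # coarsest low-pass level
            return pyr

    Matching on band-pass levels rather than raw intensities makes the correlation robust to brightness offsets between the two cameras.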

  11. Virtual rigid body: a new optical tracking paradigm in image-guided interventions

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Lee, David S.; Deshmukh, Nishikant; Boctor, Emad M.

    2015-03-01

    Tracking technology is often necessary for image-guided surgical interventions. Optical tracking is one of the options, but it suffers from line-of-sight and workspace limitations. Optical tracking is accomplished by attaching a rigid body marker, bearing a pattern for pose detection, onto a tool or device. A larger rigid body results in more accurate tracking, but at the same time its size limits its usage in a crowded surgical workspace. This work presents a prototype of a novel optical tracking method using a virtual rigid body (VRB). We define the VRB as a 3D rigid body marker in the form of a pattern projected onto a surface from a light source. Its pose can be recovered by observing the projected pattern with a stereo-camera system. The rigid body's size is no longer physically limited, as we can manufacture small light sources. Conventional optical tracking also requires line of sight to the rigid body. The VRB overcomes these limitations by detecting a pattern projected onto the surface: we can project the pattern onto a region of interest, allowing the pattern to always be in the view of the optical tracker, which helps to decrease the occurrence of occlusions. This manuscript describes the method and compares its results with conventional optical tracking in experimental setups using known motions. Experiments with an optical tracker and a linear stage yielded targeting errors of 0.38 ± 0.28 mm with our method, compared to 0.23 ± 0.22 mm with conventional optical markers. Another experiment, replacing the linear stage with a robot arm, resulted in rotational errors of 0.50 ± 0.31° and 2.68 ± 2.20°, and translation errors of 0.18 ± 0.10 mm and 0.03 ± 0.02 mm, respectively.

  12. Boundary Layer Remote Sensing with Combined Active and Passive Techniques: GPS Radio Occultation and High-Resolution Stereo Imaging (WindCam) Small Satellite Concept

    NASA Technical Reports Server (NTRS)

    Mannucci, A.J.; Wu, D.L.; Teixeira, J.; Ao, C.O.; Xie, F.; Diner, D.J.; Wood, R.; Turk, Joe

    2012-01-01

    Objective: to make significant progress in understanding low-cloud boundary layer processes, the single largest uncertainty in climate projections. Radio occultation has unique features suited to boundary layer remote sensing: (1) it penetrates clouds; (2) it provides very high vertical resolution (approximately 50-100 m); and (3) it is sensitive to thermodynamic variables.

  13. The Use of Sun Elevation Angle for Stereogrammetric Boreal Forest Height in Open Canopies

    NASA Technical Reports Server (NTRS)

    Montesano, Paul M.; Neigh, Christopher; Sun, Guoqing; Duncanson, Laura Innice; Van Den Hoek, Jamon; Ranson, Kenneth Jon

    2017-01-01

    Stereogrammetry applied to globally available high-resolution spaceborne imagery (HRSI; less than 5 m spatial resolution) yields fine-scaled digital surface models (DSMs) of elevation. These DSMs may represent elevations that range from the ground to the vegetation canopy surface, are produced from stereoscopic image pairs (stereo pairs) that have a variety of acquisition characteristics, and have been coupled with lidar data of forest structure and ground surface elevation to examine forest height. This work explores surface elevations from HRSI DSMs derived from two types of acquisitions in open canopy forests. We (1) apply an automated mass-production stereogrammetry workflow to along-track HRSI stereo pairs, (2) identify multiple spatially coincident DSMs whose stereo pairs were acquired under different solar geometry, (3) vertically co-register these DSMs using coincident spaceborne lidar footprints (from ICESat-GLAS) as reference, and (4) examine differences in surface elevations between the reference lidar and the co-registered HRSI DSMs associated with two general types of acquisitions (DSM types) from different sun elevation angles. We find that these DSM types, distinguished by sun elevation angle at the time of stereo pair acquisition, are associated with different surface elevations estimated from automated stereogrammetry in open canopy forests. For DSM values with corresponding reference ground surface elevation from spaceborne lidar footprints in open canopy northern Siberian Larix forests with slopes less than 10°, our results show that HRSI DSMs acquired with sun elevation angles greater than 35° and those less than 25° (during snow-free conditions) produced characteristic and consistently distinct distributions of elevation differences from the reference lidar. The former include DSMs of near-ground surfaces with root mean square errors less than 0.68 m relative to lidar. The latter, particularly those with angles less than 10°, show distributions with larger differences from lidar that are associated with open canopy forests whose vegetation surface elevations are captured. Terrain aspect did not have a strong effect on the distribution of vegetation surfaces. Using the two DSM types together, the distribution of DSM-differenced heights in forests (6.0 m, σ = 1.4 m) was consistent with the distribution of plot-level mean tree heights (6.5 m, σ = 1.2 m). We conclude that the variation in sun elevation angle at the time of stereo pair acquisition can create illumination conditions conducive to capturing elevations of surfaces either near the ground or associated with the vegetation canopy. Knowledge of HRSI acquisition solar geometry and snow cover can be used to understand and combine stereogrammetric surface elevation estimates to co-register and difference overlapping DSMs, providing a means to map forest height at fine scales, resolving the vertical structure of groups of trees from spaceborne platforms in open canopy forests.

  14. Bilayer segmentation of webcam videos using tree-based classifiers.

    PubMed

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.

  15. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on a Portable Four-Camera Stereo Vision System

    NASA Astrophysics Data System (ADS)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building fine 3D models from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas, while a complete 3D model should contain detailed descriptions of both a building's appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which provides a professional solution for fast 3D data acquisition, processing, integration, reconstruction, and visualization. Given a specific scene or object, it can directly collect physical geometric information such as the positions, sizes, and shapes of an object or a scene, as well as physical property information such as materials and textures. On the basis of this information, a 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture in two typical cultural tourism zones, namely Tibetan and Qiang ethnic minority villages in the Sichuan Jiuzhaigou Scenic Area and Tujia ethnic minority villages in the Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction, and cultural tourism.

  16. A Mobile Service Oriented Multiple Object Tracking Augmented Reality Architecture for Education and Learning Experiences

    ERIC Educational Resources Information Center

    Rattanarungrot, Sasithorn; White, Martin; Newbury, Paul

    2014-01-01

    This paper describes the design of our service-oriented architecture to support mobile multiple object tracking augmented reality applications applied to education and learning scenarios. The architecture is composed of a mobile multiple object tracking augmented reality client, a web service framework, and dynamic content providers. Tracking of…

  17. Confidence-Based Data Association and Discriminative Deep Appearance Learning for Robust Online Multi-Object Tracking.

    PubMed

    Bae, Seung-Hwan; Yoon, Kuk-Jin

    2018-03-01

    Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to that moment. It remains a difficult problem in complex scenes because of the large ambiguity in associating multiple objects across consecutive frames and the low discriminability between objects' appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since conventional appearance learning methods do not provide a rich representation that can distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning to improve appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-art batch and online tracking methods, and demonstrate the effectiveness and usefulness of the proposed methods for online multi-object tracking.

  18. Precision Relative Positioning for Automated Aerial Refueling from a Stereo Imaging System

    DTIC Science & Technology

    2015-03-01

    Master's thesis by Kyle P. Werner, 2Lt, USAF (AFIT-ENG-MS-15-M-048), presented to the Faculty, Department of Electrical and Computer Engineering, Graduate School, Air Force Institute of Technology. Approved for public release; distribution unlimited.

  19. A comparison of static near stereo acuity in youth baseball/softball players and non-ball players.

    PubMed

    Boden, Lauren M; Rosengren, Kenneth J; Martin, Daniel F; Boden, Scott D

    2009-03-01

    Although many aspects of vision have been investigated in professional baseball players, few studies have been performed in developing athletes. The issue of whether youth baseball players have superior stereopsis to nonplayers has not been addressed specifically. The purpose of this study was to determine if youth baseball/softball players have better stereo acuity than non-ball players. Informed consent was obtained from 51 baseball/softball players and 52 non-ball players (ages 10 to 18 years). Subjects completed a questionnaire, and their static near stereo acuity was measured using the Randot Stereotest (Stereo Optical Company, Chicago, Illinois). Stereo acuity was measured as the seconds of arc between the last pair of images correctly distinguished by the subject. The mean stereo acuity score was 25.5 +/- 1.7 seconds of arc in the baseball/softball players and 56.2 +/- 8.4 seconds of arc in the non-ball players. This difference was statistically significant (P < 0.00001). In addition, a perfect stereo acuity score of 20 seconds of arc was seen in 61% of the ball players and only 23% of the non-ball players (P = 0.0001). Youth baseball/softball players had significantly better static stereo acuity than non-ball players, comparable to professional ball players.

  20. Motion-oriented high speed 3-D measurements by binocular fringe projection using binary aperiodic patterns.

    PubMed

    Feng, Shijie; Chen, Qian; Zuo, Chao; Tao, Tianyang; Hu, Yan; Asundi, Anand

    2017-01-23

    Fringe projection is an extensively used technique for high-speed three-dimensional (3-D) measurements of dynamic objects. To precisely retrieve a moving object at the pixel level, researchers prefer to project a sequence of fringe images onto its surface. However, the motion often leads to artifacts in the reconstructions due to the sequential recording of the set of patterns. In order to reduce the adverse impact of movement, we present a novel high-speed 3-D scanning technique combining fringe projection and stereo. First, a promising measuring speed is achieved by modifying the traditional aperiodic sinusoidal patterns so that the fringe images can be cast at kilohertz rates with the widely used defocusing strategy. Next, a temporal intensity tracing algorithm is developed to further alleviate the influence of motion by accurately tracing the ideal intensity for stereo matching. Then, a combined cost measure is suggested to robustly estimate the matching cost for each pixel, and lastly a three-step refinement framework follows, both to eliminate outliers caused by the motion and to obtain sub-pixel disparity results for 3-D reconstruction. In comparison with a traditional method that does not consider the effect of motion, experimental results show that the reconstruction accuracy for dynamic objects can be improved by an order of magnitude with the proposed method.
