Sample records for stereo-based markerless human

  1. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton automatically. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a stereo camera pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
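
    As a rough illustration of the contour-analysis step, the sketch below computes signed curvature along a closed 2-D contour. It is not the authors' implementation: central differences on a sampled contour stand in for the B-spline parameterization, and the circle test case is purely synthetic.

```python
import numpy as np

def contour_curvature(points):
    """Signed curvature of a closed 2-D contour sampled at N points.

    Discrete stand-in for the paper's B-spline parameterization:
    kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2), a formula that is
    invariant to how the contour is parameterized.
    """
    x, y = points[:, 0], points[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

# Sanity check on a unit circle, whose curvature is 1 everywhere; on a
# real silhouette, curvature extrema mark candidate part boundaries
# (head, hands, feet).
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
kappa = contour_curvature(circle)
```

    On a real silhouette one would look for extrema of `kappa` rather than its median; the circle merely verifies the formula.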

  2. An automatic markerless registration method for neurosurgical robotics based on an optical camera.

    PubMed

    Meng, Fanle; Zhai, Fangwen; Zeng, Bowei; Ding, Hui; Wang, Guangzhi

    2018-02-01

    Current markerless registration methods for neurosurgical robotics use the facial surface to match the robot space with the image space, and acquisition of the facial surface usually requires manual interaction and constrains the patient to a supine position. To overcome these drawbacks, we propose a registration method that is automatic and does not constrain patient position. An optical camera attached to the robot end effector captures images around the patient's head from multiple views. Then, high coverage of the head surface is reconstructed from the images through multi-view stereo vision. Since the acquired head surface point cloud contains color information, a specific mark that is manually drawn on the patient's head prior to the capture procedure can be extracted to automatically accomplish coarse registration rather than using facial anatomic landmarks. Then, fine registration is achieved by registering the high coverage of the head surface without relying solely on the facial region, thus eliminating patient position constraints. The head surface was acquired by the camera with good repeatability. The average target registration error of 8 different patient positions measured with targets inside a head phantom was [Formula: see text], while the mean surface registration error was [Formula: see text]. The method proposed in this paper achieves automatic markerless registration in multiple patient positions and guarantees registration accuracy inside the head. This method provides a new approach for establishing the spatial relationship between the image space and the robot space.

  3. A markerless system based on smartphones and webcam for the measure of step length, width and duration on treadmill.

    PubMed

    Barone, V; Verdini, F; Burattini, L; Di Nardo, F; Fioretti, S

    2016-03-01

    A markerless low-cost prototype has been developed for the determination of some spatio-temporal parameters of human gait: step length, step width and cadence have been considered. Only a smartphone and a high-definition webcam have been used. The signals obtained by the accelerometer embedded in the smartphone are used to recognize the heel strike events, while the feet positions are calculated through image processing of the webcam stream. Step length and width are computed during gait trials on a treadmill at various speeds (3, 4 and 5 km/h). Six subjects have been tested for a total of 504 steps. Results were compared with those obtained by a stereo-photogrammetric system (Elite, BTS Engineering). The maximum average errors were 3.7 cm (5.36%) for the right step length and 1.63 cm (15.16%) for the right step width at 5 km/h. The maximum average error for step duration was 0.02 s (1.69%) at 5 km/h for the right steps. The system is characterized by a very high level of automation that allows its use by non-expert users in non-structured environments. A low-cost system able to automatically provide a reliable and repeatable evaluation of some gait events and parameters during treadmill walking is also relevant from a clinical point of view, because it allows the analysis of hundreds of steps and consequently an analysis of their variability. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
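
    The heel-strike detection idea can be sketched as a simple peak picker on the vertical accelerometer trace. This is a hypothetical illustration, not the authors' algorithm; the threshold and refractory period are made-up values.

```python
import numpy as np

def heel_strikes(acc, fs, thresh=1.5, refractory=0.3):
    """Flag heel-strike events as sharp peaks in vertical acceleration.

    A sample is an event if it exceeds `thresh` (in g), is a local
    maximum, and occurs at least `refractory` seconds after the
    previous event. Thresholds here are illustrative, not the paper's.
    """
    min_gap = int(refractory * fs)
    events, last = [], -min_gap
    for i in range(1, len(acc) - 1):
        if (acc[i] > thresh and acc[i] >= acc[i - 1]
                and acc[i] > acc[i + 1] and i - last >= min_gap):
            events.append(i)
            last = i
    return events

# Synthetic trace: 1 g baseline with a spike every 0.6 s (a ~100
# steps/min cadence); step duration follows from the event spacing.
fs = 100.0
sig = np.ones(600)
sig[60::60] = 2.5
strikes = heel_strikes(sig, fs)
```

    In the actual system these event times are paired with foot positions from the webcam stream to yield step length and width.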

  4. Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition

    NASA Astrophysics Data System (ADS)

    Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso

    2005-04-01

    Human movement analysis is generally performed with marker-based systems, which allow reconstructing, with high accuracy, the trajectories of markers placed at specific points on the human body. Marker-based systems, however, have some drawbacks that can be overcome by video systems applying markerless techniques. In this paper, a specifically designed computer vision technique for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre decomposition, and a Principal Component Analysis (PCA) technique is used to circumscribe the region of interest. Results obtained in both synthetic and experimental tests show a significant reduction in computational cost, with no significant loss of tracking accuracy.

  5. Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics.

    PubMed

    Patrizi, Alfredo; Pennestrì, Ettore; Valentini, Pier Paolo

    2016-01-01

    The paper deals with the comparison between a high-end marker-based acquisition system and a low-cost marker-less methodology for the assessment of human posture during working tasks. The low-cost methodology is based on the use of a single Microsoft Kinect V1 device. The high-end acquisition system is the BTS SMART, which requires reflective markers to be placed on the subject's body. Three practical working activities involving object lifting and displacement have been investigated. The operational risk has been evaluated according to the lifting equation proposed by the American National Institute for Occupational Safety and Health. The results of the study show that the risk multipliers computed from the two acquisition methodologies are very close for all the analysed activities. In agreement with this outcome, the marker-less methodology based on the Microsoft Kinect V1 device seems very promising for promoting the dissemination of computer-aided assessment of ergonomics while maintaining good accuracy and affordable costs. PRACTITIONER'S SUMMARY: The study is motivated by the increasing interest in on-site working ergonomics assessment. We compared a low-cost marker-less methodology with a high-end marker-based system. We tested them on three different working tasks, assessing the working risk of lifting loads. The two methodologies showed comparable precision in all the investigations.
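
    The NIOSH lifting equation mentioned above has a standard metric form, RWL = LC × HM × VM × DM × AM × FM × CM, from which the study's risk multipliers derive. A minimal sketch (with the FM and CM table lookups replaced by direct arguments for brevity) is:

```python
def niosh_rwl(H, V, D, A, FM=1.0, CM=1.0):
    """Recommended Weight Limit (RWL) from the revised NIOSH lifting
    equation, metric form.

    H: horizontal hand distance (cm), V: vertical hand height (cm),
    D: vertical travel distance (cm), A: asymmetry angle (degrees).
    FM (frequency) and CM (coupling) normally come from lookup tables.
    """
    LC = 23.0                         # load constant, kg
    HM = min(1.0, 25.0 / H)           # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75.0)  # vertical multiplier
    DM = 0.82 + 4.5 / D               # distance multiplier
    AM = 1.0 - 0.0032 * A             # asymmetry multiplier
    return LC * HM * VM * DM * AM * FM * CM

# In the ideal posture every multiplier is 1, so the RWL equals the
# 23 kg load constant; the lifting index is load divided by RWL.
rwl = niosh_rwl(H=25.0, V=75.0, D=25.0, A=0.0)
li = 15.0 / rwl   # a 15 kg load in the ideal posture gives LI < 1
```

    Marker-based and marker-less systems feed this equation the same way: they only differ in how H, V, D and A are measured.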

  6. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

    In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to have markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames needed for correction. This can speed up reconstruction by eliminating the unnecessary subdivision of frames.

  7. Markerless rat head motion tracking using structured light for brain PET imaging of unrestrained awake small animals

    NASA Astrophysics Data System (ADS)

    Miranda, Alan; Staelens, Steven; Stroobants, Sigrid; Verhaeghe, Jeroen

    2017-03-01

    Preclinical positron emission tomography (PET) imaging in small animals is generally performed under anesthesia to immobilize the animal during scanning. More recently, for rat brain PET studies, methods to perform scans of unrestrained awake rats are being developed in order to avoid the unwanted effects of anesthesia on the brain response. Here, we investigate the use of a projected structure stereo camera to track the motion of the rat head during the PET scan. The motion information is then used to correct the PET data. The stereo camera calculates a 3D point cloud representation of the scene, and tracking is performed by point cloud matching using the iterative closest point algorithm. The main advantage of the proposed motion tracking is that no intervention, e.g. for marker attachment, is needed. A manually moved microDerenzo phantom experiment and 3 awake rat [18F]FDG experiments were performed to evaluate the proposed tracking method. The tracking accuracy was 0.33 mm rms. After motion-corrected image reconstruction, the microDerenzo phantom was recovered, albeit with some loss of resolution. The reconstructed FWHM of the 2.5 and 3 mm rods increased by 0.94 and 0.51 mm, respectively, in comparison with the motion-free case. In the rat experiments, the average tracking success rate was 64.7%. The correlation of relative brain regional [18F]FDG uptake between the anesthesia and awake scan reconstructions increased from on average 0.291 (not significant) before correction to 0.909 (p  <  0.0001) after motion correction. Markerless motion tracking using structured light can be successfully used for tracking of the rat head for motion correction in awake rat PET scans.
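
    The point-cloud matching step relies on the iterative closest point (ICP) algorithm. The following is a minimal point-to-point ICP sketch in NumPy, not the camera system's implementation; the synthetic cloud and the small test motion are purely illustrative.

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q
    (the Kabsch / Procrustes solution via SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    """Minimal point-to-point ICP: pair each point of P with its
    nearest neighbour in Q, fit the rigid transform, apply, repeat."""
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small clouds).
        d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
        R, t = best_rigid(P, Q[d2.argmin(axis=1)])
        P = P @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Demo: recover a small known motion of a synthetic point cloud.
rng = np.random.default_rng(0)
cloud = rng.standard_normal((60, 3))
a = 0.01                              # small rotation about z, radians
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
moved = cloud @ R_true.T + np.array([0.005, -0.01, 0.008])
R_est, t_est = icp(cloud, moved)
```

    Real systems use spatial indexing (e.g. k-d trees) for the nearest-neighbour search; the brute-force version keeps the sketch short.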

  8. Markerless human motion tracking using hierarchical multi-swarm cooperative particle swarm optimization.

    PubMed

    Saini, Sanjay; Zakaria, Nordin; Rambli, Dayang Rohaya Awang; Sulaiman, Suziah

    2015-01-01

    The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-view video sequences has led to a number of solutions based on metaheuristics, the most recent form of which is Particle Swarm Optimization (PSO). However, classical PSO suffers from premature convergence and is easily trapped in local optima, significantly affecting the tracking accuracy. To overcome these drawbacks, we have developed a method for the problem based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem where the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both the silhouette and edge likelihoods are used in the fitness function. Experiments using the Brown and HumanEva-II datasets demonstrated that H-MCPSO performs better than two leading alternative approaches, the Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support the claims.
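
    The core optimizer can be sketched as plain global-best PSO. The snippet below is a deliberately simplified, single-swarm stand-in for H-MCPSO, with a sphere function replacing the silhouette-and-edge fitness described in the abstract.

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best PSO; a single-swarm simplification of the
    hierarchical multi-swarm variant (H-MCPSO) used in the paper."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pcost = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5       # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([fitness(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, float(pcost.min())

# In the paper, fitness compares silhouette and edge maps of the
# projected 34-DOF body model with the image; a 5-D sphere function
# stands in here for illustration.
best, best_cost = pso(lambda p: float((p ** 2).sum()), dim=5)
```

    The hierarchical multi-swarm extension partitions the 34 pose dimensions among cooperating sub-swarms, which is what counteracts the premature convergence noted in the abstract.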

  9. Real time markerless motion tracking using linked kinematic chains

    DOEpatents

    Luck, Jason P [Arvada, CO; Small, Daniel E [Albuquerque, NM

    2007-08-14

    A markerless method is described for tracking the motion of subjects in a three dimensional environment using a model based on linked kinematic chains. The invention is suitable for tracking robotic, animal or human subjects in real-time using a single computer with inexpensive video equipment, and does not require the use of markers or specialized clothing. A simple model of rigid linked segments is constructed for the subject and tracked using three dimensional volumetric data collected by a multiple camera video imaging system. A physics based method is then used to compute forces to align the model with subsequent volumetric data sets in real-time. The method is able to handle occlusion of segments, accommodates joint limits, velocity constraints, and collision constraints, and provides for error recovery. The method further provides for elimination of singularities in Jacobian based calculations, which has been problematic in alternative methods.

  10. Establishment of a Cre recombinase based mutagenesis protocol for markerless gene deletion in Streptococcus suis.

    PubMed

    Koczula, A; Willenborg, J; Bertram, R; Takamatsu, D; Valentin-Weigand, P; Goethe, R

    2014-12-01

    The lack of knowledge about pathogenicity mechanisms of Streptococcus (S.) suis is, at least partially, attributed to limited methods for its genetic manipulation. Here, we established a Cre-lox based recombination system for markerless gene deletions in S. suis serotype 2 with high selective pressure and without undesired side effects. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Automatic PSO-Based Deformable Structures Markerless Tracking in Laparoscopic Cholecystectomy

    NASA Astrophysics Data System (ADS)

    Djaghloul, Haroun; Batouche, Mohammed; Jessel, Jean-Pierre

    An automatic and markerless tracking method for deformable structures (digestive organs) during laparoscopic cholecystectomy interventions is presented, which uses particle swarm optimization (PSO) behaviour and preoperative a priori knowledge. The shape associated with the global best particles of the population determines a coarse representation of the targeted organ (the gallbladder) in monocular laparoscopic colour images. The swarm behaviour is directed by a new fitness function to be optimized to improve the detection and tracking performance. The function is defined by a linear combination of two terms, namely, the human a priori knowledge term (H) and the particle density term (D). Within the limits of standard PSO characteristics, experimental results on both synthetic and real data show the effectiveness and robustness of our method. Indeed, without needing explicit initialization, it outperforms existing methods (such as active contours, deformable models and Gradient Vector Flow) in accuracy and convergence rate.

  12. Noninvasive, three-dimensional full-field body sensor for surface deformation monitoring of human body in vivo

    NASA Astrophysics Data System (ADS)

    Chen, Zhenning; Shao, Xinxing; He, Xiaoyuan; Wu, Jialin; Xu, Xiangyang; Zhang, Jinlin

    2017-09-01

    Noninvasive, three-dimensional (3-D), full-field surface deformation measurements of the human body are important for biomedical investigations. We proposed a 3-D noninvasive, full-field body sensor based on stereo digital image correlation (stereo-DIC) for surface deformation monitoring of the human body in vivo. First, by applying an improved water-transfer printing (WTP) technique to transfer optimized speckle patterns onto the skin, the body sensor was conveniently and harmlessly fabricated directly onto the human body. Then, stereo-DIC was used to achieve 3-D noncontact and noninvasive surface deformation measurements. The accuracy and efficiency of the proposed body sensor were verified and discussed by considering different complexions. Moreover, the fabrication of speckle patterns on human skin, which has always been considered a challenging problem, was shown to be feasible, effective, and harmless as a result of the improved WTP technique. An application of the proposed stereo-DIC-based body sensor was demonstrated by measuring the pulse wave velocity of human carotid artery.

  13. Markerless laser registration in image-guided oral and maxillofacial surgery.

    PubMed

    Marmulla, Rüdiger; Lüth, Tim; Mühling, Joachim; Hassfeld, Stefan

    2004-07-01

    The use of registration markers in computer-assisted surgery entails high logistic costs and effort. Markerless patient registration using laser scan surface registration techniques is a challenging new method. The present study was performed to evaluate the clinical accuracy in finding defined target points within the surgical site after markerless patient registration in image-guided oral and maxillofacial surgery. Twenty consecutive patients with different cranial diseases were scheduled for computer-assisted surgery. Data set alignment between the surgical site and the computed tomography (CT) data set was performed by markerless laser scan surface registration of the patient's face. Intraoral rigidly attached registration markers were used as target points, which had to be detected by an infrared pointer. The Surgical Segment Navigator SSN++ was used for all procedures. SSN++ is an investigative product based on the SSN system that had previously been developed by the presenting authors with the support of Carl Zeiss (Oberkochen, Germany). SSN++ is connected to a Polaris infrared camera (Northern Digital, Waterloo, Ontario, Canada) and to a Minolta VI 900 3D digitizer (Tokyo, Japan) for high-resolution laser scanning. Minimal differences in shape between the laser scan surface and the surface generated from the CT data set could be detected. Nevertheless, a high-resolution laser scan of the skin surface allows for precise patient registration (mean deviation 1.1 mm, maximum deviation 1.8 mm). Radiation load, logistic costs, and effort arising from the planning of computer-assisted surgery of the head can be reduced because native (markerless) CT data sets can be used for laser scan-based surface registration.

  14. Three-dimensional displays and stereo vision

    PubMed Central

    Westheimer, Gerald

    2011-01-01

    Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes. PMID:21490023

  15. Noninvasive, three-dimensional full-field body sensor for surface deformation monitoring of human body in vivo.

    PubMed

    Chen, Zhenning; Shao, Xinxing; He, Xiaoyuan; Wu, Jialin; Xu, Xiangyang; Zhang, Jinlin

    2017-09-01

    Noninvasive, three-dimensional (3-D), full-field surface deformation measurements of the human body are important for biomedical investigations. We proposed a 3-D noninvasive, full-field body sensor based on stereo digital image correlation (stereo-DIC) for surface deformation monitoring of the human body in vivo. First, by applying an improved water-transfer printing (WTP) technique to transfer optimized speckle patterns onto the skin, the body sensor was conveniently and harmlessly fabricated directly onto the human body. Then, stereo-DIC was used to achieve 3-D noncontact and noninvasive surface deformation measurements. The accuracy and efficiency of the proposed body sensor were verified and discussed by considering different complexions. Moreover, the fabrication of speckle patterns on human skin, which has always been considered a challenging problem, was shown to be feasible, effective, and harmless as a result of the improved WTP technique. An application of the proposed stereo-DIC-based body sensor was demonstrated by measuring the pulse wave velocity of human carotid artery. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  16. Implementation of Augmented Reality Technology in Sangiran Museum with Vuforia

    NASA Astrophysics Data System (ADS)

    Purnomo, F. A.; Santosa, P. I.; Hartanto, R.; Pratisto, E. H.; Purbayu, A.

    2018-03-01

    Archaeological objects are evidence of ancient life, with ages of millions of years. Objects discovered at the Sangiran Museum are preserved and protected from potential damage. This research develops an Augmented Reality application for the museum that displays virtual information about the ancient objects on display. The content includes information as text, audio, and animated 3D models representing the ancient objects. This study emphasizes the 3D markerless recognition process using the Vuforia Augmented Reality (AR) system, so that visitors can view the exhibition objects from different viewpoints. Based on the test results, by registering image targets at 25° angle intervals, 3D markerless keypoint features can be detected from different viewpoints. The device must meet minimal specifications of a dual-core 1.2 GHz processor, Power VR SG5X GPU, 8 MP autofocus camera and 1 GB of memory to run the application. The average success rate of the AR application in detecting museum exhibition objects with 3D markerless recognition was 40% for a single view, 86% for multiview (angles 0°-180°) and 100% for multiview (angles 0°-360°). The detection distance ranges from 23 cm up to 540 cm, with an average response time of 12 seconds to detect the 3D markerless target.

  17. Fast Markerless Tracking for Augmented Reality in Planar Environment

    NASA Astrophysics Data System (ADS)

    Basori, Ahmad Hoirul; Afif, Fadhil Noer; Almazyad, Abdulaziz S.; AbuJabal, Hamza Ali S.; Rehman, Amjad; Alkawaz, Mohammed Hazim

    2015-12-01

    Markerless tracking for augmented reality should not only be accurate but also fast enough to provide seamless synchronization between the real and the virtual. Currently reported methods show that vision-based tracking is accurate but requires high computational power. This paper proposes a real-time hybrid method for tracking unknown environments in markerless augmented reality. The proposed method combines a vision-based approach with accelerometer and gyroscope sensors as a camera pose predictor. To align the augmentation relative to camera motion, tracking is done by substituting feature-based camera estimation with a combination of inertial sensors and a complementary filter to provide a more dynamic response. The proposed method managed to track unknown environments with faster processing time compared to available feature-based approaches. Moreover, the proposed method can sustain its estimation in situations where feature-based tracking loses track. The collaboration of sensor tracking performed the task at about 22.97 FPS, up to five times faster than the feature-based tracking method used as comparison. Therefore, the proposed method can be used to track unknown environments without depending on the number of features in the scene, while requiring lower computational cost.
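
    The sensor-fusion step described above typically reduces, per rotation axis, to a one-line complementary filter. The function below is a generic textbook form, not the authors' implementation; the gain `alpha` is an assumed value.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step: integrate the gyro for short-term accuracy and
    pull toward the accelerometer estimate to cancel long-term drift.

    `alpha` close to 1 trusts the gyro; `1 - alpha` weights the noisy
    but drift-free accelerometer angle.
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# With a zero gyro reading, the estimate converges to the
# accelerometer reference instead of drifting.
angle = 0.0
for _ in range(300):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_angle=1.0, dt=0.01)
```

    The appeal over a Kalman filter is cost: one multiply-add per axis per sample, which is what makes the inertial pose predictor cheap enough for the reported frame rates.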

  18. Evaluation of Simulated Clinical Breast Exam Motion Patterns Using Marker-Less Video Tracking

    PubMed Central

    Azari, David P.; Pugh, Carla M.; Laufer, Shlomi; Kwan, Calvin; Chen, Chia-Hsiung; Yen, Thomas Y.; Hu, Yu Hen; Radwin, Robert G.

    2016-01-01

    Objective This study investigates using marker-less video tracking to evaluate hands-on clinical skills during simulated clinical breast examinations (CBEs). Background There are currently no standardized and widely accepted CBE screening techniques. Methods Experienced physicians attending a national conference conducted simulated CBEs presenting different pathologies with distinct tumorous lesions. Single hand exam motion was recorded and analyzed using marker-less video tracking. Four kinematic measures were developed to describe temporal (time pressing and time searching) and spatial (area covered and distance explored) patterns. Results Mean differences between time pressing, area covered, and distance explored varied across the simulated lesions. Exams were objectively categorized as either sporadic, localized, thorough, or efficient for both temporal and spatial categories based on spatiotemporal characteristics. The majority of trials were temporally or spatially thorough (78% and 91%), exhibiting proportionally greater time pressing and time searching (temporally thorough) and greater area probed with greater distance explored (spatially thorough). More efficient exams exhibited proportionally more time pressing with less time searching (temporally efficient) and greater area probed with less distance explored (spatially efficient). Just two (5.9%) of the trials exhibited both high temporal and spatial efficiency. Conclusions Marker-less video tracking was used to discriminate different examination techniques and measure when an exam changes from general searching to specific probing. The majority of participants exhibited more thorough than efficient patterns. Application Marker-less video kinematic tracking may be useful for quantifying clinical skills for training and assessment. PMID:26546381

  19. Owls see in stereo much like humans do.

    PubMed

    van der Willigen, Robert F

    2011-06-10

    While 3D experiences through binocular disparity sensitivity have acquired special status in the understanding of human stereo vision, much remains to be learned about how binocularity is put to use in animals. The owl provides an exceptional model to study stereo vision as it displays one of the highest degrees of binocular specialization throughout the animal kingdom. In a series of six behavioral experiments, equivalent to hallmark human psychophysical studies, I compiled an extensive body of stereo performance data from two trained owls. Computer-generated, binocular random-dot patterns were used to ensure pure stereo performance measurements. In all cases, I found that owls perform much like humans do, viz.: (1) disparity alone can evoke figure-ground segmentation; (2) selective use of "relative" rather than "absolute" disparity; (3) hyperacute sensitivity; (4) disparity processing allows for the avoidance of monocular feature detection prior to object recognition; (5) large binocular disparities are not tolerated; (6) disparity guides the perceptual organization of 2D shape. The robustness and very nature of these binocular disparity-based perceptual phenomena bear out that owls, like humans, exploit the third dimension to facilitate early figure-ground segmentation of tangible objects.

  20. A learning-based markerless approach for full-body kinematics estimation in-natura from a single image.

    PubMed

    Drory, Ami; Li, Hongdong; Hartley, Richard

    2017-04-11

    We present a supervised machine learning approach for markerless estimation of human full-body kinematics for a cyclist from an unconstrained colour image. This approach is motivated by the limitations of existing marker-based approaches restricted by infrastructure, environmental conditions, and obtrusive markers. By using a discriminatively learned mixture-of-parts model, we construct a probabilistic tree representation to model the configuration and appearance of human body joints. During the learning stage, a Structured Support Vector Machine (SSVM) learns body part appearance and spatial relations. In the testing stage, the learned models are employed to recover body pose via searching over a pyramid structure in a test image. We focus on the movement modality of cycling to demonstrate the efficacy of our approach. In natura estimation of cycling kinematics from images is challenging because human interaction with a bicycle causes frequent occlusions. We make no assumptions in relation to the kinematic constraints of the model, nor the appearance of the scene. Our technique finds multiple quality hypotheses for the pose. We evaluate the precision of our method on two new datasets using loss functions. Our method achieves scores of 91.1 and 69.3 on the mean Probability of Correct Keypoint (PCK) measure and 88.7 and 66.1 on the Average Precision of Keypoints (APK) measure for the frontal and sagittal datasets, respectively. We conclude that our method opens new vistas to robust, user-interaction-free estimation of full body kinematics, a prerequisite to motion analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. A Study on Markerless AR-Based Infant Education System Using CBIR

    NASA Astrophysics Data System (ADS)

    Lim, Ji-Hoon; Kim, Seoksoo

    Block play is widely known to be effective in helping a child develop emotionally and physically through learning by sight and touch. However, block play alone cannot be expected to produce learning effects through hearing. Therefore, in this study, such limitations are overcome by a method that recognizes an object made up of blocks rather than the marker-based method generally used in AR environments, a matching technology enabling an object to be perceived from every direction, and a technology combining images of the real world with 2D/3D images, pictures and sounds of a similar object. Also, an education system for children aged 3-5 is designed to implement markerless AR with the CBIR method.

  2. A Real-Time Augmented Reality System to See-Through Cars.

    PubMed

    Rameau, Francois; Ha, Hyowon; Joo, Kyungdon; Choi, Jinsoo; Park, Kibaek; Kweon, In So

    2016-11-01

    One of the most hazardous driving scenarios is overtaking a slower vehicle: the front vehicle (being overtaken) can occlude an important part of the field of view of the rear vehicle's driver. This lack of visibility is the most probable cause of accidents in this context. Recent research tends to prove that augmented reality applied to assisted driving can significantly reduce the risk of accidents. In this paper, we present a real-time marker-less system to see through cars. For this purpose, two cars are equipped with cameras and an appropriate wireless communication system. The stereo vision system mounted on the front car allows the creation of a sparse 3D map of the environment in which the rear car can be localized. Using this inter-car pose estimation, a synthetic image is generated to overcome the occlusion and to create a seamless see-through effect which preserves the structure of the scene.

  3. Hybrid markerless tracking of complex articulated motion in golf swings.

    PubMed

    Fung, Sim Kwoh; Sundaraj, Kenneth; Ahamed, Nizam Uddin; Kiang, Lam Chee; Nadarajah, Sivadev; Sahayadhas, Arun; Ali, Md Asraf; Islam, Md Anamul; Palaniappan, Rajkumar

    2014-04-01

    Sports video tracking is a research topic that has attracted increasing attention due to its high commercial potential. A number of sports, including tennis, soccer, gymnastics, running, golf, badminton and cricket, have been used to demonstrate novel ideas in sports motion tracking. The main challenge associated with this research concerns the extraction of a highly complex articulated motion from a video scene. Our research focuses on the development of a markerless human motion tracking system that tracks the major body parts of an athlete straight from a sports broadcast video. We propose a hybrid tracking method, which consists of a combination of three algorithms (pyramidal Lucas-Kanade optical flow (LK), normalised correlation-based template matching and background subtraction), to track the golfer's head, body, hands, shoulders, knees and feet during a full swing. We then match, track and map the results onto a 2D articulated human stick model to represent the pose of the golfer over time. Our work was tested using two video broadcasts of a golfer, and we obtained satisfactory results. The current outcomes of this research can play an important role in enhancing the performance of a golfer, provide vital information to sports medicine practitioners by offering technically sound guidance on movements, and should assist in diminishing the risk of golfing injuries. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Cloning-independent markerless gene editing in Streptococcus sanguinis: novel insights in type IV pilus biology.

    PubMed

    Gurung, Ishwori; Berry, Jamie-Lee; Hall, Alexander M J; Pelicic, Vladimir

    2017-04-07

    Streptococcus sanguinis, a naturally competent opportunistic human pathogen, is a Gram-positive workhorse for genomics. It has recently emerged as a model for the study of type IV pili (Tfp)-exceptionally widespread and important prokaryotic filaments. To enhance genetic manipulation of Streptococcus sanguinis, we have developed a cloning-independent methodology, which uses a counterselectable marker and allows sophisticated markerless gene editing in situ. We illustrate the utility of this methodology by answering several questions regarding Tfp biology by (i) deleting single or multiple genes, (ii) altering specific bases in genes of interest, and (iii) engineering genes to encode proteins with appended affinity tags. We show that (i) the last six genes in the pil locus harbouring all the genes dedicated to Tfp biology play no role in piliation or Tfp-mediated motility, (ii) two highly conserved Asp residues are crucial for enzymatic activity of the prepilin peptidase PilD and (iii) pilin subunits with a C-terminally appended hexa-histidine (6His) tag are still assembled into functional Tfp. The methodology for genetic manipulation we describe here should be broadly applicable. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Cloning-independent markerless gene editing in Streptococcus sanguinis: novel insights in type IV pilus biology

    PubMed Central

    Gurung, Ishwori; Berry, Jamie-Lee; Hall, Alexander M. J.

    2017-01-01

    Abstract Streptococcus sanguinis, a naturally competent opportunistic human pathogen, is a Gram-positive workhorse for genomics. It has recently emerged as a model for the study of type IV pili (Tfp)—exceptionally widespread and important prokaryotic filaments. To enhance genetic manipulation of Streptococcus sanguinis, we have developed a cloning-independent methodology, which uses a counterselectable marker and allows sophisticated markerless gene editing in situ. We illustrate the utility of this methodology by answering several questions regarding Tfp biology by (i) deleting single or multiple genes, (ii) altering specific bases in genes of interest, and (iii) engineering genes to encode proteins with appended affinity tags. We show that (i) the last six genes in the pil locus harbouring all the genes dedicated to Tfp biology play no role in piliation or Tfp-mediated motility, (ii) two highly conserved Asp residues are crucial for enzymatic activity of the prepilin peptidase PilD and (iii) pilin subunits with a C-terminally appended hexa-histidine (6His) tag are still assembled into functional Tfp. The methodology for genetic manipulation we describe here should be broadly applicable. PMID:27903891

  6. A Single Camera Motion Capture System for Human-Computer Interaction

    NASA Astrophysics Data System (ADS)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when a body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.

  7. Dynamic Trajectory Extraction from Stereo Vision Using Fuzzy Clustering

    NASA Astrophysics Data System (ADS)

    Onishi, Masaki; Yoda, Ikushi

    In recent years, many human-tracking methods have been proposed for analyzing human trajectories. These are general-purpose technologies applicable to various fields, such as customer purchase analysis in a shopping environment and safety control at a (railroad) crossing. In this paper, we present a new approach for tracking human positions from stereo images. We use a two-step clustering framework, combining the k-means method and fuzzy clustering, to detect human regions. In the initial step, the k-means method quickly forms intermediate clusters from features extracted by stereo vision. In the final step, the fuzzy c-means method groups these intermediate clusters into human regions based on their attributes. By expressing ambiguity through fuzzy clustering, the proposed method clusters correctly even when many people are close to each other. The validity of our technique was evaluated by extracting the trajectories of doctors and nurses in a hospital emergency room.
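The second clustering step can be sketched in a few lines: given cluster centers (which in the paper's pipeline would come from the k-means stage), fuzzy c-means assigns each point a graded membership in every cluster, so a point midway between two people is not forced into either. The centers, points, and fuzzifier value below are made up for illustration.

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy c-means membership of each point in each cluster:
    u[i][k] = 1 / sum_j (d(x_i,c_k)/d(x_i,c_j))**(2/(m-1))."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    memberships = []
    for x in points:
        d = [max(dist(x, c), 1e-12) for c in centers]  # guard /0 at a center
        row = [
            1.0 / sum((d[k] / dj) ** (2.0 / (m - 1.0)) for dj in d)
            for k in range(len(centers))
        ]
        memberships.append(row)
    return memberships

# Two "person" centers; the middle point gets ambiguous membership.
centers = [(0.0, 0.0), (4.0, 0.0)]
points = [(0.1, 0.0), (2.0, 0.0), (3.9, 0.0)]
u = fcm_memberships(points, centers)
```

Points near a center get memberships close to 1, while the midpoint is split 0.5/0.5, which is exactly the ambiguity a hard k-means assignment would hide.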

  8. Correction techniques for depth errors with stereo three-dimensional graphic displays

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Holden, Anthony; Williams, Steven P.

    1992-01-01

    Three-dimensional (3-D), 'real-world' pictorial displays that incorporate 'true' depth cues via stereopsis have proved effective for displaying complex information in a natural way, enhancing situational awareness and improving pilot/vehicle performance. In such displays, the designer must map depths in the real world to the depths available with the stereo display system. However, empirical data have shown that human subjects do not perceive information at exactly the depth at which it is mathematically placed. Head movements can also seriously distort the depth information embedded in stereo 3-D displays, because the transformations used in mapping the visual scene to the depth-viewing volume (DVV) depend intrinsically on viewer location. The goal of this research was to provide two correction techniques: the first corrects the original visual-scene-to-DVV mapping based on human perception errors, and the second (based on head-positioning sensor data) corrects for errors induced by head movements. Empirical data are presented to validate both techniques; a combination of the two effectively eliminates the distortions of depth information embedded in stereo 3-D displays.

  9. Automated Quantification of the Landing Error Scoring System With a Markerless Motion-Capture System.

    PubMed

    Mauntel, Timothy C; Padua, Darin A; Stanley, Laura E; Frank, Barnett S; DiStefano, Lindsay J; Peck, Karen Y; Cameron, Kenneth L; Marshall, Stephen W

    2017-11-01

    The Landing Error Scoring System (LESS) can be used to identify individuals with an elevated risk of lower extremity injury. The limitation of the LESS is that raters identify movement errors from video replay, which is time-consuming and, therefore, may limit its use by clinicians. A markerless motion-capture system may be capable of automating LESS scoring, thereby removing this obstacle.   To determine the reliability of an automated markerless motion-capture system for scoring the LESS.   Cross-sectional study.   United States Military Academy.   A total of 57 healthy, physically active individuals (47 men, 10 women; age = 18.6 ± 0.6 years, height = 174.5 ± 6.7 cm, mass = 75.9 ± 9.2 kg).   Participants completed 3 jump-landing trials that were recorded by standard video cameras and a depth camera. Their movement quality was evaluated by expert LESS raters (standard video recording) using the LESS rubric and by software that automates LESS scoring (depth-camera data). We recorded an error for a LESS item if it was present on at least 2 of 3 jump-landing trials. We calculated κ statistics, prevalence- and bias-adjusted κ (PABAK) statistics, and percentage agreement for each LESS item. Interrater reliability was evaluated between the 2 expert rater scores and between a consensus expert score and the markerless motion-capture system score.   We observed reliability between the 2 expert LESS raters (average κ = 0.45 ± 0.35, average PABAK = 0.67 ± 0.34; percentage agreement = 0.83 ± 0.17). The markerless motion-capture system had similar reliability with consensus expert scores (average κ = 0.48 ± 0.40, average PABAK = 0.71 ± 0.27; percentage agreement = 0.85 ± 0.14). However, reliability was poor for 5 LESS items in both LESS score comparisons.   A markerless motion-capture system had the same level of reliability as expert LESS raters, suggesting that an automated system can accurately assess movement. Therefore, clinicians can use the markerless motion-capture system to reliably score the LESS without being limited by the time requirements of manual LESS scoring.
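The agreement statistics reported above (κ and PABAK) are straightforward to compute for a binary LESS item scored by two raters. A minimal sketch with made-up ratings (1 = error present, 0 = absent):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa = (Po - Pe) / (1 - Pe) for two binary raters."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    p1 = sum(r1) / n
    p2 = sum(r2) / n
    pe = p1 * p2 + (1 - p1) * (1 - p2)                    # chance agreement
    return (po - pe) / (1 - pe)

def pabak(r1, r2):
    """Prevalence- and bias-adjusted kappa: 2*Po - 1 for a binary item."""
    po = sum(a == b for a, b in zip(r1, r2)) / len(r1)
    return 2 * po - 1

# Hypothetical scores for one LESS item across 10 participants.
rater_a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
rater_b = [1, 0, 0, 0, 1, 0, 1, 0, 0, 1]
```

PABAK rescales raw agreement to the kappa range, which is why it exceeds κ when an error is rare or common (as for several LESS items in the study).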

  10. Human silhouette matching based on moment invariants

    NASA Astrophysics Data System (ADS)

    Sun, Yong-Chao; Qiu, Xian-Jie; Xia, Shi-Hong; Wang, Zhao-Qi

    2005-07-01

    This paper applies silhouette matching based on moment invariants to infer human motion parameters from video sequences captured by a single uncalibrated monocular camera. Currently, there are two ways of tracking human motion: marker-based and markerless. A hybrid framework is introduced in this paper to recover the contents of the input video. A standard 3D motion database is built in advance using marker-based capture. Given a video sequence, human silhouettes are extracted along with the camera viewpoint information, which is used to project the standard 3D motion database onto a 2D one. The video recovery problem is thereby formulated as a matching problem: finding the body pose in the standard 2D library most similar to the one in the video image. The framework is applied to trampoline sport, where we obtain complex human motion parameters from single-camera video sequences, and numerous experiments demonstrate that this approach is feasible for monocular video-based 3D motion reconstruction.
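Moment-invariant silhouette matching rests on descriptors that are unchanged by translation (and, for the Hu invariants, scale and rotation), so two silhouettes can be compared regardless of where they sit in the frame. A sketch of the first Hu invariant for a binary silhouette, with a made-up point set:

```python
def central_moment(pixels, p, q):
    """Central moment mu_pq of a binary silhouette given as (x, y) points."""
    n = len(pixels)
    xbar = sum(x for x, _ in pixels) / n
    ybar = sum(y for _, y in pixels) / n
    return sum((x - xbar) ** p * (y - ybar) ** q for x, y in pixels)

def hu1(pixels):
    """First Hu invariant phi1 = eta20 + eta02, where
    eta_pq = mu_pq / mu00**(1 + (p+q)/2) (mu00 = area for a binary shape)."""
    m00 = len(pixels)
    eta = lambda p, q: central_moment(pixels, p, q) / m00 ** (1 + (p + q) / 2)
    return eta(2, 0) + eta(0, 2)

shape = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1)]
shifted = [(x + 10, y + 7) for x, y in shape]   # same silhouette, translated
```

Because the invariant is identical for the original and translated silhouettes, matching reduces to comparing small feature vectors rather than raw images.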

  11. Markerless identification of key events in gait cycle using image flow.

    PubMed

    Vishnoi, Nalini; Duric, Zoran; Gerber, Naomi Lynn

    2012-01-01

    Gait analysis has been an interesting area of research for several decades. In this paper, we propose image-flow-based methods to compute the motion and velocities of different body segments automatically, using a single inexpensive video camera. We then identify and extract different events of the gait cycle (double-support, mid-swing, toe-off and heel-strike) from video images. Experiments were conducted in which four walking subjects were captured from the sagittal plane. Automatic segmentation was performed to isolate the moving body from the background. The head excursion and the shank motion were then computed to identify the key frames corresponding to different events in the gait cycle. Our approach does not require calibrated cameras or special markers to capture movement. We have also compared our method with the Optotrak 3D motion capture system and found our results in good agreement with the Optotrak results. The development of our method has potential use in the markerless and unencumbered video capture of human locomotion. Monitoring gait in homes and communities provides a useful application for the aged and the disabled. Our method could potentially be used as an assessment tool to determine gait symmetry or to establish the normal gait pattern of an individual.

  12. An automatic eye detection and tracking technique for stereo video sequences

    NASA Astrophysics Data System (ADS)

    Paduru, Anirudh; Charalampidis, Dimitrios; Fouts, Brandon; Jovanovich, Kim

    2009-05-01

    Human-computer interfacing (HCI) describes a system or process by which two information processors, a human and a computer, exchange information. Computer-to-human (CtH) information transfer has been relatively effective through visual displays and sound devices. On the other hand, the human-to-computer (HtC) interfacing avenue has yet to reach its full potential. For instance, the most common HtC communication means, the keyboard and mouse, are already becoming a bottleneck in the effective transfer of information. The solution is the development of algorithms that allow the computer to understand human intentions from facial expressions, head motion patterns, and speech. In this work, we investigate the feasibility of a stereo system to effectively determine head position, including the head rotation angles, based on the detection of the eye pupils.

  13. Letter regarding 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics' by Patrizi et al. and research reproducibility.

    PubMed

    2017-04-01

    The reporting of research in a manner that allows reproduction in subsequent investigations is important for scientific progress. Several details of the recent study by Patrizi et al., 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics', are absent from the published manuscript and make reproduction of findings impossible. As new and complex technologies with great promise for ergonomics develop, new but surmountable challenges for reporting investigations using these technologies in a reproducible manner arise. Practitioner Summary: As with traditional methods, scientific reporting of new and complex ergonomics technologies should be performed in a manner that allows reproduction in subsequent investigations and supports scientific advancement.

  14. Simple Method for Markerless Gene Deletion in Multidrug-Resistant Acinetobacter baumannii

    PubMed Central

    Oh, Man Hwan; Lee, Je Chul; Kim, Jungmin

    2015-01-01

    The traditional markerless gene deletion technique based on overlap extension PCR has been used for generating gene deletions in multidrug-resistant Acinetobacter baumannii. However, the method is time-consuming because it requires restriction digestion of the PCR products during DNA cloning and the construction of new vectors containing a suitable antibiotic resistance cassette for the selection of A. baumannii merodiploids. Moreover, the availability of restriction sites and the selection of recombinant bacteria harboring the desired chimeric plasmid are limited, making the construction of a chimeric plasmid more difficult. We describe a rapid and easy cloning method for markerless gene deletion in A. baumannii that has no limitation in the availability of restriction sites and allows for easy selection of the clones carrying the desired chimeric plasmid. Notably, our method does not require the construction of new vectors. It utilizes direct cloning of blunt-end DNA fragments, in which the upstream and downstream regions of the target gene are fused with an antibiotic resistance cassette via overlap extension PCR and inserted into a blunt-end suicide vector developed for blunt-end cloning. Importantly, the antibiotic resistance cassette is placed outside the downstream region in order to enable easy selection of the recombinants carrying the desired plasmid, to eliminate the antibiotic resistance cassette via homologous recombination, and to avoid the necessity of constructing new vectors. This strategy was successfully applied to functional analysis of the genes associated with iron acquisition by A. baumannii ATCC 19606 and to ompA gene deletion in other A. baumannii strains. Consequently, the proposed method is invaluable for markerless gene deletion in multidrug-resistant A. baumannii. PMID:25746991
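The overlap-extension PCR step that fuses the upstream arm, resistance cassette, and downstream arm can be pictured as string assembly: adjacent fragments share an overlap region and are joined through it. The sequences below are arbitrary toy strings, not real primers or homology arms:

```python
def oe_pcr_fuse(fragments, overlap):
    """Fuse DNA fragments in order: each fragment must end with the same
    `overlap` bases that the next fragment starts with (the shared
    region a real OE-PCR primer would introduce)."""
    product = fragments[0]
    for frag in fragments[1:]:
        if product[-overlap:] != frag[:overlap]:
            raise ValueError("fragments do not overlap")
        product += frag[overlap:]   # append, counting the overlap once
    return product

upstream   = "ATGCCGTATTACGGAT"   # toy upstream homology arm
cassette   = "TACGGATAAAGTCCGG"   # toy cassette; 5' end repeats upstream's 3' end
downstream = "AGTCCGGTTGCATGCA"   # toy downstream arm; 5' end repeats cassette's 3' end
construct = oe_pcr_fuse([upstream, cassette, downstream], overlap=7)
```

The fused product carries each overlap region exactly once, mirroring how the chimeric insert for the suicide vector is assembled before blunt-end cloning.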

  15. Cloning-Independent and Counterselectable Markerless Mutagenesis System in Streptococcus mutans▿

    PubMed Central

    Xie, Zhoujie; Okinaga, Toshinori; Qi, Fengxia; Zhang, Zhijun; Merritt, Justin

    2011-01-01

    Insertion duplication mutagenesis and allelic replacement mutagenesis are among the most commonly utilized approaches for targeted mutagenesis in bacteria. However, both techniques are limited by a variety of factors that can complicate mutant phenotypic studies. To circumvent these limitations, multiple markerless mutagenesis techniques have been developed that utilize either temperature-sensitive plasmids or counterselectable suicide vectors containing both positive- and negative-selection markers. For many species, these techniques are not especially useful due to difficulties of cloning with Escherichia coli and/or a lack of functional negative-selection markers. In this study, we describe the development of a novel approach for the creation of markerless mutations. This system employs a cloning-independent methodology and should be easily adaptable to a wide array of Gram-positive and Gram-negative bacterial species. The entire process of creating both the counterselection cassette and mutation constructs can be completed using overlapping PCR protocols, which allows extremely quick assembly and eliminates the requirement for either temperature-sensitive replicons or suicide vectors. As a proof of principle, we used Streptococcus mutans reference strain UA159 to create markerless in-frame deletions of 3 separate bacteriocin genes as well as triple mutants containing all 3 deletions. Using a panel of 5 separate wild-type S. mutans strains, we further demonstrated that the procedure is nearly 100% efficient at generating clones with the desired markerless mutation, which is a considerable improvement in yield compared to existing approaches. PMID:21948849

  16. Markerless video analysis for movement quantification in pediatric epilepsy monitoring.

    PubMed

    Lu, Haiping; Eng, How-Lung; Mandal, Bappaditya; Chan, Derrick W S; Ng, Yen-Ling

    2011-01-01

    This paper proposes a markerless video analytic system for quantifying body part movements in pediatric epilepsy monitoring. The system utilizes colored pajamas worn by a patient in bed to extract body part movement trajectories, from which various features can be obtained for seizure detection and analysis. Hence, it is non-intrusive and it requires no sensor/marker to be attached to the patient's body. It takes raw video sequences as input and a simple user-initialization indicates the body parts to be examined. In background/foreground modeling, Gaussian mixture models are employed in conjunction with HSV-based modeling. Body part detection follows a coarse-to-fine paradigm with graph-cut-based segmentation. Finally, body part parameters are estimated with domain knowledge guidance. Experimental studies are reported on sequences captured in an Epilepsy Monitoring Unit at a local hospital. The results demonstrate the feasibility of the proposed system in pediatric epilepsy monitoring and seizure detection.

  17. Evaluation of Hands-On Clinical Exam Performance Using Marker-less Video Tracking.

    PubMed

    Azari, David; Pugh, Carla; Laufer, Shlomi; Cohen, Elaine; Kwan, Calvin; Chen, Chia-Hsiung Eric; Yen, Thomas Y; Hu, Yu Hen; Radwin, Robert

    2014-09-01

    This study investigates the potential of marker-less video tracking of the hands for evaluating hands-on clinical skills. Experienced family practitioners attending a national conference were recruited and asked to conduct a breast examination on a simulator reproducing different clinical presentations. Videos were made of the clinicians' hands during the exam, and video-processing software was used to track hand motion and quantify its kinematics. Practitioner motion patterns indicated consistent behavior of participants across multiple pathologies. Different pathologies exhibited characteristic aggregate motion patterns at specific parts of an exam, indicating consistent inter-participant behavior. Marker-less video kinematic tracking therefore shows promise for discriminating between different examination procedures, clinicians, and pathologies.

  18. Stereoscopy and the Human Visual System

    PubMed Central

    Banks, Martin S.; Read, Jenny C. A.; Allison, Robert S.; Watt, Simon J.

    2012-01-01

    Stereoscopic displays have become important for many applications, including operation of remote devices, medical imaging, surgery, scientific visualization, and computer-assisted design. But the most significant and exciting development is the incorporation of stereo technology into entertainment: specifically, cinema, television, and video games. In these applications for stereo, three-dimensional (3D) imagery should create a faithful impression of the 3D structure of the scene being portrayed. In addition, the viewer should be comfortable and not leave the experience with eye fatigue or a headache. Finally, the presentation of the stereo images should not create temporal artifacts like flicker or motion judder. This paper reviews current research on stereo human vision and how it informs us about how best to create and present stereo 3D imagery. The paper is divided into four parts: (1) getting the geometry right, (2) depth cue interactions in stereo 3D media, (3) focusing and fixating on stereo images, and (4) how temporal presentation protocols affect flicker, motion artifacts, and depth distortion. PMID:23144596

  19. Modeling the convergence accommodation of stereo vision for binocular endoscopy.

    PubMed

    Gao, Yuanqian; Li, Jinhua; Li, Jianmin; Wang, Shuxin

    2018-02-01

    The stereo laparoscope is an important tool for achieving depth perception in robot-assisted minimally invasive surgery (MIS). A dynamic convergence accommodation algorithm is proposed to improve the viewing experience and achieve accurate depth perception. Based on the principle of the human vision system, a positional kinematic model of the binocular view system is established. The imaging plane pair is rectified to ensure that the two rectified virtual optical axes intersect at the fixation target to provide immersive depth perception. Stereo disparity was simulated with the roll and pitch movements of the binocular system. The chessboard test and the endoscopic peg transfer task were performed, and the results demonstrated the improved disparity distribution and robustness of the proposed convergence accommodation method with respect to the position of the fixation target. This method offers a new solution for effective depth perception with the stereo laparoscopes used in robot-assisted MIS. Copyright © 2017 John Wiley & Sons, Ltd.

  20. NOTE: A feasibility study of markerless fluoroscopic gating for lung cancer radiotherapy using 4DCT templates

    NASA Astrophysics Data System (ADS)

    Li, Ruijiang; Lewis, John H.; Cerviño, Laura I.; Jiang, Steve B.

    2009-10-01

    A major difficulty in conformal lung cancer radiotherapy is respiratory organ motion, which may cause clinically significant targeting errors. Respiratory-gated radiotherapy allows more precise delivery of the prescribed radiation dose to the tumor while minimizing normal-tissue complications. Gating based on external surrogates is limited by its lack of accuracy, while gating based on implanted fiducial markers is limited primarily by the risk of pneumothorax due to marker implantation. Techniques for fluoroscopic gating without implanted fiducial markers (markerless gating) have been developed, but they usually require a training fluoroscopic image dataset with marked tumor positions, which limits their clinical implementation. To remove this requirement, this study presents a markerless fluoroscopic gating algorithm based on 4DCT templates. To generate gating signals, we explored three similarity measures between fluoroscopic images and the reference 4DCT template: un-normalized cross-correlation (CC), normalized cross-correlation (NCC) and normalized mutual information (NMI), as well as the average intensity (AI) of the region of interest (ROI) in the fluoroscopic images. Performance was evaluated using fluoroscopic and 4DCT data from three lung cancer patients. On average, gating based on CC achieved the highest treatment accuracy at a given efficiency, with high target coverage (averaging between 91.9% and 98.6%) for a wide range of nominal duty cycles (20-50%). AI worked well for two of the three patients but failed for the third due to interference from the heart. Gating based on NCC and NMI usually failed below a 50% nominal duty cycle. Based on this preliminary study of three patients, we found that the proposed CC-based gating algorithm can generate accurate and robust gating signals when using a 4DCT reference template. However, this observation is based on a very limited dataset, and further investigation in a larger patient population is needed before clinical implementation.
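As a sketch of one of the similarity scores above, a gating signal can be built by thresholding the normalized cross-correlation between each fluoroscopic ROI (flattened to a vector) and the 4DCT reference template. The intensity values and the threshold below are invented for illustration:

```python
def ncc(a, b):
    """Normalized cross-correlation between two equal-length
    intensity vectors (mean-subtracted, variance-normalized)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def gating_signal(frames, template, threshold=0.9):
    """Beam-on (True) whenever frame/template similarity exceeds threshold."""
    return [ncc(f, template) >= threshold for f in frames]

template = [1.0, 2.0, 4.0, 2.0, 1.0]    # toy reference-phase ROI intensities
frames = [
    [1.1, 2.1, 4.2, 2.0, 1.0],          # near the reference phase -> gate on
    [4.0, 2.0, 1.0, 1.0, 2.0],          # off-phase -> gate off
]
gate = gating_signal(frames, template)
```

Raising the threshold shrinks the beam-on fraction, which is how a nominal duty cycle would be enforced in practice.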

  1. Navigation of military and space unmanned ground vehicles in unstructured terrains

    NASA Technical Reports Server (NTRS)

    Lescoe, Paul; Lavery, David; Bedard, Roger

    1991-01-01

    Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements, and the path plan was transmitted to the vehicle, which autonomously executed the path. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six-wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of SAN software onto the vehicle, and a successful feasibility demonstration representing a step toward the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.

  2. Person detection, tracking and following using stereo camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofeng; Zhang, Lilian; Wang, Duo; Hu, Xiaoping

    2018-04-01

    Person detection, tracking and following is a key enabling technology for mobile robots in many human-robot interaction applications. In this article, we present a system composed of visual human detection, video tracking and following. Detection is based on YOLO (You Only Look Once), which applies a single convolutional neural network (CNN) to the full image and can thus predict bounding boxes and class probabilities directly in one evaluation. The bounding box then provides the initial person position in the image, which is used to initialize and train the KCF (Kernelized Correlation Filter), a video tracker based on a discriminative classifier. Finally, by using a stereo 3D sparse reconstruction algorithm, not only is the position of the person in the scene determined, but the problem of scale ambiguity in the video tracker is also elegantly solved. Extensive experiments demonstrate the effectiveness and robustness of our human detection and tracking system.
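The scale-ambiguity point rests on standard pinhole stereo geometry: because the baseline between the two cameras is known in metric units, disparity converts directly to metric depth via Z = f·B/d. A minimal sketch with hypothetical rig parameters (the focal length and baseline below are made up):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d. The known metric baseline B
    is what removes the scale ambiguity a monocular tracker suffers from."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline,
# 28 px disparity on the tracked person.
z = depth_from_disparity(700.0, 0.12, 28.0)
```

With the person's depth known, the tracker's bounding-box scale can be fixed in metres rather than drifting with image size.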

  3. Stereo chromatic contrast sensitivity model to blue-yellow gratings.

    PubMed

    Yang, Jiachen; Lin, Yancong; Liu, Yun

    2016-03-07

    As a fundamental metric of the human visual system (HVS), the contrast sensitivity function (CSF) is typically measured with sinusoidal gratings at detection threshold for the psychophysically defined cardinal channels: luminance, red-green, and blue-yellow. Chromatic CSF, a quick and valid index of human visual performance and of various retinal diseases in two-dimensional (2D) space, cannot be directly applied to the measurement of human stereo visual performance, and no existing perception model considers the influence of the chromatic CSF of inclined planes on depth perception in three-dimensional (3D) space. The main aim of this research is to extend traditional chromatic contrast sensitivity characteristics to 3D space and to build a model applicable there, for example, for strengthening the stereo quality of 3D images. This research also attempts to build a vision model or method for checking the human visual characteristics of stereo blindness. In this paper, a CRT screen was rotated clockwise and anticlockwise to form inclined planes. Four inclined planes were selected to investigate human chromatic vision in 3D space, and the contrast threshold of each inclined plane was measured with 18 observers. Stimuli were isoluminant blue-yellow sinusoidal gratings with horizontal spatial frequencies ranging from 0.05 to 5 c/d. Contrast sensitivity was calculated as the inverse of the pooled cone contrast threshold. Based on the relationship between the spatial frequency of an inclined plane and the horizontal spatial frequency, the chromatic contrast sensitivity characteristics in 3D space were modeled from the experimental data. The results show that the proposed model predicts human chromatic contrast sensitivity characteristics in 3D space well.
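Two quantities from the abstract above are easy to make concrete. First, sensitivity is defined as the inverse of the pooled cone contrast threshold. Second, a frequency measured on a slanted plane relates to its fronto-parallel projection by a cosine factor: N cycles over plane width W give f_plane = N/W, while the plane projects to width W·cos(slant), so the screen-measured horizontal frequency is N/(W·cos(slant)). This geometric form is our assumption for illustration; the paper's actual model is fitted to its experimental data.

```python
import math

def plane_frequency(f_horizontal, slant_deg):
    """Frequency on a plane slanted by slant_deg from fronto-parallel,
    assuming f_horizontal = f_plane / cos(slant) (projection geometry),
    i.e. f_plane = f_horizontal * cos(slant)."""
    return f_horizontal * math.cos(math.radians(slant_deg))

def sensitivity(cone_contrast_threshold):
    """Contrast sensitivity = 1 / pooled cone contrast threshold."""
    return 1.0 / cone_contrast_threshold

f = plane_frequency(2.0, 60.0)   # 2 c/d on screen, plane slanted 60 degrees
s = sensitivity(0.02)            # a 2% threshold corresponds to sensitivity 50
```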

  4. Recombineering in Streptococcus mutans Using Direct Repeat-Mediated Cloning-Independent Markerless Mutagenesis (DR-CIMM).

    PubMed

    Zhang, Shan; Zou, Zhengzhong; Kreth, Jens; Merritt, Justin

    2017-01-01

    Studies of the dental caries pathogen Streptococcus mutans have benefitted tremendously from its sophisticated genetic system. As part of our own efforts to further improve upon the S. mutans genetic toolbox, we previously reported the development of the first cloning-independent markerless mutagenesis (CIMM) system for S. mutans and illustrated how this approach could be adapted for use in many other organisms. The CIMM approach requires only overlap extension PCR (OE-PCR) protocols to assemble counterselectable allelic replacement mutagenesis constructs, and thus greatly increased the speed and efficiency with which markerless mutations could be introduced into S. mutans. Despite its utility, the system is still subject to a couple of limitations. Firstly, CIMM requires negative selection with the conditionally toxic phenylalanine analog p-chlorophenylalanine (4-CP), which is efficient but never perfect: 4-CP negative selection typically yields a small percentage of naturally resistant background colonies. Secondly, CIMM requires two transformation steps to create markerless mutants. This can be inherently problematic if the transformability of the strain is negatively impacted after the first transformation step, which is used to insert the counterselection cassette at the mutation site on the chromosome. In the current study, we develop a next-generation counterselection cassette that eliminates 4-CP background resistance and combine it with a new direct repeat-mediated cloning-independent markerless mutagenesis (DR-CIMM) system to specifically address the limitations of the prior approach. DR-CIMM is even faster and more efficient than CIMM for the creation of all types of deletions, insertions, and point mutations and is similarly adaptable for use in a wide range of genetically tractable bacteria.

  5. Cre/lox-based multiple markerless gene disruption in the genome of the extreme thermophile Thermus thermophilus.

    PubMed

    Togawa, Yoichiro; Nunoshiba, Tatsuo; Hiratsu, Keiichiro

    2018-02-01

Markerless gene-disruption technology is particularly useful for effective genetic analyses of Thermus thermophilus (T. thermophilus), which has a limited number of selectable markers. In an attempt to develop a novel system for the markerless disruption of genes in T. thermophilus, we applied a Cre/lox system to construct a triple gene disruptant. To achieve this, we constructed two genetic tools, a loxP-htk-loxP cassette and a cre-expressing plasmid, pSH-Cre, for gene disruption and removal of the selectable marker by Cre-mediated recombination. We found that the Cre/lox system was compatible with the proliferation of the T. thermophilus HB27 strain at the lowest growth temperature (50 °C), and thus succeeded in establishing a triple gene disruptant, the (∆TTC1454::loxP, ∆TTC1535KpnI::loxP, ∆TTC1576::loxP) strain, without leaving behind a selectable marker. During the process of the sequential disruption of multiple genes, we observed the undesired deletion and inversion of the chromosomal region between multiple loxP sites that were induced by Cre-mediated recombination. Therefore, we examined the effects of a lox66-htk-lox71 cassette by exploiting the mutant lox sites, lox66 and lox71, instead of native loxP sites. We successfully constructed a (∆TTC1535::lox72, ∆TTC1537::lox72) double gene disruptant without inducing the undesired deletion of the 0.7-kbp region between the two directly oriented lox72 sites created by the Cre-mediated recombination of the lox66-htk-lox71 cassette. This is the first demonstration of a Cre/lox system being applied to the genetic manipulation of an extreme thermophile. Our results indicate that this system is a powerful tool for multiple markerless gene disruption in T. thermophilus.

  6. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    NASA Astrophysics Data System (ADS)

    Ge, Xuming

    2017-08-01

The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle large datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.
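Once correspondences between semantic keypoints of the two clouds have been hypothesized, the aligning rigid transform can be estimated in closed form. The sketch below is a plain NumPy illustration of that final step using the Kabsch algorithm, not the authors' modified 4PCS; all names are our own.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding keypoints (Kabsch algorithm).
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known rotation and translation from 4 keypoints.
rng = np.random.default_rng(0)
pts = rng.random((4, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(pts, pts @ R_true.T + t_true)
```

In a full pipeline this closed-form step would be wrapped in RANSAC-style hypothesis testing, which is where the 4PCS machinery comes in.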

  7. Key characteristics of specular stereo

    PubMed Central

    Muryy, Alexander A.; Fleming, Roland W.; Welchman, Andrew E.

    2014-01-01

    Because specular reflection is view-dependent, shiny surfaces behave radically differently from matte, textured surfaces when viewed with two eyes. As a result, specular reflections pose substantial problems for binocular stereopsis. Here we use a combination of computer graphics and geometrical analysis to characterize the key respects in which specular stereo differs from standard stereo, to identify how and why the human visual system fails to reconstruct depths correctly from specular reflections. We describe rendering of stereoscopic images of specular surfaces in which the disparity information can be varied parametrically and independently of monocular appearance. Using the generated surfaces and images, we explain how stereo correspondence can be established with known and unknown surface geometry. We show that even with known geometry, stereo matching for specular surfaces is nontrivial because points in one eye may have zero, one, or multiple matches in the other eye. Matching features typically yield skew (nonintersecting) rays, leading to substantial ortho-epipolar components to the disparities, which makes deriving depth values from matches nontrivial. We suggest that the human visual system may base its depth estimates solely on the epipolar components of disparities while treating the ortho-epipolar components as a measure of the underlying reliability of the disparity signals. Reconstructing virtual surfaces according to these principles reveals that they are piece-wise smooth with very large discontinuities close to inflection points on the physical surface. Together, these distinctive characteristics lead to cues that the visual system could use to diagnose specular reflections from binocular information. PMID:25540263
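The proposed split, keeping the epipolar component of each disparity vector for depth while treating the ortho-epipolar component as a reliability signal, can be sketched in a few lines. This is a minimal NumPy illustration assuming the epipolar direction at each point is known (horizontal for a rectified pair); it is not the authors' rendering or analysis code.

```python
import numpy as np

def decompose_disparity(disparities, epipolar_dir):
    """Split 2D disparity vectors into epipolar and ortho-epipolar parts.

    disparities: (N, 2) array of match displacement vectors;
    epipolar_dir: 2-vector along the epipolar line.
    Returns the signed epipolar components and the ortho-epipolar
    magnitudes (the latter usable as an unreliability measure).
    """
    e = np.asarray(epipolar_dir, dtype=float)
    e = e / np.linalg.norm(e)
    epi = disparities @ e                        # signed epipolar component
    ortho = disparities - np.outer(epi, e)       # residual off the epipolar line
    return epi, np.linalg.norm(ortho, axis=1)
```

For matte surfaces the ortho-epipolar magnitudes are near zero; for the skew-ray matches described above they grow large, flagging the disparity as unreliable.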

  8. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study.

    PubMed

    Barsingerhorn, A D; Boonstra, F N; Goossens, H H L M

    2017-02-01

Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye-model to study how these optical properties affect the accuracy of different stereo eye-tracking methods. We found that pupil size, gaze direction and head position all influence the reconstruction of gaze. The resulting errors are at best around ±1.0 degrees. This shows that stereo eye-tracking may be an option if reliable calibration is not possible, but the applied eye-model should account for the actual optics of the cornea.

  9. MIT-Skywalker: On the use of a markerless system.

    PubMed

    Goncalves, Rogerio S; Hamilton, Taya; Krebs, Hermano I

    2017-07-01

    This paper describes our efforts to employ the Microsoft Kinect as a low cost vision control system for the MIT-Skywalker, a robotic gait rehabilitation device. The Kinect enables an alternative markerless solution to control the MIT-Skywalker and allows a more user-friendly set-up. A study involving eight healthy subjects and two stroke survivors using the MIT-Skywalker device demonstrates the advantages and challenges of this new proposed approach.

  10. Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision

    NASA Astrophysics Data System (ADS)

    Gai, Qiyang

    2018-01-01

Stereo matching is one of the key steps of 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper combines the epipolar constraint with an ant colony algorithm. The epipolar line constraint is used to reduce the search range, and an ant colony algorithm then optimizes the stereo matching feature search function within that range. Through the establishment of an analysis model of the ant colony algorithm's stereo matching optimization process, a globally optimized solution of stereo matching in 3D reconstruction based on a binocular vision system is realized. The simulation results show that, by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo matching range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo matching process are improved.
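The epipolar constraint reduces stereo matching to a one-dimensional search along a scanline, which is the search space the ant colony algorithm then optimizes. The sketch below shows only the constrained search itself, using a plain sum-of-absolute-differences block match on a rectified pair; it is an illustrative baseline, not the paper's ant colony optimization.

```python
import numpy as np

def match_along_scanline(left, right, row, col, half=2, max_disp=16):
    """Best disparity for left[row, col] by a SAD search along the
    same scanline in the right image (epipolar constraint, rectified)."""
    patch = left[row - half:row + half + 1,
                 col - half:col + half + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:
            break                                 # candidate window off-image
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        cost = np.abs(patch - cand).sum()         # sum of absolute differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic rectified pair: left is right shifted 3 px to the right,
# so the true disparity everywhere (away from the wrap) is 3.
rng = np.random.default_rng(1)
right_img = rng.random((20, 40))
left_img = np.roll(right_img, 3, axis=1)
```

An optimizer such as the paper's ant colony scheme replaces the exhaustive loop over `d` with a guided search, which matters when the disparity range and window sizes are large.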

  11. The contribution of stereo vision to the control of braking.

    PubMed

    Tijtgat, Pieter; Mazyn, Liesbeth; De Laey, Christophe; Lenoir, Matthieu

    2008-03-01

In this study the contribution of stereo vision to the control of braking in front of a stationary target vehicle was investigated. Participants with normal (StereoN) and weak (StereoW) stereo vision drove a go-cart along a linear track towards a stationary vehicle. They could start braking from a distance of 4, 7, or 10 m from the vehicle. Deceleration patterns were measured by means of a laser. A lack of stereo vision was associated with an earlier onset of braking, but the duration of the braking manoeuvre was similar. During the deceleration, the time of peak deceleration occurred earlier in drivers with weak stereo vision. Stopping distance was greater in those lacking stereo vision. A lack of stereo vision was associated with more prudent braking behaviour, in which the driver allowed a larger safety margin. This compensation might be caused either by an unconscious adaptation of the human perceptuo-motor system, or by a systematic underestimation of the distance remaining due to the lack of stereo vision. In general, a lack of stereo vision did not seem to increase the risk of rear-end collisions.

  12. Quantitative evaluation of 3D mouse behaviors and motor function in the open-field after spinal cord injury using markerless motion tracking.

    PubMed

    Sheets, Alison L; Lai, Po-Lun; Fisher, Lesley C; Basso, D Michele

    2013-01-01

Thousands of scientists strive to identify cellular mechanisms that could lead to breakthroughs in developing ameliorative treatments for debilitating neural and muscular conditions such as spinal cord injury (SCI). Most studies use rodent models to test hypotheses, and these are all limited by the methods available to evaluate animal motor function. This study's goal was to develop a behavioral and locomotor assessment system in a murine model of SCI that enables quantitative kinematic measurements to be made automatically in the open-field by applying markerless motion tracking approaches. Three-dimensional movements of eight naïve, five mild, five moderate, and four severe SCI mice were recorded using 10 cameras (100 Hz). Background subtraction was used in each video frame to identify the animal's silhouette, and the 3D shape at each time was reconstructed using shape-from-silhouette. The reconstructed volume was divided into front and back halves using k-means clustering. The animal's front Center of Volume (CoV) height and whole-body CoV speed were calculated and used to automatically classify animal behaviors including directed locomotion, exploratory locomotion, meandering, standing, and rearing. More detailed analyses of CoV height, speed, and lateral deviation during directed locomotion revealed behavioral differences and functional impairments in animals with mild, moderate, and severe SCI when compared with naïve animals. Naïve animals displayed the widest variety of behaviors including rearing and crossing the center of the open-field, the fastest speeds, and the tallest rear CoV heights. SCI reduced the range of behaviors, and decreased speed (r = .70, p < .005) and rear CoV height (r = .65, p < .01) were significantly correlated with greater lesion size. This markerless tracking approach is a first step toward fundamentally changing how rodent movement studies are conducted. By providing scientists with sensitive, quantitative measurement methods, subjectivity and human error are reduced, potentially providing insights leading to breakthroughs in treating human disease.
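The behavior classification step described above can be approximated from the CoV trajectory alone: planar speed separates locomotion from standing, and CoV height flags rearing. The sketch below uses illustrative thresholds and labels of our own choosing, not the study's values or its full five-category scheme.

```python
import numpy as np

def classify_behavior(cov_xyz, fps=100.0, speed_locomotion=0.10,
                      rear_height=0.06):
    """Label each frame from a whole-body CoV trajectory (meters).

    Thresholds (m/s and m) are illustrative placeholders only.
    """
    cov_xyz = np.asarray(cov_xyz, dtype=float)
    vel = np.gradient(cov_xyz[:, :2], axis=0) * fps   # planar velocity, m/s
    speed = np.linalg.norm(vel, axis=1)
    labels = []
    for s, z in zip(speed, cov_xyz[:, 2]):
        if z > rear_height:
            labels.append("rearing")       # CoV raised well above ambulation
        elif s > speed_locomotion:
            labels.append("locomotion")
        else:
            labels.append("standing")
    return labels
```

Distinguishing directed locomotion from exploratory locomotion and meandering would additionally need heading consistency and lateral deviation, as the study describes.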

  13. Quantitative Evaluation of 3D Mouse Behaviors and Motor Function in the Open-Field after Spinal Cord Injury Using Markerless Motion Tracking

    PubMed Central

    Sheets, Alison L.; Lai, Po-Lun; Fisher, Lesley C.; Basso, D. Michele

    2013-01-01

Thousands of scientists strive to identify cellular mechanisms that could lead to breakthroughs in developing ameliorative treatments for debilitating neural and muscular conditions such as spinal cord injury (SCI). Most studies use rodent models to test hypotheses, and these are all limited by the methods available to evaluate animal motor function. This study's goal was to develop a behavioral and locomotor assessment system in a murine model of SCI that enables quantitative kinematic measurements to be made automatically in the open-field by applying markerless motion tracking approaches. Three-dimensional movements of eight naïve, five mild, five moderate, and four severe SCI mice were recorded using 10 cameras (100 Hz). Background subtraction was used in each video frame to identify the animal's silhouette, and the 3D shape at each time was reconstructed using shape-from-silhouette. The reconstructed volume was divided into front and back halves using k-means clustering. The animal's front Center of Volume (CoV) height and whole-body CoV speed were calculated and used to automatically classify animal behaviors including directed locomotion, exploratory locomotion, meandering, standing, and rearing. More detailed analyses of CoV height, speed, and lateral deviation during directed locomotion revealed behavioral differences and functional impairments in animals with mild, moderate, and severe SCI when compared with naïve animals. Naïve animals displayed the widest variety of behaviors including rearing and crossing the center of the open-field, the fastest speeds, and the tallest rear CoV heights. SCI reduced the range of behaviors, and decreased speed (r = .70, p < .005) and rear CoV height (r = .65, p < .01) were significantly correlated with greater lesion size. This markerless tracking approach is a first step toward fundamentally changing how rodent movement studies are conducted. By providing scientists with sensitive, quantitative measurement methods, subjectivity and human error are reduced, potentially providing insights leading to breakthroughs in treating human disease. PMID:24058586

  14. Constraint-based stereo matching

    NASA Technical Reports Server (NTRS)

    Kuan, D. T.

    1987-01-01

    The major difficulty in stereo vision is the correspondence problem that requires matching features in two stereo images. Researchers describe a constraint-based stereo matching technique using local geometric constraints among edge segments to limit the search space and to resolve matching ambiguity. Edge segments are used as image features for stereo matching. Epipolar constraint and individual edge properties are used to determine possible initial matches between edge segments in a stereo image pair. Local edge geometric attributes such as continuity, junction structure, and edge neighborhood relations are used as constraints to guide the stereo matching process. The result is a locally consistent set of edge segment correspondences between stereo images. These locally consistent matches are used to generate higher-level hypotheses on extended edge segments and junctions to form more global contexts to achieve global consistency.

  15. Molecular evidence of stereo-specific lactoferrin dimers in solution.

    PubMed

    Persson, Björn A; Lund, Mikael; Forsman, Jan; Chatterton, Dereck E W; Akesson, Torbjörn

    2010-10-01

Accumulating experimental evidence suggests that bovine as well as human lactoferrin self-associate in aqueous solution. Still, a molecular-level explanation is unavailable. Using force-field-based molecular modeling of the protein-protein interaction free energy we demonstrate (1) that lactoferrin forms highly stereo-specific dimers at neutral pH and (2) that the self-association is driven by a high charge complementarity across the contact surface of the proteins. Our theoretical predictions of dimer formation are verified by electrophoretic mobility and N-terminal sequence analysis on bovine lactoferrin.

  16. New Data Products and Tools from STEREO's IMPACT Investigation

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Li, Y.; Huttunen, E.; Toy, V.; Russell, C. T.; Davis, A.

    2008-05-01

STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long-duration, detailed observations of 1 AU magnetic field structures, plasma and suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The IMPACT team has developed both human-friendly web portals and APIs (application programming interfaces) which provide straightforward data access to scientists and the Virtual Observatories. Several new data products and web features have been developed, including the release of more and higher-level data products, a library of TPLOT-based IDL data analysis routines, and web sites devoted to event analysis and inter-mission and inter-disciplinary collaboration. A web browser integrating STEREO and L1 in situ data with imaging and models provides a powerful venue for exploring the Sun-Earth connection. These latest releases enable the larger scientific community to use STEREO's unique in-situ data in novel ways alongside "third eye" data sets from Wind, ACE and other heliospheric missions, contributing to a better understanding of three-dimensional structures in the solar wind.

  17. People Detection by a Mobile Robot Using Stereo Vision in Dynamic Indoor Environments

    NASA Astrophysics Data System (ADS)

    Méndez-Polanco, José Alberto; Muñoz-Meléndez, Angélica; Morales, Eduardo F.

People detection and tracking is a key issue for social robot design and effective human robot interaction. This paper addresses the problem of detecting people with a mobile robot using a stereo camera. People detection using mobile robots is a difficult task because in real world scenarios it is common to find: unpredictable motion of people, dynamic environments, and different degrees of human body occlusion. Additionally, we cannot expect people to cooperate with the robot to perform its task. In our people detection method, first, an object segmentation method that uses the distance information provided by a stereo camera is used to separate people from the background. The segmentation method proposed in this work takes into account human body proportions to segment people and provides a first estimation of people location. After segmentation, an adaptive people-contour model based on the person's distance to the robot is used to calculate a probability of detecting people. Finally, people are detected by merging the probabilities of the contour model and by evaluating evidence over time by applying a Bayesian scheme. We present experiments on detection of standing and sitting people, as well as people in frontal and side view, with a mobile robot in real world scenarios.
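The final step, accumulating per-frame detection probabilities over time, can be sketched as a log-odds update. This assumes independent per-frame evidence, which is a simplification; the paper's exact Bayesian scheme is not specified in the abstract.

```python
import math

def update_log_odds(log_odds, p_detect):
    """Fold one frame's detection probability into a running log-odds.

    p_detect: per-frame probability from the contour model, clipped
    away from 0 and 1 for numerical safety.
    """
    p = min(max(p_detect, 1e-6), 1 - 1e-6)
    return log_odds + math.log(p / (1 - p))

def posterior(frame_probs, prior=0.5):
    """Posterior probability of a person after a sequence of frames."""
    lo = math.log(prior / (1 - prior))
    for p in frame_probs:
        lo = update_log_odds(lo, p)
    return 1.0 / (1.0 + math.exp(-lo))   # sigmoid back to probability
```

A few mildly positive frames quickly drive the posterior toward certainty, while ambiguous frames (p near 0.5) leave it unchanged, which is the behavior one wants when single-frame contour scores are noisy.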

  18. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, S_AB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
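The abstract does not define the similarity metric S_AB, but a Jaccard-style overlap between two binary obstacle maps is one plausible stand-in for comparing a manual and an automatic segmentation; the function below is our illustrative assumption, not the paper's metric.

```python
def segmentation_similarity(a, b):
    """Symmetric overlap between two binary segmentations.

    a, b: equal-length sequences of 0/1 obstacle labels (flattened maps).
    Returns |A intersect B| / |A union B|, a Jaccard-style stand-in
    for the paper's S_AB, which is not defined in the abstract.
    """
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 1.0
```

A value of 1.0 means the two segmentations mark exactly the same obstacle pixels; values near 0 mean they barely overlap.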

  19. Augmented reality glass-free three-dimensional display with the stereo camera

    NASA Astrophysics Data System (ADS)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display based on a stereo camera, which presents parallax content from different angles with a lenticular lens array, is proposed. Compared with the previous implementation of AR techniques based on two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can get abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on a stereo camera can realize AR glass-free 3D display, and both the virtual objects and the real scene show realistic and pronounced stereo performance.

  20. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments.

    PubMed

    Ramon Soria, Pablo; Arrue, Begoña C; Ollero, Anibal

    2017-01-07

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.

  1. Fast human pose estimation using 3D Zernike descriptors

    NASA Astrophysics Data System (ADS)

    Berjón, Daniel; Morán, Francisco

    2012-03-01

Markerless video-based human pose estimation algorithms face a high-dimensional problem that is frequently broken down into several lower-dimensional ones by estimating the pose of each limb separately. However, in order to do so they need to reliably locate the torso, for which they typically rely on time coherence and tracking algorithms. Losing track usually results in catastrophic failure of the process, requiring human intervention and thus precluding their use in real-time applications. We propose a very fast rough pose estimation scheme based on global shape descriptors built on 3D Zernike moments. Using an articulated model that we configure in many poses, a large database of descriptor/pose pairs can be computed off-line. Thus, the only steps that must be done on-line are the extraction of the descriptors for each input volume and a search against the database to get the most likely poses. While the result of such a process is not a fine pose estimation, it can help more sophisticated algorithms to regain track or make more educated guesses when creating new particles in particle-filter-based tracking schemes. We have achieved a performance of about ten fps on a single computer using a database of about one million entries.
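The on-line step, searching the precomputed descriptor/pose database for the most likely poses, amounts to a k-nearest-neighbor query in descriptor space. A minimal brute-force NumPy sketch (the descriptor vectors here stand in for 3D Zernike moments; a production system over a million entries would use an index structure instead):

```python
import numpy as np

def nearest_poses(query, descriptors, poses, k=3):
    """Return the k database poses whose descriptors are closest to query.

    descriptors: (N, D) array precomputed off-line (e.g. shape
    descriptors per candidate pose); poses: length-N list of the
    associated pose labels/parameters.
    """
    dists = np.linalg.norm(descriptors - query, axis=1)   # Euclidean distance
    idx = np.argsort(dists)[:k]
    return [poses[i] for i in idx], dists[idx]
```

The returned candidate poses can then seed new particles in a particle-filter tracker, as the abstract suggests.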

  2. Connectionist model-based stereo vision for telerobotics

    NASA Technical Reports Server (NTRS)

    Hoff, William; Mathis, Donald

    1989-01-01

Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
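The parameter-network idea, one unit per discrete combination of parameter values with activity encoding confidence, can be illustrated with a Hough-style accumulator for the planar patch z = h + m_x*x + m_y*y. This is a simplified voting sketch of our own, not the paper's neural implementation or its energy-function dynamics.

```python
import numpy as np

def plane_parameter_votes(points, h_vals, mx_vals, my_vals, tol=0.05):
    """Accumulator over a discretized plane-parameter grid.

    Each cell encodes one (h, m_x, m_y) hypothesis for the plane
    z = h + m_x*x + m_y*y; its activity counts the 3D points consistent
    with it, echoing the confidence interpretation of parameter-network
    units. points: iterable of (x, y, z) triples.
    """
    acc = np.zeros((len(h_vals), len(mx_vals), len(my_vals)))
    for x, y, z in points:
        for i, h in enumerate(h_vals):
            for j, mx in enumerate(mx_vals):
                for k, my in enumerate(my_vals):
                    if abs(z - (h + mx * x + my * y)) < tol:
                        acc[i, j, k] += 1       # point supports this unit
    return acc
```

The most active cell names the plane hypothesis best supported by the stereo-derived 3D points; in the paper this competition is resolved by network dynamics rather than an explicit argmax.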

  3. Isomap transform for segmenting human body shapes.

    PubMed

    Cerveri, P; Sarro, K J; Marchente, M; Barros, R M L

    2011-09-01

Segmentation of the 3D human body is a very challenging problem in applications exploiting volume capture data. Direct clustering in the Euclidean space is usually complex or even unsolvable. This paper presents an original method based on the Isomap (isometric feature mapping) transform of the volume data-set. The 3D articulated posture is mapped by Isomap into the pose of Da Vinci's Vitruvian man. The limbs are unrolled from each other and separated from the trunk and pelvis, and the topology of the human body shape is recovered. In such a configuration, Hoshen-Kopelman clustering applied to concentric spherical shells is used to automatically group points into the labelled principal curves. Shepard interpolation is utilised to back-map points of the principal curves into the original volume space. The experimental results performed on many different postures have proved the validity of the proposed method. Reliabilities of less than 2 cm in the location of the joint centres and less than 3° in the direction of the rotation axes were obtained, which qualifies this procedure as a potential tool for markerless motion analysis.

  4. System for clinical photometric stereo endoscopy

    NASA Astrophysics Data System (ADS)

    Durr, Nicholas J.; González, Germán.; Lim, Daryl; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.; Parot, Vicente

    2014-02-01

    Photometric stereo endoscopy is a technique that captures information about the high-spatial-frequency topography of the field of view simultaneously with a conventional color image. Here we describe a system that will enable photometric stereo endoscopy to be clinically evaluated in the large intestine of human patients. The clinical photometric stereo endoscopy system consists of a commercial gastroscope, a commercial video processor, an image capturing and processing unit, custom synchronization electronics, white light LEDs, a set of four fibers with diffusing tips, and an alignment cap. The custom pieces that come into contact with the patient are composed of biocompatible materials that can be sterilized before use. The components can then be assembled in the endoscopy suite before use. The resulting endoscope has the same outer diameter as a conventional colonoscope (14 mm), plugs into a commercial video processor, captures topography and color images at 15 Hz, and displays the conventional color image to the gastroenterologist in real-time. We show that this system can capture a color and topographical video in a tubular colon phantom, demonstrating robustness to complex geometries and motion. The reported system is suitable for in vivo evaluation of photometric stereo endoscopy in the human large intestine.
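Photometric stereo itself recovers per-pixel surface orientation from images taken under different light directions, which is what the four diffusing-tip fibers above provide. The sketch below is the textbook Lambertian least-squares formulation, an assumed model for illustration; the clinical system's actual processing is not detailed in the abstract.

```python
import numpy as np

def recover_normals(L, I):
    """Per-pixel surface normals and albedo from photometric stereo.

    L: (M, 3) light directions (M >= 3, unit vectors);
    I: (M, P) observed intensities for P pixels, assuming the
    Lambertian model I = L @ (albedo * n) per pixel.
    """
    G, *_ = np.linalg.lstsq(L, I, rcond=None)    # (3, P) scaled normals
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)      # unit surface normals
    return normals, albedo

# Toy check: one pixel with normal (0.6, 0, 0.8) and albedo 0.5,
# lit from three orthogonal directions.
L_dirs = np.eye(3)
I_obs = (0.5 * np.array([0.6, 0.0, 0.8])).reshape(3, 1)
normals, albedo = recover_normals(L_dirs, I_obs)
```

With four lights, as in the described endoscope, the system is overdetermined and the least-squares solve also buys some robustness to specular highlights and shadowed pixels.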

  5. Automatic orientation and 3D modelling from markerless rock art imagery

    NASA Astrophysics Data System (ADS)

    Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.

    2013-02-01

This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple imagery. The Scale-Invariant Feature Transform (SIFT) and the Speeded Up Robust Features (SURF) will be assessed based on speed, number of features, matched features, and precision in image and object space depending on the adopted hierarchical matching scheme. The influence of applying in addition Area Based Matching (ABM) with normalised cross-correlation (NCC) and least squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms, aiming at minimum interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein to automate the image orientation step of a sequence of terrestrial markerless imagery is compared with manual bundle block adjustment and terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after feature-based matching (FBM) will be assessed both in image and object space for the 3D modelling of a complex rock art shelter.
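Area Based Matching with normalised cross-correlation, mentioned above as a refinement after feature-based matching, compares intensity patches after removing their mean and scale, making the score invariant to linear brightness changes. A minimal NumPy sketch of the NCC score itself:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized patches.

    Returns a score in [-1, 1]: 1 for patches identical up to a linear
    brightness change, -1 for contrast-inverted patches.
    """
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a = a - a.mean()                      # remove local brightness
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

In an ABM refinement stage, this score is maximized over small shifts of the candidate patch around the feature-based match, and least squares matching then refines the position to sub-pixel precision.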

  6. Markerless motion estimation for motion-compensated clinical brain imaging

    NASA Astrophysics Data System (ADS)

    Kyme, Andre Z.; Se, Stephen; Meikle, Steven R.; Fulton, Roger R.

    2018-05-01

    Motion-compensated brain imaging can dramatically reduce the artifacts and quantitative degradation associated with voluntary and involuntary subject head motion during positron emission tomography (PET), single photon emission computed tomography (SPECT) and computed tomography (CT). However, motion-compensated imaging protocols are not in widespread clinical use for these modalities. A key reason for this seems to be the lack of a practical motion tracking technology that allows for smooth and reliable integration of motion-compensated imaging protocols in the clinical setting. We seek to address this problem by investigating the feasibility of a highly versatile optical motion tracking method for PET, SPECT and CT geometries. The method requires no attached markers, relying exclusively on the detection and matching of distinctive facial features. We studied the accuracy of this method in 16 volunteers in a mock imaging scenario by comparing the estimated motion with an accurate marker-based method used in applications such as image guided surgery. A range of techniques to optimize performance of the method were also studied. Our results show that the markerless motion tracking method is highly accurate (<2 mm discrepancy against a benchmarking system) on an ethnically diverse range of subjects and, moreover, exhibits lower jitter and estimation of motion over a greater range than some marker-based methods. Our optimization tests indicate that the basic pose estimation algorithm is very robust but generally benefits from rudimentary background masking. Further marginal gains in accuracy can be achieved by accounting for non-rigid motion of features. Efficiency gains can be achieved by capping the number of features used for pose estimation provided that these features adequately sample the range of head motion encountered in the study. 
These proof-of-principle data suggest that markerless motion tracking is amenable to motion-compensated brain imaging and holds good promise for a practical implementation in clinical PET, SPECT and CT systems.
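    The comparison against a marker-based benchmark rests on estimating a rigid head pose from matched 3-D feature positions. As a generic illustration of that step (the standard Kabsch/Procrustes least-squares solution, not the authors' actual pipeline), one could recover the rotation and translation between two matched point sets like this:

```python
import numpy as np

def rigid_pose(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q.

    P, Q: (N, 3) arrays of matched 3-D feature positions.
    Kabsch algorithm: centre both clouds, take the SVD of the
    cross-covariance matrix, and guard against a reflection solution.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # reflection guard
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

    Applied frame-to-frame to tracked facial features, the recovered (R, t) is the head motion estimate that a motion-compensated reconstruction would consume.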

  7. Solar Eclipse Video Captured by STEREO-B

    NASA Technical Reports Server (NTRS)

    2007-01-01

    No human has ever witnessed a solar eclipse quite like the one captured on this video. The NASA STEREO-B spacecraft, managed by the Goddard Space Flight Center, was about a million miles from Earth on February 25, 2007, when it photographed the Moon passing in front of the sun. The resulting movie looks like it came from an alien solar system. The fantastically-colored star is our own sun as STEREO sees it in four wavelengths of extreme ultraviolet light. The black disk is the Moon. When we observe a lunar transit from Earth, the Moon appears to be the same size as the sun, a coincidence that produces intoxicatingly beautiful solar eclipses. The silhouette STEREO-B saw, on the other hand, covered only a fraction of the Sun. The Moon seems small because of STEREO-B's location. The spacecraft circles the sun in an Earth-like orbit, but it lags behind Earth by one million miles. This means STEREO-B is 4.4 times farther from the Moon than we are, and so the Moon looks 4.4 times smaller. This version of the STEREO-B eclipse movie is a composite of data from the spacecraft's coronagraph and extreme ultraviolet imager. STEREO-B has a sister ship named STEREO-A. Both are on a mission to study the sun. While STEREO-B lags behind Earth, STEREO-A orbits one million miles ahead ('B' for behind, 'A' for ahead). The gap is deliberate: it allows the two spacecraft to capture offset views of the sun. Researchers can then combine the images to produce 3D stereo movies of solar storms. The two spacecraft were launched in Oct. 2006 and reached their stations on either side of Earth in January 2007.

  8. Expanding the CRISPR/Cas9 toolkit for Pichia pastoris with efficient donor integration and alternative resistance markers.

    PubMed

    Weninger, Astrid; Fischer, Jasmin E; Raschmanová, Hana; Kniely, Claudia; Vogl, Thomas; Glieder, Anton

    2018-04-01

    Komagataella phaffii (syn. Pichia pastoris) is one of the most commonly used host systems for recombinant protein expression. Achieving targeted genetic modifications had been hindered by low frequencies of homologous recombination (HR). Recently, a CRISPR/Cas9 genome editing system was implemented for P. pastoris, enabling gene knockouts based on indels (insertions, deletions) via non-homologous end joining (NHEJ) at near 100% efficiency. However, specifically integrating homologous donor cassettes via HR for replacement studies had proven difficult, resulting in at most ∼20% correct integration using CRISPR/Cas9. Here, we demonstrate CRISPR/Cas9 mediated integration of markerless donor cassettes at efficiencies approaching 100% using a ku70 deletion strain. Ku70p is involved in NHEJ repair, and lack of the protein appears to favor repair via HR nearly exclusively. While the absolute number of transformants in the Δku70 strain is reduced, virtually all surviving transformants showed correct integration. In the wildtype strain, markerless donor cassette integration was also improved up to 25-fold by placing an autonomously replicating sequence (ARS) on the donor cassette. Alternative strategies for improving donor cassette integration using a Cas9 nickase variant, or for reducing off-target-associated toxicity using a high fidelity Cas9 variant, were so far not successful in our hands in P. pastoris. Furthermore, we provide Cas9/gRNA expression plasmids with a Geneticin resistance marker, which proved to be versatile tools for marker recycling. The reported CRISPR-Cas9 tools can be applied for modifying existing production strains and also pave the way for markerless whole genome modification studies in P. pastoris. © 2017 The Authors. Journal of Cellular Biochemistry Published by Wiley Periodicals, Inc.

  10. Development and Long-Term Verification of Stereo Vision Sensor System for Controlling Safety at Railroad Crossing

    NASA Astrophysics Data System (ADS)

    Hosotani, Daisuke; Yoda, Ikushi; Hishiyama, Yoshiyuki; Sakaue, Katsuhiko

    Many people are involved in accidents every year at railroad crossings, but there is no suitable sensor for detecting pedestrians. We are therefore developing a ubiquitous stereo vision based system for ensuring safety at railroad crossings. In this system, stereo cameras are installed at the corners of the crossing and pointed toward its center to monitor the passage of people. The system determines automatically and in real time whether anyone or anything is inside the railroad crossing, and whether anyone remains in the crossing. The system can be configured to automatically switch over to a surveillance monitor or automatically engage an emergency brake system in the event of trouble. We have developed an original stereo vision device and installed a remotely controlled experimental system running the human detection algorithm at a commercial railroad crossing. We then stored and analyzed image and tracking data over two years in order to standardize the system requirement specification.
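    The real-time safety decision ultimately reduces to testing whether any reconstructed 3-D points lie inside the monitored crossing volume. A minimal sketch of such a check, with a purely hypothetical crossing zone and point-count threshold:

```python
import numpy as np

# Hypothetical crossing zone in metres: (xmin, xmax, ymin, ymax, zmin, zmax).
# A real deployment would calibrate this volume from the installed cameras.
ZONE = (0.0, 10.0, 0.0, 8.0, 0.1, 2.5)  # z > 0.1 m ignores the ground plane

def crossing_occupied(points, zone=ZONE, min_points=20):
    """Return True if enough reconstructed 3-D points fall inside the zone.

    points: (N, 3) array of stereo-reconstructed points in world coordinates.
    min_points: small threshold to reject isolated reconstruction noise.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    x0, x1, y0, y1, z0, z1 = zone
    inside = (x0 <= x) & (x <= x1) & (y0 <= y) & (y <= y1) & (z0 <= z) & (z <= z1)
    return int(inside.sum()) >= min_points
```

    In a full system this flag would feed the surveillance switch-over or emergency-brake interface, with tracking used to decide whether someone merely crossed or remained inside.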

  11. Efficiency of extracting stereo-driven object motions

    PubMed Central

    Jain, Anshul; Zaidi, Qasim

    2013-01-01

    Most living things and many nonliving things deform as they move, requiring observers to separate object motions from object deformations. When the object is partially occluded, the task becomes more difficult because it is not possible to use two-dimensional (2-D) contour correlations (Cohen, Jain, & Zaidi, 2010). That leaves dynamic depth matching across the unoccluded views as the main possibility. We examined the role of stereo cues in extracting motion of partially occluded and deforming three-dimensional (3-D) objects, simulated by disk-shaped random-dot stereograms set at randomly assigned depths and placed uniformly around a circle. The stereo-disparities of the disks were temporally oscillated to simulate clockwise or counterclockwise rotation of the global shape. To dynamically deform the global shape, random disparity perturbation was added to each disk's depth on each stimulus frame. At low perturbation, observers reported rotation directions consistent with the global shape, even against local motion cues, but performance deteriorated at high perturbation. Using 3-D global shape correlations, we formulated an optimal Bayesian discriminator for rotation direction. Based on rotation discrimination thresholds, human observers were 75% as efficient as the optimal model, demonstrating that global shapes derived from stereo cues facilitate inferences of object motions. To complement reports of stereo and motion integration in extrastriate cortex, our results suggest the possibilities that disparity selectivity and feature tracking are linked, or that global motion selective neurons can be driven purely from disparity cues. PMID:23325345
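    The optimal observer in this study correlates the global 3-D shape across frames. A toy version of that idea, assuming one depth value per disk and a one-disk rotation step per frame (a drastic simplification of the authors' Bayesian model), could look like:

```python
import numpy as np

def rotation_direction(depth_t0, depth_t1):
    """Classify global rotation direction (+1 or -1) of disks on a circle.

    depth_t0, depth_t1: per-disk depths, ordered around the circle, on two
    consecutive frames. A one-step rotation circularly shifts the profile;
    we keep whichever shift direction correlates better with the new frame,
    a crude stand-in for a global-shape correlation decision rule.
    """
    c0 = depth_t0 - np.mean(depth_t0)
    c1 = depth_t1 - np.mean(depth_t1)
    fwd = np.dot(np.roll(c0, 1), c1)    # profile advanced by one disk
    bwd = np.dot(np.roll(c0, -1), c1)   # profile retarded by one disk
    return 1 if fwd >= bwd else -1
```

    Adding per-disk disparity perturbation before calling this classifier mimics the shape-deformation manipulation in the experiment: performance degrades as the perturbation swamps the shift correlation.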

  12. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments

    PubMed Central

    Ramon Soria, Pablo; Arrue, Begoña C.; Ollero, Anibal

    2017-01-01

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors. PMID:28067851

  13. Analysis and design of stereoscopic display in stereo television endoscope system

    NASA Astrophysics Data System (ADS)

    Feng, Dawei

    2008-12-01

    Many 3D displays have been proposed for medical use. When designing and evaluating a new system, surgeons make three demands: first and foremost, precision; second, displayed images that are easy to understand; and third, since surgery lasts for hours, a display that does not cause fatigue. The stereo television endoscope studied in this paper images the celiac viscera onto the photosurfaces of the left and right CCDs through a dual optical path, imitating human binocular stereo vision. The left and right video signals are processed by frequency multiplication and displayed on a monitor; the viewer perceives a stereo image with depth by observing a polarized LCD screen through polarized glasses. Clinical experiments show that the stereo TV endoscope makes minimally invasive surgery safer and more reliable, shortens operation time, and improves operation accuracy.

  14. Evaluation of Deep Learning Based Stereo Matching Methods: from Ground to Aerial Images

    NASA Astrophysics Data System (ADS)

    Liu, J.; Ji, S.; Zhang, C.; Qin, Z.

    2018-05-01

    Dense stereo matching has been extensively studied in photogrammetry and computer vision. In this paper we evaluate the application of deep learning based stereo methods, which emerged around 2016 and spread rapidly, to aerial stereo pairs rather than the ground-level images commonly used in the computer vision community. Two popular methods are evaluated: one learns the matching cost with a convolutional neural network (known as MC-CNN); the other produces a disparity map in an end-to-end manner by utilizing both geometry and context (known as GC-Net). First, we evaluate the performance of the deep learning based methods on aerial stereo images by direct model reuse: the models pre-trained separately on the KITTI 2012, KITTI 2015 and Driving datasets are applied directly to three aerial datasets. We also give the results of direct training on the target aerial datasets. Second, the deep learning based methods are compared to the classic stereo matching method, Semi-Global Matching (SGM), and a photogrammetric software package, SURE, on the same aerial datasets. Third, a transfer learning strategy is introduced to aerial image matching, based on the assumption that a few target samples are available for model fine-tuning. The experiments showed that the conventional methods and the deep learning based methods performed similarly, and that the latter have greater potential to be explored.

  15. The perception of ego-motion change in environments with varying depth: Interaction of stereo and optic flow.

    PubMed

    Ott, Florian; Pohl, Ladina; Halfmann, Marc; Hardiess, Gregor; Mallot, Hanspeter A

    2016-07-01

    When estimating ego-motion in environments (e.g., tunnels, streets) with varying depth, human subjects confuse ego-acceleration with environment narrowing and ego-deceleration with environment widening. Festl, Recktenwald, Yuan, and Mallot (2012) demonstrated that in nonstereoscopic viewing conditions, this happens despite the fact that retinal measurements of acceleration rate (a variable related to tau-dot) should allow veridical perception. Here we address the question of whether additional depth cues (specifically binocular stereo, object occlusion, or constant average object size) help break the confusion between narrowing and acceleration. Using a forced-choice paradigm, the confusion is shown to persist even if unambiguous stereo information is provided. The confusion can also be demonstrated in an adjustment task in which subjects were asked to keep a constant speed in a tunnel with varying diameter: subjects increased speed in widening sections and decreased speed in narrowing sections even though stereoscopic depth information was provided. If object-based depth information (stereo, occlusion, constant average object size) is added, the confusion between narrowing and acceleration still remains but may be slightly reduced. All experiments are consistent with a simple matched filter algorithm for ego-motion detection, neglecting both parallactic and stereoscopic depth information, but leave open the possibility of cue combination at a later stage.

  16. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    NASA Astrophysics Data System (ADS)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for a quadruped robot autonomous navigation system walking through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, dual constraints combining region matching and pixel matching are established for matching optimization. From the matched edge pixel pairs, 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs the 3D scene quickly and efficiently.
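    Recovering 3-D coordinates from matched pixel pairs under the binocular imaging model follows the standard rectified-pinhole relation Z = f·B/d. A minimal sketch of that final step (generic camera model, not the paper's specific implementation):

```python
def triangulate(xl, xr, y, f, B, cx, cy):
    """Recover a 3-D point from a rectified stereo pixel pair.

    xl, xr: column of the match in the left/right image (pixels); y: shared row.
    f: focal length in pixels; B: baseline in metres; (cx, cy): principal point.
    Standard rectified-pinhole model: Z = f*B/d with disparity d = xl - xr.
    """
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity: match is at or beyond infinity")
    Z = f * B / d
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return X, Y, Z
```

    Applying this to every matched edge pixel pair yields the sparse point cloud from which the terrain surface is reconstructed.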

  17. Stereo study as an aid to visual analysis of ERTS and Skylab images

    NASA Technical Reports Server (NTRS)

    Vangenderen, J. L. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. The parallax on ERTS and Skylab images is sufficiently large for exploitation by human photointerpreters. The ability to view the imagery stereoscopically improves the signal-to-noise ratio. Stereoscopic examination of orbital data can contribute to studies of spatial, spectral, and temporal variations in the imagery. The combination of true stereo parallax and shadow parallax offers many possibilities to human interpreters for making meaningful analyses of orbital imagery.

  18. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187

  1. Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework

    NASA Astrophysics Data System (ADS)

    Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy

    2014-09-01

    We propose a hybrid method for stereo disparity estimation that combines block and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18 % of image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected components analysis, and then determining the boundaries' disparities using the sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundaries' disparities. We consider an application of our method to depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to de-focus users' non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross correlation by up to 33.6 % and some recent methods by up to 6.1 %. Further, our method is highly parallelizable using a CPU-GPU framework based on the Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).
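    The SAD cost function at the heart of the boundary-disparity step can be illustrated with a plain winner-take-all block matcher (a deliberately simple sketch, not the hybrid segment-based pipeline itself):

```python
import numpy as np

def sad_disparity(left, right, max_disp, half=2):
    """Winner-take-all SAD block matching on rectified grayscale images.

    For each left-image pixel, slide a (2*half+1)^2 block across candidate
    disparities and keep the one with the smallest sum of absolute differences.
    """
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            block = left[y-half:y+half+1, x-half:x+half+1].astype(np.float64)
            costs = [np.abs(block - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

    The hybrid method's gain comes precisely from restricting such per-pixel cost evaluation to segment boundaries and interpolating the interior, rather than sweeping every pixel as this brute-force version does.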

  2. Dual CRISPR-Cas9 Cleavage Mediated Gene Excision and Targeted Integration in Yarrowia lipolytica.

    PubMed

    Gao, Difeng; Smith, Spencer; Spagnuolo, Michael; Rodriguez, Gabriel; Blenner, Mark

    2018-05-29

    CRISPR-Cas9 technology has been successfully applied in Yarrowia lipolytica for targeted genomic editing, including gene disruption and integration; however, disruptions by existing methods typically result from small frameshift mutations caused by indels within the coding region, which usually produce unnatural protein products. In this study, a dual cleavage strategy directed by paired sgRNAs is developed for gene knockout. This method allows fast and robust gene excision, demonstrated on six genes of interest. The targeted regions for excision vary in length from 0.3 kb up to 3.5 kb and contain both non-coding and coding regions. The majority of the gene excisions are repaired by perfect nonhomologous end-joining without indels. Based on this dual cleavage system, two targeted markerless integration methods are developed by providing repair templates. While both strategies are effective, the homology-mediated end joining (HMEJ) based method is twice as efficient as the homologous recombination (HR) based method. In both cases, dual cleavage leads to similar or improved gene integration efficiencies compared to gene excision without integration. This dual cleavage strategy will be useful not only for generating more predictable and robust gene knockouts, but also for efficient targeted markerless integration, and for simultaneous knockout and integration in Y. lipolytica. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Establishment of Stereo Multi-sensor Network for Giant Landslide Monitoring and its Deploy in Xishan landslide, Sichuan, China.

    NASA Astrophysics Data System (ADS)

    Liu, C.; Lu, P.; WU, H.

    2015-12-01

    Landslide is one of the most destructive natural disasters, severely affecting human lives as well as the safety of personal property and public infrastructure. Monitoring and predicting landslide movements can maintain an adequate safety level for people in such situations. This paper presents a newly developed Stereo Multi-sensor Landslide Monitoring Network (SMSLMN) based on a uniform temporal geo-reference. As early as 2003, the SAMOA (Surveillance et Auscultation des Mouvements de Terrain Alpins) project was put forward as a plan for monitoring landslide movements; however, SAMOA did not establish a stereo observation network fully covering the surface and interior of a landslide. SMSLMN integrates various sensors, including space-borne, airborne, in-situ and underground sensors, which can quantitatively monitor the slide body and capture precursory movement information at high frequency and high resolution. The whole network has been deployed at the Xishan landslide, Sichuan, P.R. China. According to the characteristics of the stereo monitoring sensors, observation capability indicators were proposed for the different sensors in order to obtain optimal sensor combinations and observation strategies. Meanwhile, adaptive networking and reliable data communication methods were developed to support intelligent observation and sensor data transmission. Some key technologies, such as signal amplification and intelligence extraction, adaptive adjustment of data access frequency, and synchronization control across different sensors, were developed to overcome the problems of this complex observation environment. The collaboratively observed data have been transferred to a remote data center thousands of miles away from the landslide site. These data were fed into a landslide stability analysis model, and preliminary conclusions are drawn at the end of the paper.

  4. CRISPR/Cas9 mediated targeted mutagenesis of the fast growing cyanobacterium Synechococcus elongatus UTEX 2973.

    PubMed

    Wendt, Kristen E; Ungerer, Justin; Cobb, Ryan E; Zhao, Huimin; Pakrasi, Himadri B

    2016-06-23

    As autotrophic prokaryotes, cyanobacteria are ideal chassis organisms for sustainable production of various useful compounds. The newly characterized cyanobacterium Synechococcus elongatus UTEX 2973 is a promising candidate for serving as a microbial cell factory because of its unusually rapid growth rate. Here, we seek to develop a genetic toolkit that enables extensive genomic engineering of Synechococcus 2973 by implementing a CRISPR/Cas9 editing system. We targeted the nblA gene because of its important role in biological response to nitrogen deprivation conditions. First, we determined that the Streptococcus pyogenes Cas9 enzyme is toxic in cyanobacteria, and conjugational transfer of stable, replicating constructs containing the cas9 gene resulted in lethality. However, after switching to a vector that permitted transient expression of the cas9 gene, we achieved markerless editing in 100 % of cyanobacterial exconjugants after the first patch. Moreover, we could readily cure the organisms of antibiotic resistance, resulting in a markerless deletion strain. High expression levels of the Cas9 protein in Synechococcus 2973 appear to be toxic and result in cell death. However, introduction of a CRISPR/Cas9 genome editing system on a plasmid backbone that leads to transient cas9 expression allowed for efficient markerless genome editing in a wild type genetic background.

  5. CRISPR/Cas9 mediated targeted mutagenesis of the fast growing cyanobacterium Synechococcus elongatus UTEX 2973

    DOE PAGES

    Wendt, Kristen E.; Ungerer, Justin; Cobb, Ryan E.; ...

    2016-06-23

    As autotrophic prokaryotes, cyanobacteria are ideal chassis organisms for sustainable production of various useful compounds. The newly characterized cyanobacterium Synechococcus elongatus UTEX 2973 is a promising candidate for serving as a microbial cell factory because of its unusually rapid growth rate. Here, we seek to develop a genetic toolkit that enables extensive genomic engineering of Synechococcus 2973 by implementing a CRISPR/Cas9 editing system. We targeted the nblA gene because of its important role in biological response to nitrogen deprivation conditions. First, we determined that the Streptococcus pyogenes Cas9 enzyme is toxic in cyanobacteria, and conjugational transfer of stable, replicating constructs containing the cas9 gene resulted in lethality. However, after switching to a vector that permitted transient expression of the cas9 gene, we achieved markerless editing in 100 % of cyanobacterial exconjugants after the first patch. Moreover, we could readily cure the organisms of antibiotic resistance, resulting in a markerless deletion strain. In conclusion, high expression levels of the Cas9 protein in Synechococcus 2973 appear to be toxic and result in cell death. However, introduction of a CRISPR/Cas9 genome editing system on a plasmid backbone that leads to transient cas9 expression allowed for efficient markerless genome editing in a wild type genetic background.

  7. Event-Based Stereo Depth Estimation Using Belief Propagation.

    PubMed

    Xie, Zhen; Chen, Shengyong; Orchard, Garrick

    2017-01-01

    Compared to standard frame-based cameras, biologically-inspired event-based sensors capture visual information with low latency and minimal redundancy. These event-based sensors are also far less prone to motion blur than traditional cameras, and still operate effectively in high dynamic range scenes. However, classical frame-based algorithms are not typically suitable for these event-based data, and new processing algorithms are required. This paper focuses on the problem of depth estimation from a stereo pair of event-based sensors. A fully event-based stereo depth estimation algorithm which relies on message passing is proposed. The algorithm not only considers the properties of a single event but also uses a Markov Random Field (MRF) to incorporate the constraints between nearby events, such as disparity uniqueness and depth continuity. The method is tested on five different scenes and compared to other state-of-the-art event-based stereo matching methods. The results show that the method detects more stereo matches than other methods, with each match having higher accuracy. The method can operate in an event-driven manner where depths are reported for individual events as they are received, or the network can be queried at any time to generate a sparse depth frame which represents the current state of the network.
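    The effect of an MRF depth-continuity constraint under message passing can be illustrated with a toy min-sum belief propagation pass over a 1-D chain of nodes (a schematic frame-style sketch, not the authors' full event-based MRF):

```python
import numpy as np

def chain_bp_minsum(unary, lam=1.0, iters=10):
    """Min-sum belief propagation for disparity labels on a 1-D chain MRF.

    unary: (N, L) data costs for N nodes and L disparity labels.
    Pairwise cost lam*|d_i - d_j| encodes depth continuity between
    neighbouring nodes. Returns the per-node label after message passing.
    """
    N, L = unary.shape
    labels = np.arange(L)
    pair = lam * np.abs(labels[:, None] - labels[None, :])  # (L, L) smoothness
    fwd = np.zeros((N, L))   # messages passed left-to-right
    bwd = np.zeros((N, L))   # messages passed right-to-left
    for _ in range(iters):
        for i in range(1, N):
            fwd[i] = np.min(unary[i-1] + fwd[i-1] + pair.T, axis=1)
            fwd[i] -= fwd[i].min()          # normalise for stability
        for i in range(N - 2, -1, -1):
            bwd[i] = np.min(unary[i+1] + bwd[i+1] + pair, axis=1)
            bwd[i] -= bwd[i].min()
    belief = unary + fwd + bwd
    return np.argmin(belief, axis=1)
```

    Nodes with weak or ambiguous data costs inherit a consistent disparity from their neighbours, which is the mechanism the paper exploits to disambiguate individual event matches.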

  8. Integrated Georeferencing of Stereo Image Sequences Captured with a Stereovision Mobile Mapping System - Approaches and Practical Results

    NASA Astrophysics Data System (ADS)

    Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.

    2012-07-01

    Stereovision-based mobile mapping systems enable the efficient capture of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image-based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case of the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, we present georeferencing approaches and discuss cost-efficient workflows which allow validating and updating the INS/GNSS-based trajectory with independently estimated positions during prolonged GNSS signal outages, in order to increase the georeferencing accuracy up to the project requirements.

  9. Method for Stereo Mapping Based on Objectarx and Pipeline Technology

    NASA Astrophysics Data System (ADS)

    Liu, F.; Chen, T.; Lin, Z.; Yang, Y.

    2012-07-01

    Stereo mapping is an important way to acquire 4D products. Based on the development of stereo mapping and the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme that enables interaction between AutoCAD and a digital photogrammetry system is offered using ObjectARX and pipeline technology. An experiment was carried out to verify feasibility using the software MAP-AT (Modern Aerial Photogrammetry Automatic Triangulation) as an example; the results show that this scheme is feasible and is of great significance for integrating data acquisition and editing.

  10. A study on low-cost, high-accuracy, and real-time stereo vision algorithms for UAV power line inspection

    NASA Astrophysics Data System (ADS)

    Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue

    2018-04-01

    Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or from poor accuracy caused by inadequacies in the matching algorithm. To address these issues, we propose a stereo range-finding technique that strikes an excellent balance between cost, matching accuracy and real-time performance for UAV power line inspection. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have lower resource usage and higher matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented on a Spartan-6 FPGA. In comparative experiments, the system using the improved algorithms outperformed the system based on the unimproved algorithms in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.

  11. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult for images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer from attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination source close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then used as the input images for stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out a real robot manipulation task. PMID:28629139
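The inversion of such a scattering model can be sketched as follows. This is a generic single-scattering formulation, not the paper's specific derivation for a near-camera active source; the function name and parameters are illustrative.

```python
import numpy as np

def remove_backscatter(image, backscatter, transmission, eps=1e-6):
    """Invert a simple single-scattering image model
        I = J * t + B * (1 - t),
    where J is the object radiance, t the transmission along the ray and
    B the (possibly non-uniform) backscatter from the active source:
        J = (I - B * (1 - t)) / t.
    t is clipped away from zero to keep the division stable."""
    t = np.clip(transmission, eps, 1.0)
    return (image - backscatter * (1.0 - t)) / t
```

The descattered left and right images would then be fed to an ordinary stereo matcher.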

  12. Design and Implementation of a Novel Portable 360° Stereo Camera System with Low-Cost Action Cameras

    NASA Astrophysics Data System (ADS)

    Holdener, D.; Nebiker, S.; Blaser, S.

    2017-11-01

    The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be processed into georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multi-view stereo camera system presented in this paper, 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D-printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data delivers maximum deviations of 3 cm for typical indoor distances of 2-8 m. The automatic computation of coloured point clouds from the stereo pairs is also demonstrated.

  13. Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors

    PubMed Central

    Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin

    2018-01-01

    Disparity calculation is crucial for binocular sensor ranging. Edge-based disparity estimation is an important branch of sparse stereo matching research and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which improves the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, several dense stereo matching methods, and an advanced edge-based method. Experiments show that our method provides superior performance in these comparisons. PMID:29614028

  14. Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors.

    PubMed

    Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin

    2018-04-03

    Disparity calculation is crucial for binocular sensor ranging. Edge-based disparity estimation is an important branch of sparse stereo matching research and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which improves the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, several dense stereo matching methods, and an advanced edge-based method. Experiments show that our method provides superior performance in these comparisons.
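The adaptive dynamic programming of this paper searches its parameters automatically; the plain fixed-parameter scanline DP that it generalizes can be sketched as follows. The absolute-difference unary cost, the linear smoothness penalty and all names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def dp_scanline_disparity(left_row, right_row, max_disp, smooth=0.1):
    """Disparity along one rectified scanline: an absolute-difference
    matching cost plus a smoothness penalty between neighbouring pixels,
    minimized exactly by dynamic programming (Viterbi)."""
    w = len(left_row)
    d_range = np.arange(max_disp + 1)
    # Unary cost: |I_left(x) - I_right(x - d)|, large where out of bounds.
    cost = np.full((w, max_disp + 1), 1e3)
    for x in range(w):
        valid = d_range <= x
        cost[x, valid] = np.abs(left_row[x] - right_row[x - d_range[valid]])
    # Forward pass: best cumulative cost for every (pixel, disparity) state.
    acc = cost.copy()
    back = np.zeros((w, max_disp + 1), dtype=int)
    for x in range(1, w):
        trans = acc[x - 1][:, None] + smooth * np.abs(
            d_range[:, None] - d_range[None, :])
        back[x] = np.argmin(trans, axis=0)
        acc[x] += np.min(trans, axis=0)
    # Backtrack the optimal disparity path.
    disp = np.zeros(w, dtype=int)
    disp[-1] = np.argmin(acc[-1])
    for x in range(w - 2, -1, -1):
        disp[x] = back[x + 1, disp[x + 1]]
    return disp
```

In the paper the penalty weights would be searched adaptively per image pair rather than fixed as here.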

  15. Deblocking of mobile stereo video

    NASA Astrophysics Data System (ADS)

    Azzari, Lucio; Gotchev, Atanas; Egiazarian, Karen

    2012-02-01

    Most candidate methods for compression of mobile stereo video apply block-transform-based compression based on the H.264 standard, with quantization of transform coefficients driven by a quantization parameter (QP). The compression ratio and the resulting bit rate are directly determined by the QP level, and high compression is achieved at the price of visually noticeable blocking artifacts. Previous studies on perceived quality of mobile stereo video have revealed that blocking artifacts are the most annoying and most influential in the acceptance/rejection of mobile stereo video, and can even completely cancel the 3D effect and the corresponding quality added value. In this work, we address the problem of deblocking of mobile stereo video. We modify a powerful non-local transform-domain collaborative filtering method originally developed for denoising of images and video. The method groups similar block patches residing in the spatial and temporal vicinity of a reference block and filters them collaboratively in a suitable transform domain. We study the most suitable way of finding similar patches in both channels of the stereo video and suggest a hybrid four-dimensional transform to process the collected synchronized (stereo) volumes of grouped blocks. The results benefit from the additional correlation available between the left and right channels of the stereo video. Furthermore, additional sharpening is applied through embedded alpha-rooting in the transform domain, which improves the visual appearance of the deblocked frames.

  16. MEDIASSIST: medical assistance for intraoperative skill transfer in minimally invasive surgery using augmented reality

    NASA Astrophysics Data System (ADS)

    Sudra, Gunther; Speidel, Stefanie; Fritz, Dominik; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger

    2007-03-01

    Minimally invasive surgery is a highly complex medical discipline with various risks for surgeon and patient, but it also has numerous advantages on the patient's side. The surgeon has to adopt special operation techniques and deal with difficulties such as complex hand-eye coordination, a limited field of view and restricted mobility. To alleviate these problems, we propose to support the surgeon's spatial cognition by using augmented reality (AR) techniques to directly visualize virtual objects in the surgical site. In order to generate intelligent support, it is necessary to have an intraoperative assistance system that recognizes surgical skills during the intervention and provides the surgeon with context-aware assistance using AR techniques. With MEDIASSIST we bundle our research activities in the field of intraoperative intelligent support and visualization. Our experimental setup consists of a stereo endoscope, an optical tracking system and a head-mounted display for 3D visualization. The framework will be used as a platform for the development and evaluation of our research in the fields of skill recognition and context-aware assistance generation. This includes methods for surgical skill analysis, skill classification, context interpretation as well as assistive visualization and interaction techniques. In this paper we present the objectives of MEDIASSIST and first results in the fields of skill analysis, visualization and multi-modal interaction. In particular, we present markerless instrument tracking for surgical skill analysis as well as visualization techniques and recognition of interaction gestures in an AR environment.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiu, T; Kearney, V; Liu, H

    Purpose: Dynamic tumor tracking and motion compensation techniques have been proposed to modify beam delivery to follow lung tumor motion on the fly. Conventional treatment plan QA could be performed in advance, since every delivery may be different. Markerless lung tumor tracking using beam's-eye-view EPID images provides the best treatment evaluation mechanism. The purpose of this study is to improve the accuracy of online markerless lung tumor motion tracking. Methods: The lung tumor can be located on every frame of the MV images acquired during radiation therapy treatment by comparison with the corresponding digitally reconstructed radiograph (DRR). A kV-MV CT corresponding curve is applied to the planning kV CT to generate MV CT images for patients, in order to enhance the similarity between DRRs and MV treatment images. This kV-MV CT corresponding curve was obtained by scanning the same CT electron density phantom with a kV CT scanner and an MV scanner (Tomotherapy) or MV CBCT. Two sets of MV DRRs were then generated, for the tumor and for the anatomy without tumor, as references for tracking the tumor on beam's-eye-view EPID images. Results: Phantom studies were performed on a Varian TrueBeam linac. MV treatment images were acquired continuously during each treatment beam delivery at 12 gantry angles using iTools. Markerless tumor tracking was applied with DRRs generated from simulated MVCT. Tumors were tracked on every frame and compared with expected positions based on the programmed phantom motion. The average tracking error was found to be 2.3 mm. Conclusion: This algorithm is capable of detecting lung tumors in a complicated environment without implanted markers. It should be noted that the CT data has a slice thickness of 3 mm, so the statistical accuracy is better than the spatial accuracy. This project has been supported by a Varian Research Grant.

  18. SU-E-J-188: Theoretical Estimation of Margin Necessary for Markerless Motion Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, R; Block, A; Harkenrider, M

    2015-06-15

    Purpose: To estimate the margin necessary to adequately cover the target using markerless motion tracking (MMT) of lung lesions, given the uncertainty in tracking and the size of the target. Methods: Simulations were developed in Matlab to determine the effect of tumor size and tracking uncertainty on the margin necessary to achieve adequate coverage of the target. For simplicity, the lung tumor was approximated by a circle on a 2D radiograph. The tumor diameter was varied from 0.1 to 30 mm in increments of 0.1 mm. From our previous studies using dual-energy markerless motion tracking, we estimated tracking uncertainties in x and y to have a standard deviation of 2 mm. A Gaussian was used to simulate the deviation between the tracked location and the true target location. For each tumor size, 100,000 deviations were randomly generated, and the margin necessary to achieve at least 95% coverage 95% of the time was recorded. Additional simulations were run for varying uncertainties to demonstrate the effect of tracking accuracy on the margin size. Results: The simulations showed an inverse relationship between tumor size and the margin necessary to achieve 95% coverage 95% of the time using the MMT technique. The margin decreased exponentially with target size. An increase in tracking accuracy expectedly showed a decrease in margin size as well. Conclusion: In our clinic a 5 mm expansion of the internal target volume (ITV) is used to define the planning target volume (PTV). These simulations show that for tracking accuracies in x and y better than 2 mm, the required margin is less than 5 mm. This simple simulation can provide physicians with a guideline estimate of the margin necessary for clinical use of MMT, based on the accuracy of their tracking and the size of the tumor.
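The outline of such a simulation can be sketched as follows. The authors worked in Matlab; this Python sketch, the circle-overlap geometry, and all parameter names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def lens_area(r1, r2, d):
    """Intersection area of two circles with radii r1, r2 and centre distance d."""
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return np.pi * min(r1, r2) ** 2
    a1 = r1**2 * np.arccos(np.clip((d*d + r1*r1 - r2*r2) / (2*d*r1), -1, 1))
    a2 = r2**2 * np.arccos(np.clip((d*d + r2*r2 - r1*r1) / (2*d*r2), -1, 1))
    k = (-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2)
    return a1 + a2 - 0.5 * np.sqrt(max(k, 0.0))

def required_margin(diameter_mm, sigma_mm=2.0, n=4000,
                    coverage=0.95, confidence=0.95, seed=0):
    """Smallest margin (0.1 mm steps) such that a circular aperture of radius
    r + margin, offset by a 2D Gaussian tracking error of std sigma_mm,
    covers at least `coverage` of the circular target in at least
    `confidence` of the Monte Carlo trials."""
    rng = np.random.default_rng(seed)
    r = diameter_mm / 2.0
    err = rng.normal(0.0, sigma_mm, size=(n, 2))
    d = np.hypot(err[:, 0], err[:, 1])  # radial tracking offset per trial
    target_area = np.pi * r * r
    for m in np.arange(0.0, 15.0, 0.1):
        frac = np.fromiter((lens_area(r, r + m, di) for di in d), float, n)
        if np.mean(frac / target_area >= coverage) >= confidence:
            return round(float(m), 1)
    return None
```

Consistent with the abstract, larger targets tolerate a larger fraction of uncovered area in absolute terms, so the required margin shrinks as the diameter grows.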

  19. Rapid matching of stereo vision based on fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei

    2016-09-01

    As the core of stereo vision, stereo matching still presents many unsolved problems. For smooth surfaces on which feature points are difficult to extract, this paper adds a projector to the stereo vision measurement system and uses fringe projection techniques: corresponding points are identified by the fact that the phases extracted from the left and right camera images are the same, enabling rapid stereo matching. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method not only broadens the application fields of optical 3D measurement technology and enriches the body of knowledge in the field, but also opens the possibility of commercialized measurement systems for practical projects, which is of significant scientific and economic value.
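The equal-phase correspondence idea can be sketched as follows for rectified images with unwrapped phase maps: each left pixel is matched to the column on the same row of the right phase map with the closest phase. Function and variable names are illustrative, and real systems must also handle phase unwrapping and ambiguity, which this sketch ignores.

```python
import numpy as np

def match_by_phase(phase_left, phase_right):
    """For each pixel of the left unwrapped-phase map, take as its match the
    column on the same row of the right map with the closest phase value
    (rectified geometry: corresponding points share an image row)."""
    h, w = phase_left.shape
    disparity = np.zeros((h, w), dtype=int)
    for y in range(h):
        # |phi_L(x) - phi_R(c)| for every left/right column pair on this row
        diff = np.abs(phase_left[y][:, None] - phase_right[y][None, :])
        disparity[y] = np.arange(w) - np.argmin(diff, axis=1)
    return disparity
```

Because the projected fringe phase varies monotonically across the surface, the nearest-phase match is unique even on textureless regions, which is what makes the approach attractive for smooth objects.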

  20. Epipolar Rectification for CARTOSAT-1 Stereo Images Using SIFT and RANSAC

    NASA Astrophysics Data System (ADS)

    Akilan, A.; Sudheer Reddy, D.; Nagasubramanian, V.; Radhadevi, P. V.; Varadan, G.

    2014-11-01

    Cartosat-1 provides stereo images with a spatial resolution of 2.5 m and high geometric fidelity. The stereo cameras on the spacecraft have look angles of +26 degrees and -5 degrees, respectively, which yields effective along-track stereo. Any DSM generation algorithm can use the stereo images for accurate 3D reconstruction and measurement of the ground. Dense match points and pixel-wise matching are prerequisites in DSM generation to capture discontinuities and occlusions for accurate 3D modelling applications. Epipolar image matching reduces the computational effort from two-dimensional area searches to one-dimensional searches. Thus, epipolar rectification is preferred as a pre-processing step for accurate DSM generation. In this paper we explore a method based on SIFT and RANSAC for epipolar rectification of Cartosat-1 stereo images.
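The RANSAC stage of such a pipeline can be sketched in isolation: given putative SIFT matches (assumed already extracted), a fundamental matrix is fitted robustly with a normalized 8-point estimator. The estimator choice, distance measure and all names are standard-textbook assumptions, not details taken from the paper.

```python
import numpy as np

def _normalize(pts):
    """Hartley normalization: centroid to origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T

def eight_point(p1, p2):
    """Normalized 8-point estimate of F with x2' F x1 = 0, rank-2 enforced."""
    x1, T1 = _normalize(p1)
    x2, T2 = _normalize(p2)
    A = np.column_stack([x2[:, 0:1] * x1, x2[:, 1:2] * x1, x1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt  # enforce rank 2
    F = T2.T @ F @ T1                        # undo normalization
    return F / np.linalg.norm(F)

def ransac_fundamental(p1, p2, n_iter=500, thresh=1.0, seed=0):
    """RANSAC over putative matches, scored by a Sampson-style distance."""
    rng = np.random.default_rng(seed)
    h1 = np.column_stack([p1, np.ones(len(p1))])
    h2 = np.column_stack([p2, np.ones(len(p2))])
    best = np.zeros(len(p1), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(p1), 8, replace=False)
        F = eight_point(p1[idx], p2[idx])
        Fx1 = h1 @ F.T   # epipolar lines in image 2
        Ftx2 = h2 @ F    # epipolar lines in image 1
        num = np.abs(np.sum(h2 * Fx1, axis=1))
        den = np.sqrt(Fx1[:, 0]**2 + Fx1[:, 1]**2
                      + Ftx2[:, 0]**2 + Ftx2[:, 1]**2) + 1e-12
        inliers = num / den < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return eight_point(p1[best], p2[best]), best
```

The resulting F defines the epipolar geometry from which the rectifying transforms are then derived.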

  1. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    NASA Astrophysics Data System (ADS)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and a wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, and technology aspects and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal plates with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  2. Stereo vision and strabismus

    PubMed Central

    Read, J C A

    2015-01-01

    Binocular stereopsis, or stereo vision, is the ability to derive information about how far away objects are, based solely on the relative positions of the object in the two eyes. It depends on both sensory and motor abilities. In this review, I briefly outline some of the neuronal mechanisms supporting stereo vision, and discuss how these are disrupted in strabismus. I explain, in some detail, current methods of assessing stereo vision and their pros and cons. Finally, I review the evidence supporting the clinical importance of such measurements. PMID:25475234

  3. Wide-Baseline Stereo-Based Obstacle Mapping for Unmanned Surface Vehicles

    PubMed Central

    Mou, Xiaozheng; Wang, Han

    2018-01-01

    This paper proposes a wide-baseline stereo-based static obstacle mapping approach for unmanned surface vehicles (USVs). The proposed approach eliminates the complicated calibration work and the bulky rig of our previous binocular stereo system, and raises the ranging ability from 500 to 1000 m with an even larger baseline obtained from the motion of the USV. By integrating a monocular camera with GPS and compass information, the world locations of the detected static obstacles are reconstructed while the USV is traveling, and an obstacle map is then built. To achieve more accurate and robust performance, multiple pairs of frames are leveraged to synthesize the final reconstruction results in a weighting model. Experimental results based on our own dataset demonstrate the high efficiency of our system. To the best of our knowledge, we are the first to address the task of wide-baseline stereo-based obstacle mapping in a maritime environment. PMID:29617293
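The gain in ranging ability with baseline follows directly from stereo triangulation for rectified views; the numbers below are illustrative, not taken from the paper.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation for rectified views: Z = f * B / d.
    At a fixed minimum resolvable disparity d, doubling the baseline B
    doubles the maximum measurable depth Z, which is why a motion-extended
    baseline extends the ranging limit."""
    return focal_px * baseline_m / disparity_px
```

For example, with an assumed focal length of 1500 px and a minimum reliable disparity of 15 px, a 5 m baseline reaches 500 m while a 10 m baseline reaches 1000 m.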

  4. Human Stereopsis is not Limited by the Optics of the Well-focused Eye

    PubMed Central

    Vlaskamp, Björn N.S.; Yoon, Geunyoung; Banks, Martin S.

    2011-01-01

    Human stereopsis—the perception of depth from differences in the two eyes’ images—is very precise: Image differences smaller than a single photoreceptor can be converted into a perceived difference in depth. To better understand what determines this precision, we examined how the eyes’ optics affects stereo resolution. We did this by comparing performance with normal, well-focused optics and with optics improved by eliminating chromatic aberration and correcting higher-order aberrations. We first measured luminance contrast sensitivity in both eyes and showed that we had indeed improved optical quality significantly. We then measured stereo resolution in two ways: by finding the finest corrugation in depth that one can perceive, and by finding the smallest disparity one can perceive as different from zero. Our optical manipulation had no effect on stereo performance. We checked this by redoing the experiments at low contrast and again found no effect of improving optical quality. Thus, the resolution of human stereopsis is not limited by the optics of the well-focused eye. We discuss the implications of this remarkable finding. PMID:21734272

  5. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is exploited by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely to physical surfaces in the scene. The technique is largely independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented for a complex urban scene containing various man-made and natural features. This scene poses a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.

  6. The Sun in STEREO

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Parallax gives depth to life. Simultaneous viewing from slightly different vantage points makes binocular humans superior to monocular cyclopes, and fixes us in the third dimension of the Universe. We've been stunned by 3-d images of Venus and Mars (along with more familiar views of Earth). Now astronomers plan to give us the best view of all: 3-d images of the dynamic Sun. That's one of the prime goals of NASA's Solar Terrestrial Relations Observatories, also known as STEREO. STEREO is a pair of spacecraft observatories, one placed in orbit ahead of Earth, and one to be placed in an Earth-trailing orbit. Simultaneous observations of the Sun with the two STEREO spacecraft will provide extraordinary 3-d views of all types of solar activity, especially the dramatic events called coronal mass ejections, which send high-energy particles from the outer solar atmosphere hurtling towards Earth. The image above is the first image of the Sun from the two STEREO spacecraft: an extreme ultraviolet shot of the Sun's million-degree corona, taken by the Extreme Ultraviolet Imager of the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) instrument package. STEREO's first 3-d solar images should be available in April if all goes well. Put on your red and blue glasses!

  7. Coronal hole evolution from multi-viewpoint data as input for a STEREO solar wind speed persistence model

    NASA Astrophysics Data System (ADS)

    Temmer, Manuela; Hinterreiter, Jürgen; Reiss, Martin A.

    2018-03-01

    We present a concept study of a solar wind forecasting method for Earth, based on persistence modeling from STEREO in situ measurements combined with multi-viewpoint EUV observational data. By comparing the fractional areas of coronal holes (CHs) extracted from EUV data of STEREO and SoHO/SDO, we perform an uncertainty assessment derived from changes in the CHs and apply those changes to the predicted solar wind speed profile at 1 AU. We evaluate the method for the time period 2008-2012, and compare the results to a persistence model based on ACE in situ measurements and to the STEREO persistence model without implementing the information on CH evolution. Compared to an ACE-based persistence model, the STEREO persistence model which takes into account the evolution of CHs is able to increase the number of correctly predicted high-speed streams by about 12%, to decrease the number of missed streams by about 23%, and to decrease the number of false alarms by about 19%. However, the added information on CH evolution is not able to deliver more accurate speed values for the forecast than the STEREO persistence model without CH information, which performs better than an ACE-based persistence model. Investigating the CH evolution between STEREO and Earth view for varying separation angles over ~25-140° East of Earth, we derive some relation between expanding CHs and increasing solar wind speed, but a less clear relation for decaying CHs and decreasing solar wind speed. This fact most likely prevents the method from making more precise forecasts. The obtained results support a future L5 mission and show the importance and valuable contribution of using multi-viewpoint data.
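The core of a persistence forecast from a spacecraft east of Earth is a corotation time shift: the same corotating stream is expected to sweep past Earth once the Sun has rotated through the separation angle. This minimal sketch omits the paper's additional scaling of the prediction by observed CH area changes; names and the constant rotation-period assumption are illustrative.

```python
CARRINGTON_SYNODIC_DAYS = 27.27  # apparent solar rotation period from Earth

def persistence_forecast(times_days, speeds_kms, sep_angle_deg):
    """Map an in situ solar wind speed series measured by a spacecraft
    trailing Earth by `sep_angle_deg` (east of Earth) onto expected Earth
    arrival times via a simple corotation lag."""
    lag_days = sep_angle_deg / 360.0 * CARRINGTON_SYNODIC_DAYS
    return [t + lag_days for t in times_days], list(speeds_kms)
```

For a separation of 60° the lag is about 4.5 days, which is the forecasting lead time such a geometry provides.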

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poels, Kenneth, E-mail: kenneth.poels@uzbrussel.be; Verellen, Dirk; Van de Vondel, Iwein

    Purpose: Because frame rates on currently available clinical electronic portal imaging devices (EPIDs) are limited to 7.5 Hz, a new commercially available PerkinElmer EPID (XRD 1642 AP19) with a maximum frame rate of 30 Hz and a new scintillator (Kyokko PI200) with improved sensitivity (light output) for megavolt (MV) irradiation was evaluated. In this work, the influence of MV pulse artifacts and pulsing-artifact suppression techniques on fiducial-marker and markerless detection of a lung lesion was investigated, because target localization is an important component of uncertainty in geometrical verification of real-time tumor tracking. Methods: Visicoil™ markers with diameters of 0.05 and 0.075 cm were used for MV marker tracking at frame rates of 7.5, 15, and 30 Hz. A 30 Hz readout of the detector was obtained by 2 × 2 pixel binning, reducing spatial resolution. Static marker detection was evaluated as a function of increasing phantom thickness. Additionally, markerless tracking was conducted and compared with the ground-truth fiducial marker motion. Performance of MV target detection was investigated by comparing the least-squares sine-wave fit of the detected marker positions with the predefined sine-wave motion. For fiducial marker detection, a Laplacian-of-Gaussian enhancement was applied, after which normalized cross-correlation was used to find the most probable marker position. Markerless detection was performed using the scale- and orientation-adaptive mean shift tracking algorithm. For each MV fluoroscopy, a free-running (FR-nF) acquisition mode (ignoring MV pulsing during readout) was compared with two acquisition modes intended to reduce MV pulsing artifacts, i.e., combined wavelet-FFT filtering (FR-wF) and electronic readout synchronized with respect to MV pulses.
    Results: A 0.05 cm Visicoil marker resulted in an unacceptable root-mean-square error (RMSE) > 0.2 cm at the maximum frame rate of 30 Hz during FR-nF readout. With a 30 Hz synchronized readout (S-nF) and during 15 Hz readout (independent of readout mode), RMSE was submillimeter for a static 0.05 cm Visicoil. A dynamic 0.05 cm Visicoil was not detectable on the XRD 1642 AP19, despite a fast synchronized readout. For a 0.075 cm Visicoil, deviations from the sine-wave motion were submillimeter (RMSE < 0.08 cm), independent of the acquisition mode (FR, S). For markerless tumor detection, FR-nF images resulted in RMSE > 0.3 cm, while for MV fluoroscopy in S-mode RMSE < 0.1 cm for 15 Hz and RMSE < 0.16 cm for 30 Hz. The largest consistency in target localization was obtained during 15 Hz S-nF readout. Conclusions: In general, marker contrast decreased at higher frame rates, which was detrimental to marker detection success. In this work, Visicoils with a thickness of 0.075 cm showed the best results at a 15 Hz frame rate, while non-MV-compatible 0.05 cm Visicoil markers were not visible on the new EPID with improved sensitivity compared to EPID models based on a Kodak Lanex Fast scintillator. No noticeable influence of pulsing artifacts on the detection of a 0.075 cm Visicoil was observed, while a synchronized readout provided the most reliable detection of a markerless soft-tissue structure.

  9. Real-Time Visualization Tool Integrating STEREO, ACE, SOHO and the SDO

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Marchant, W.

    2011-12-01

    The STEREO/IMPACT team has developed a new web-based visualization tool for near real-time data from the STEREO instruments, ACE and SOHO as well as relevant models of solar activity. This site integrates images, solar energetic particle, solar wind plasma and magnetic field measurements in an intuitive way using near real-time products from NOAA and other sources to give an overview of recent space weather events. This site enhances the browse tools already available at UC Berkeley, UCLA and Caltech which allow users to visualize similar data from the start of the STEREO mission. Our new near real-time tool utilizes publicly available real-time data products from a number of missions and instruments, including SOHO LASCO C2 images from the SOHO team's NASA site, SDO AIA images from the SDO team's NASA site, STEREO IMPACT SEP data plots and ACE EPAM data plots from the NOAA Space Weather Prediction Center and STEREO spacecraft positions from the STEREO Science Center.

  10. Image-based tracking of the suturing needle during laparoscopic interventions

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Kroehnert, A.; Bodenstedt, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2015-03-01

    One of the most complex and difficult tasks for surgeons during minimally invasive interventions is suturing. A prerequisite to assist the suturing process is the tracking of the needle. The endoscopic images provide a rich source of information which can be used for needle tracking. In this paper, we present an image-based method for markerless needle tracking. The method uses a color-based and geometry-based segmentation to detect the needle. Once an initial needle detection is obtained, a region of interest enclosing the extracted needle contour is passed on to a reduced segmentation. It is evaluated with in vivo images from da Vinci interventions.

  11. ROS-based ground stereo vision detection: implementation and experiments.

    PubMed

    Hu, Tianjiang; Zhao, Boxin; Tang, Dengqing; Zhang, Daibing; Kong, Weiwei; Shen, Lincheng

This article concentrates on an open-source implementation of flying object detection in cluttered scenes, which is of significance for ground stereo-aided autonomous landing of unmanned aerial vehicles. The ground stereo vision guidance system is presented, with details on the system architecture and workflow. The Chan-Vese detection algorithm is then considered and implemented in the Robot Operating System (ROS) environment. A data-driven interactive scheme is developed to collect datasets for parameter tuning and performance evaluation. Outdoor flight experiments captured a dataset of sequential stereo images and recorded simultaneous data from the pan-and-tilt unit, onboard sensors and differential GPS. Experimental results using the collected dataset validate the effectiveness of the published ROS-based detection algorithm.

  12. A Markerless 3D Computerized Motion Capture System Incorporating a Skeleton Model for Monkeys.

    PubMed

    Nakamura, Tomoya; Matsumoto, Jumpei; Nishimaru, Hiroshi; Bretas, Rafael Vieira; Takamura, Yusaku; Hori, Etsuro; Ono, Taketoshi; Nishijo, Hisao

    2016-01-01

In this study, we propose a novel markerless motion capture system (MCS) for monkeys, in which 3D surface images of monkeys were reconstructed by integrating data from four depth cameras, and a skeleton model of the monkey was fitted onto the 3D images in each frame of the video. To validate the MCS, first, estimated 3D positions of body parts were compared between the MCS-assisted estimation and manual estimation based on visual inspection while a monkey performed a shuttling behavior in which it had to avoid obstacles in various positions. The mean estimation errors for the positions of body parts (3-14 cm) and for head rotation (35-43°) between the MCS-assisted and manual estimation were comparable to the errors between two different experimenters performing manual estimation. Furthermore, the MCS could identify specific monkey actions, with no false positive or false negative detections of actions compared with manual estimation. Second, to check the reproducibility of MCS-assisted estimation, the same analyses were repeated by a different user. The estimation errors of the positions of most body parts between the two experimenters were significantly smaller in the MCS-assisted estimation than in the manual estimation. Third, the effects of methamphetamine (MAP) administration on the spontaneous behaviors of four monkeys were analyzed using the MCS. MAP significantly increased head movements, tended to decrease locomotion speed, and had no significant effect on total path length. These results were comparable to previous human clinical data. Furthermore, the estimated data following MAP injection (total path length, walking speed, and speed of head rotation) correlated significantly between the two experimenters in the MCS-assisted estimation (r = 0.863 to 0.999).
The results suggest that the presented MCS in monkeys is useful in investigating neural mechanisms underlying various psychiatric disorders and developing pharmacological interventions.

  13. Congruence analysis of point clouds from unstable stereo image sequences

    NASA Astrophysics Data System (ADS)

    Jepping, C.; Bethmann, F.; Luhmann, T.

    2014-06-01

This paper deals with the correction of the exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings, where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of the deformed surfaces are generated for the complete stereo sequence. The first objective of this research focuses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for a reliable handling of occlusions and other disturbances that may occur. The second objective is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose, a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
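The RANSAC scheme described above (repeatedly fitting a 3D similarity transformation to randomly selected point groups and keeping the transform that explains the most points) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the closed-form Umeyama-style similarity fit, the sample size of 4, and the tolerance values are all assumptions.

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form (Umeyama-style) fit of dst ~ s * R @ src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    # reflection guard keeps R a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(0).sum()
    return s, R, mu_d - s * R @ mu_s

def congruent_points(epoch0, epoch1, n_iter=200, tol=0.01, seed=0):
    """RANSAC congruence analysis: the largest set of points that moves as one
    similarity-transformed body is the stable area; the rest is deformation."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(epoch0), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(epoch0), 4, replace=False)
        s, R, t = similarity_transform(epoch0[idx], epoch1[idx])
        resid = np.linalg.norm(epoch1 - (s * epoch0 @ R.T + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

Given two epochs of tracked tie points, the returned mask flags the congruent (undeformed) subset, and the final transform can be re-estimated from those inliers alone.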

  14. A stereo remote sensing feature selection method based on artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi

    2014-05-01

To improve the efficiency of stereo information for remote sensing classification, this paper presents a stereo remote sensing feature selection method based on the artificial bee colony algorithm. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on the three-dimensional structure and the optical characteristics, respectively. Firstly, the three-dimensional structural characteristics can be analyzed with 3D-Zernike descriptors (3DZD). However, different parameters of the 3DZD describe three-dimensional structures of different complexity, and the parameters need to be optimally selected for the various objects on the ground. Secondly, the features representing the optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features contains a large amount of redundant information, and this redundancy may not improve the classification accuracy and may even degrade it. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve it. Experimental results show that the proposed method effectively improves both the computational efficiency and the classification accuracy.

  15. An embedded multi-core parallel model for real-time stereo imaging

    NASA Astrophysics Data System (ADS)

    He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu

    2018-04-01

Real-time processing based on embedded systems will enhance the applicability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late compared with that for PC computers. In this paper, a parallel model for stereo imaging on an embedded multi-core processing platform is studied and verified. After analyzing the computational load, throughput capacity and buffering requirements, a two-stage pipelined parallel model based on message passing is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.

  16. A dual-adaptive support-based stereo matching algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yin; Zhang, Yun

    2017-07-01

Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., the Cross method), which, however, does not work well across different images. To address this issue, this paper proposes a novel dual-adaptive-support (DAS)-based stereo matching method, which uses both the appearance and shape information of a local region to segment supports automatically, and then integrates the DAS-based cost aggregation with the absolute difference plus census transform cost, scanline optimization and disparity refinement to build a stereo matching system. The performance of the DAS method is evaluated on the Middlebury benchmark and compared with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, uses fewer parameters and is suitable for parallel computing.
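The "absolute difference plus census transform" matching cost mentioned above can be illustrated with a short sketch. The census window size, the exponential squashing of the two terms, and the `popcount` helper are common choices assumed here for illustration, not details taken from the paper.

```python
import numpy as np

def census(img, w=3):
    """Census transform: encode each pixel's w x w neighbourhood as sign bits."""
    r, (H, W) = w // 2, img.shape
    pad = np.pad(img, r, mode='edge')
    out = np.zeros((H, W), dtype=np.uint32)
    bit = 0
    for dy in range(w):
        for dx in range(w):
            if (dy, dx) != (r, r):
                out |= (img > pad[dy:dy + H, dx:dx + W]).astype(np.uint32) << bit
                bit += 1
    return out

def popcount(x):
    """Number of set bits per element (Hamming weight)."""
    x, c = x.copy(), np.zeros(x.shape, dtype=np.uint32)
    for _ in range(32):
        c += x & 1
        x >>= 1
    return c

def ad_census_cost(left, right, d, lam_ad=10.0, lam_c=30.0):
    """Per-pixel AD + census cost for disparity d, each term squashed to [0, 1)."""
    shifted = np.roll(right, d, axis=1)
    ad = np.abs(left.astype(float) - shifted.astype(float))
    ham = popcount(census(left) ^ census(shifted)).astype(float)
    return (1.0 - np.exp(-ad / lam_ad)) + (1.0 - np.exp(-ham / lam_c))
```

Combining the two terms makes the cost robust both to radiometric differences (census) and to repetitive texture (AD), which is why the AD-census combination is a popular aggregation input.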

  17. Local Surface Reconstruction from MER images using Stereo Workstation

    NASA Astrophysics Data System (ADS)

    Shin, Dongjoe; Muller, Jan-Peter

    2010-05-01

The authors present a semi-automatic workflow that reconstructs the 3D shape of the Martian surface from local stereo images delivered by PanCam or NavCam on systems such as the NASA Mars Exploration Rover (MER) mission and, in the future, the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation, which is then followed by tiepoint refinement, stereo matching using region growing, and Levenberg-Marquardt Algorithm (LMA)-based bundle adjustment processing. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphics hardware independence, the stereo application has been implemented using JPL's JADIS graphics library, which is written in JAVA, and the remaining processing blocks used in the reconstruction workflow have also been developed as a JAVA package to increase code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, an additional validity check and/or quality-enhancement step is often required. To meet this requirement, the workflow includes a tiepoint refinement process based on the Adaptive Least Squares Correlation (ALSC) matching algorithm, so that the initial tiepoints can be enhanced to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from the accuracy of the reconstruction, the other criterion for assessing its quality is obviously the density (or completeness) of the reconstruction, which the refinement process does not address.
Thus, we re-implemented a stereo region-growing process, which is a core matching algorithm within the UCL-HRSC reconstruction workflow. This algorithm performs reasonably even for close-range imagery, so long as the stereo pair does not have too large a baseline displacement. For post-processing, a Bundle Adjustment (BA) is used to optimise the initial calibration parameters, which bootstrap the reconstruction results. Amongst the many options for non-linear optimisation, the LMA has been adopted for its stability, so that the BA searches for the best calibration parameters whilst iteratively minimising the re-projection errors of the initial reconstruction points. For evaluation, the result of the proposed method is compared with the reconstruction from a disparity map provided by JPL using their operational processing system. Visual and quantitative comparisons will be presented, as well as updated camera parameters. As future work, we will investigate a method to expedite the stereo region-growing process and look into the possibility of extending the use of the stereo workstation to orbital image processing. Such an interactive stereo workstation can also be used to digitise point and line features, and to assess the accuracy of stereo-processed results produced by other stereo matching algorithms available within the consortium and elsewhere. It can also provide "ground truth", when suitably refined, for stereo matching algorithms, as well as visual cues as to why these matching algorithms sometimes fail, so as to mitigate this in the future. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 218814 "PRoVisG".

  18. SU-E-J-26: A Novel Technique for Markerless Self-Sorted 4D-CBCT Using Patient Motion Modeling: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, L; Zhang, Y; Harris, W

    2015-06-15

Purpose: To develop an automatic markerless 4D-CBCT projection sorting technique using a patient respiratory motion model extracted from the planning 4D-CT images. Methods: Each phase of the onboard 4D-CBCT is considered a deformation of one phase of the prior planning 4D-CT. The deformation field map (DFM) is represented as a linear combination of three major deformation patterns extracted from the planning 4D-CT using principal component analysis (PCA). The coefficients of the PCA deformation patterns are solved by matching the digitally reconstructed radiograph (DRR) of the deformed volume to the acquired onboard projection. The PCA coefficients are solved for each single projection and are used for phase sorting: projections at the peaks of the Z-direction coefficient are sorted as phase 1, and the other projections are assigned to 10 phase bins by dividing the phases equally between peaks. The 4D digital extended-cardiac-torso (XCAT) phantom was used to evaluate the proposed technique. Three scenarios were simulated, with a changed tumor motion amplitude (3 cm to 2 cm), a tumor spatial shift (8 mm SI), and a tumor-body motion phase shift (2 phases) from the prior to the onboard images. Projections were simulated over a 180-degree scan angle for the 4D-XCAT. The percentage of accurately binned projections across the entire dataset was calculated to represent the phase sorting accuracy. Results: With a tumor motion amplitude changed from 3 cm to 2 cm, the markerless phase sorting accuracy was 100%. With a tumor phase shift of 2 phases w.r.t. body motion, the phase sorting accuracy was 100%. With a tumor spatial shift of 8 mm in the SI direction, the phase sorting accuracy was 86.1%. Conclusion: The XCAT phantom simulation results demonstrate that it is feasible to use prior knowledge and a motion modeling technique to achieve markerless 4D-CBCT phase sorting. National Institutes of Health Grant No. R01-CA184173; Varian Medical Systems.
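The peak-based phase sorting step described above can be sketched in a few lines, assuming the per-projection Z-direction PCA coefficients have already been solved. Bins are 0-indexed here (peaks map to bin 0 rather than "phase 1"), the peak detector is a simple local-maximum test, and samples before the first or after the last peak keep bin 0; all of these are simplifications for illustration.

```python
import numpy as np

def sort_phases(z_coef, n_bins=10):
    """Peak-based phase sorting: local maxima of the Z-direction PCA
    coefficient mark the start of a breathing cycle; projections between
    consecutive peaks are split evenly into n_bins phase bins."""
    peaks = [i for i in range(1, len(z_coef) - 1)
             if z_coef[i] >= z_coef[i - 1] and z_coef[i] > z_coef[i + 1]]
    phase = np.zeros(len(z_coef), dtype=int)
    for a, b in zip(peaks[:-1], peaks[1:]):
        for j in range(a, b):
            # linear phase within the cycle spanned by peaks a..b
            phase[j] = int(n_bins * (j - a) / (b - a)) % n_bins
    return phase, peaks
```

Each projection then carries a phase label and can be grouped with its bin for phase-resolved reconstruction.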

  19. Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image

    NASA Astrophysics Data System (ADS)

    Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren

    2012-01-01

The pose (position and attitude) and velocity of in-flight projectiles have a major influence on their performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method adopts a single linear-array image collected by a stereo vision system combining a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired using a model-based optimization algorithm. The algorithm achieves an optimal estimate of the parameters by matching the stereo projection of the projectile with that of a 3D model of the same size. The speed and the angle of attack (AOA) can also be determined subsequently. Experiments are presented to test the proposed method.

  20. Visual tracking of da Vinci instruments for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Kuhn, E.; Bodenstedt, S.; Röhl, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2014-03-01

    Intraoperative tracking of laparoscopic instruments is a prerequisite to realize further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present an image-based markerless 3D tracking of different da Vinci instruments in near real-time without an explicit model. The method is based on different visual cues to segment the instrument tip, calculates a tip point and uses a multiple object particle filter for tracking. The accuracy and robustness is evaluated with in vivo data.

  1. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    PubMed Central

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM and Flash, through IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and the users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pairs of many different sizes, up to a maximum image size of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum disparity of 64 pixels. PMID:23459385
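The SAD winner-takes-all matching that the FPGA pipeline implements can be sketched in software as follows. This is a reference sketch, not the hardware design: SciPy's `uniform_filter` stands in for the hardware window-sum circuit, and the window and disparity-range defaults mirror the figures quoted above only by way of example.

```python
import numpy as np
from scipy.ndimage import uniform_filter  # box filter stands in for the window-sum circuit

def sad_disparity(left, right, max_disp=64, win=5):
    """Winner-takes-all SAD block matching: for each candidate disparity,
    aggregate absolute differences over a win x win window, then take the
    per-pixel argmin over all disparities."""
    H, W = left.shape
    L, R = left.astype(np.int32), right.astype(np.int32)
    cost = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        # left pixel x matches right pixel x - d
        diff = np.abs(L[:, d:] - R[:, :W - d]).astype(float)
        cost[d, :, d:] = uniform_filter(diff, size=win)
    return cost.argmin(axis=0)
```

The hardware version evaluates all disparities for a window in parallel each clock cycle, which is what makes the 23 fps figure achievable at 90 MHz.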

  2. Stereo sequence transmission via conventional transmission channel

    NASA Astrophysics Data System (ADS)

    Lee, Ho-Keun; Kim, Chul-Hwan; Han, Kyu-Phil; Ha, Yeong-Ho

    2003-05-01

This paper proposes a new stereo sequence transmission technique that uses digital watermarking for compatibility with conventional 2D digital TV. Stereo sequences are generally compressed and transmitted by exploiting the temporal-spatial redundancy between the stereo images, but users with a conventional digital TV cannot easily watch the transmitted 3D sequences because the various 3D compression methods are incompatible with it. To solve this problem, we exploit the information-hiding capability of digital watermarking and conceal the information of the other stereo image in the three color channels of the reference image. The main goal of the presented technique is to let people who own a conventional DTV watch stereo movies at the same time as 3D viewers. This goal is reached by considering the response of human eyes to color information and by using digital watermarking. To hide the right images in the left images effectively, bit changes in the three color channels are performed according to the estimated disparity. The proposed method assigns the displacement information of the right image to each YCbCr channel in the DCT domain, changing the LSB of each channel according to the bits of the disparity information. The performance of the presented method is confirmed by several computer experiments.
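The core LSB-embedding idea (writing disparity bits into the least significant bits of the reference image's channels) can be sketched as below. This toy version works on raw channel samples rather than DCT coefficients and ignores the disparity-adaptive channel assignment, so it only illustrates the hiding mechanism, not the paper's full scheme.

```python
import numpy as np

def embed_bits(channels, bits):
    """Overwrite the LSB of the first len(bits) samples (row-major) with bits,
    changing each carrier sample by at most 1."""
    flat = channels.reshape(-1).copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(channels.shape)

def extract_bits(channels, n):
    """Read back the first n hidden bits from the LSB plane."""
    return channels.reshape(-1)[:n] & 1
```

Because only the LSB plane changes, a conventional 2D decoder displays the carrier image essentially unaltered, while a 3D-aware receiver extracts the disparity bits and reconstructs the second view.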

  3. Hybrid-Based Dense Stereo Matching

    NASA Astrophysics Data System (ADS)

    Chuang, T. Y.; Ting, H. W.; Jaw, J. J.

    2016-06-01

Stereo matching that generates accurate and dense disparity maps is an indispensable technique for the 3D exploitation of imagery in the fields of computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes still cause problems and await better treatment. In this paper, a hybrid method based on semi-global matching (SGM) is presented to tackle the challenges of dense stereo matching. To ease the sensitivity of SGM cost aggregation to its penalty parameters, a formal way of providing proper penalty estimates is proposed. To this end, the study employs shape-adaptive cross-based matching with an edge constraint to generate an initial disparity map for penalty estimation. Image edges, indicating the potential locations of occlusions as well as disparity discontinuities, are detected by the edge drawing algorithm to ensure that the local support regions do not cover significant disparity changes. In addition, an extra penalty parameter Pe is imposed on the energy function of the SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting the values derived from both the SGM cost aggregation and U-SURF matching, providing more reliable estimates in disparity discontinuity areas. Evaluations on the Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potency of the hybrid-based dense stereo matching method.

  4. STEREO In-situ Data Analysis

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Davis, A. J.; Russell, C. T.

    2006-12-01

STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long-duration, detailed observations of 1 AU magnetic field structures, plasma and suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The PLASTIC instrument takes plasma ion composition measurements, completing STEREO's comprehensive in-situ perspective. Stereoscopic/3D information from the STEREO SECCHI imagers and the SWAVES radio experiment makes it possible to use both multipoint and quadrature studies to connect interplanetary coronal mass ejections (ICMEs) and solar wind structures to CMEs and coronal holes observed at the Sun. The uniqueness of the STEREO mission requires novel data analysis tools and techniques to take advantage of the mission's full scientific potential. An interactive browser with the ability to create publication-quality plots has been developed that integrates STEREO's in-situ data with data from a variety of other missions, including WIND and ACE. An application program interface (API) is also provided, allowing users to create custom software that ties directly into STEREO's data set. The API allows more advanced forms of data mining than are currently available through most web-based data services. A variety of data access techniques and the development of cross-spacecraft data analysis tools allow the larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and therefore to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.

  5. Markerless Knee Joint Position Measurement Using Depth Data during Stair Walking

    PubMed Central

    Mita, Akira; Yorozu, Ayanori; Takahashi, Masaki

    2017-01-01

Climbing and descending stairs are demanding daily activities, and monitoring them may reveal the presence of musculoskeletal diseases at an early stage. A markerless system is needed to monitor such stair-walking activity without mentally or physically disturbing the subject. Microsoft Kinect v2 has been used for gait monitoring, as it provides a markerless skeleton-tracking function. However, few studies have used this device to monitor stair walking, and the accuracy of its skeleton tracking during stair walking has not been evaluated. Moreover, skeleton tracking is unlikely to be suitable for estimating body joints during stair walking, as the form of the body differs from that during walking on level surfaces. In this study, a new method of estimating the 3D position of the knee joint was devised that uses the depth data of Kinect v2. The accuracy of this method was compared with that of the skeleton-tracking function of Kinect v2 by simultaneously measuring subjects with a 3D motion capture system. The depth-data method was found to be more accurate than skeleton tracking: its mean 3D Euclidean distance error was 43.2 ± 27.5 mm, while that of skeleton tracking was 50.4 ± 23.9 mm. This method indicates the possibility of monitoring stair walking for the early discovery of musculoskeletal diseases. PMID:29165396

  6. Possibilities and limitations of current stereo-endoscopy.

    PubMed

    Mueller-Richter, U D A; Limberger, A; Weber, P; Ruprecht, K W; Spitzer, W; Schilling, M

    2004-06-01

Stereo-endoscopy has become a commonly used technology, yet in many comparative studies striking advantages of stereo-endoscopy over two-dimensional presentation could not be proven. The aim of this article is to show the potential of this technology and the fields in which it can be further improved. The physiological basis of three-dimensional vision and the limitations of current stereo-endoscopes are discussed, and fields for further research are indicated. New developments in spatial picture acquisition and spatial picture presentation are discussed. The current limitations of stereo-endoscopy, which prevent a better ranking in comparative studies against two-dimensional presentation, are mainly due to insufficient picture acquisition. Devices for three-dimensional picture presentation are at a more advanced developmental stage than devices for three-dimensional picture acquisition. Further research should therefore emphasize the development of new devices for three-dimensional picture acquisition.

  7. Cloud photogrammetry with dense stereo for fisheye cameras

    NASA Astrophysics Data System (ADS)

    Beekmans, Christoph; Schneider, Johannes; Läbe, Thomas; Lennefer, Martin; Stachniss, Cyrill; Simmer, Clemens

    2016-11-01

We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km² using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows the recovery of a detailed and more complete cloud morphology compared with previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived and used, for example, for radiation closure under cloudy conditions. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view in a single image; however, the computation of dense 3-D information is more complicated, and standard implementations for dense 3-D stereo reconstruction cannot be applied directly. Together with an appropriate camera calibration, which includes the internal camera geometry and the global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D reconstruction of the clouds located around the cameras. We implement and evaluate the proposed approach using real-world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated by a lidar ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.

  8. Fluoroscopic tumor tracking for image-guided lung cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Lin, Tong; Cerviño, Laura I.; Tang, Xiaoli; Vasconcelos, Nuno; Jiang, Steve B.

    2009-02-01

Accurate lung tumor tracking in real time is a keystone of image-guided radiotherapy of lung cancers. Existing lung tumor tracking approaches can be roughly grouped into three categories: (1) deriving the tumor position from external surrogates; (2) tracking implanted fiducial markers fluoroscopically or electromagnetically; (3) fluoroscopically tracking the lung tumor without implanted fiducial markers. The first approach suffers from insufficient accuracy, while the second may not be widely accepted due to the risk of pneumothorax. Previous studies of fluoroscopic markerless tracking are mainly based on template matching methods, which may fail when the tumor boundary is unclear in fluoroscopic images. In this paper we propose a novel markerless tumor tracking algorithm, which employs the correlation between the tumor position and surrogate anatomic features in the image. The positions of the surrogate features are not directly tracked; instead, we use principal component analysis of regions of interest containing them to obtain parametric representations of their motion patterns. The tumor position can then be predicted from the parametric representations of the surrogates through regression. Four regression methods were tested in this study: linear and two-degree polynomial regression, an artificial neural network (ANN) and a support vector machine (SVM). The experimental results, based on fluoroscopic sequences of ten lung cancer patients, demonstrate a mean tracking error of 2.1 pixels and a maximum error at a 95% confidence level of 4.6 pixels (the pixel size is about 0.5 mm) for the proposed tracking algorithm.
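The surrogate-PCA-plus-regression pipeline described above can be sketched for the linear-regression variant: PCA on vectorized regions of interest gives a parametric representation of surrogate motion, and least squares maps those scores to tumor position. The component count `k`, the helper names and the ROI handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_tracker(rois, positions, k=3):
    """PCA on vectorized surrogate ROI frames, then least-squares linear
    regression from the first k PCA scores to the known tumor positions."""
    X = rois.reshape(len(rois), -1).astype(float)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    comps = Vt[:k]                       # principal motion patterns
    scores = (X - mu) @ comps.T          # parametric representation per frame
    A = np.hstack([scores, np.ones((len(scores), 1))])  # bias column
    W, *_ = np.linalg.lstsq(A, positions, rcond=None)
    return mu, comps, W

def predict(roi, mu, comps, W):
    """Project a new ROI onto the motion components and regress the position."""
    s = (roi.reshape(-1).astype(float) - mu) @ comps.T
    return np.hstack([s, 1.0]) @ W
```

Swapping the closing least-squares step for a polynomial, ANN or SVM regressor yields the other three variants compared in the paper.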

  9. 3D reconstruction of the optic nerve head using stereo fundus images for computer-aided diagnosis of glaucoma

    NASA Astrophysics Data System (ADS)

    Tang, Li; Kwon, Young H.; Alward, Wallace L. M.; Greenlee, Emily C.; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.

    2010-03-01

The shape of the optic nerve head (ONH) is reconstructed automatically from stereo fundus color images by a robust stereo matching algorithm, which is needed for a quantitative estimate of the amount of nerve fiber loss in patients with glaucoma. Compared with natural-scene stereo, fundus images are noisy because of the limits on illumination conditions and the imperfections of the optics of the eye, posing challenges to conventional stereo matching approaches. In this paper, multiscale pixel feature vectors that are robust to noise are formulated using a combination of pixel intensity and gradient features in scale space. Feature vectors associated with potential correspondences are compared with a disparity-based matching score. The deep structures of the optic disc are reconstructed with a stack of disparity estimates in scale space. Optical coherence tomography (OCT) data were collected at the same time, and depth information from 3D segmentation was registered with the stereo fundus images to provide the ground truth for performance evaluation. In experiments, the proposed algorithm produces estimates of the shape of the ONH that are close to the OCT-based shape, and it shows great potential to aid the computer-aided diagnosis of glaucoma and other related retinal diseases.

  10. An assembly system based on industrial robot with binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. First, binocular stereo vision with a visual attention mechanism model is used to quickly locate the image regions that contain the electronic parts and components. Second, a deep neural network is adopted to recognize the features of the electronic parts and components. Third, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.

  11. A mixed reality approach for stereo-tomographic quantification of lung nodules.

    PubMed

    Chen, Mianyi; Kalra, Mannudeep K; Yun, Wenbing; Cong, Wenxiang; Yang, Qingsong; Nguyen, Terry; Wei, Biao; Wang, Ge

    2016-05-25

    To reduce the radiation dose and the equipment cost associated with lung CT screening, in this paper we propose a mixed-reality-based nodule measurement method with an active-shutter stereo imaging system. Without involving hundreds of projection views and subsequent image reconstruction, we generate two projections of an iteratively placed ellipsoidal volume in the field of view and merge these synthetic projections with two original CT projections. We then demonstrate the feasibility of measuring the position and size of a nodule by observing, through active-shutter 3D vision glasses, whether the projections of the ellipsoidal volume and the nodule overlap in the human observer's visual perception. The average errors of the measured nodule parameters are less than 1 mm in the simulated experiment with 8 viewers, suggesting that the method could accurately measure real nodules in experiments with physically measured projections.

  12. Stereographic observations from geosynchronous satellites - An important new tool for the atmospheric sciences

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.

    1981-01-01

    Observations of cloud geometry from scan-synchronized stereo geostationary satellites, whose images have a horizontal spatial resolution of approximately 0.5 km and a temporal resolution of up to 3 min, are presented. The stereo method does not require a cloud with known emissivity to be in equilibrium with an atmosphere with a known vertical temperature profile. It is shown that absolute accuracies of about 0.5 km are possible. Qualitative and quantitative representations of atmospheric dynamics were shown by remapping, display, and stereo image analysis on an interactive computer/imaging system. Applications of stereo observations include: (1) cloud top height contours of severe thunderstorms and hurricanes, (2) cloud top and base height estimates for cloud-wind height assignment, (3) cloud growth measurements for severe thunderstorm overshooting towers, (4) atmospheric temperature from stereo heights and infrared cloud top temperatures, and (5) cloud emissivity estimation. Recommendations are given for future improvements in stereo observations, including a third GOES satellite, operational scan synchronization of all GOES satellites, and higher-resolution sensors.

  13. Calibration of stereo rigs based on the backward projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin

    2016-08-01

    High-accuracy 3D measurement based on a binocular vision system depends heavily on the accurate calibration of the two rigidly fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee minimal 2D pixel errors, not minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera using the planar constraints provided by the planar pattern target. Then, combined with pre-defined spatial points, the intrinsic and extrinsic parameters of the stereo rig can be optimized by minimizing the total 3D errors of both the left and right cameras. An extensive performance study of the method in the presence of image noise and lens distortions is carried out. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.

  14. Geo-Referenced Dynamic Pushbroom Stereo Mosaics for 3D and Moving Target Extraction - A New Geometric Approach

    DTIC Science & Technology

    2009-12-01

    facilitating reliable stereo matching, occlusion handling, accurate 3D reconstruction and robust moving target detection. We use the fact that all the... a moving platform, we will have to naturally and effectively handle obvious motion parallax and object occlusions in order to be able to detect... facilitating reliable stereo matching, occlusion handling, accurate 3D reconstruction and robust moving target detection. Based on the above two

  15. Stereo- and regio-selective one-pot synthesis of triazole-based unnatural amino acids and β- amino triazoles

    EPA Science Inventory

    Synthesis of triazole-based unnatural amino acids and β-amino triazoles has been described via a stereo- and regio-selective one-pot multi-component reaction of sulfamidates, sodium azide, and alkynes under MW conditions. The developed method is applicable to a broad substrate scope a...

  16. SU-G-JeP1-11: Feasibility Study of Markerless Tracking Using Dual Energy Fluoroscopic Images for Real-Time Tumor-Tracking Radiotherapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiinoki, T; Shibuya, K; Sawada, A

    Purpose: A new real-time tumor-tracking radiotherapy (RTRT) system was installed in our institution. This system consists of two x-ray tubes and color image intensifiers (I.I.s). A fiducial marker implanted near the tumor was tracked using color fluoroscopic images. However, implantation of the fiducial marker is very invasive. Color fluoroscopic images improve recognition of the tumor, but they are not suitable for tracking the tumor without a fiducial marker. The purpose of this study was to investigate the feasibility of markerless tracking using dual-energy color fluoroscopic images for a real-time tumor-tracking radiotherapy system. Methods: Color fluoroscopic images of static and moving phantoms containing a simulated tumor (30 mm diameter sphere) were experimentally acquired using the RTRT system. The programmable respiratory motion phantom was driven with a sinusoidal pattern in the cranio-caudal direction (amplitude: 20 mm, period: 4 s). The x-ray conditions were set to 55 kV, 50 mA and 105 kV, 50 mA for low and high energy, respectively. Dual-energy images were calculated by weighted logarithmic subtraction of the high- and low-energy RGB images. The usefulness of dual-energy imaging for real-time tracking with an automated template-matching algorithm was investigated. Results: The proposed dual-energy subtraction improved the contrast between tumor and background by suppressing bone structure. For the static phantom, the results showed high tracking accuracy using dual-energy subtraction images. For the moving phantom, the results showed good tracking accuracy using dual-energy subtraction images; however, tracking accuracy depended on tumor position, tumor size and x-ray conditions. Conclusion: We demonstrated the feasibility of markerless tracking using dual-energy fluoroscopic images for a real-time tumor-tracking radiotherapy system. Further investigation of the tracking accuracy of the proposed dual-energy subtraction images for clinical cases is needed.
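
    The weighted logarithmic subtraction at the core of this approach can be sketched as follows; the weight and the attenuation values in the comments are illustrative assumptions, not the study's calibrated settings:

```python
import numpy as np

def dual_energy_subtract(high, low, w=0.5, eps=1e-6):
    """Weighted logarithmic subtraction of high- and low-energy frames.

    Bone attenuates low-energy x-rays much more strongly than soft tissue,
    so choosing w to match the bone contrast of the two log images cancels
    bony anatomy and leaves soft-tissue structures such as the tumor.

    high, low : float arrays of normalized detector intensities in (0, 1].
    w         : subtraction weight (tuned per acquisition; assumed here).
    """
    log_high = np.log(np.clip(high, eps, None))
    log_low = np.log(np.clip(low, eps, None))
    return log_high - w * log_low
```

    With a Beer-Lambert intensity model, a bone whose attenuation coefficient doubles from high to low energy vanishes exactly at w = 0.5, while tissue with a different energy dependence survives the subtraction.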

  17. TH-AB-202-01: Daily Lung Tumor Motion Characterization On EPIDs Using a Markerless Tiling Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozario, T; University of Texas at Dallas, Richardson, TX; Chiu, T

    Purpose: Tracking lung tumor motion in real time allows for target dose escalation while simultaneously reducing dose to sensitive structures, thus increasing local control without increasing toxicity. We present a novel intra-fractional markerless lung tumor tracking algorithm using MV treatment beam images acquired during treatment delivery. Strong signals superimposed on the tumor significantly reduce soft-tissue resolution, while the different imaging modalities involved introduce global imaging discrepancies; both reduce comparison accuracy. A simple yet elegant tiling algorithm is reported to overcome these issues. Methods: MV treatment beam images were acquired continuously in beam's eye view (BEV) by an electronic portal imaging device (EPID) during treatment and analyzed to obtain tumor positions on every frame. Every frame of the MV image was simulated by a composite of two components with separate digitally reconstructed radiographs (DRRs): all non-moving structures and the tumor. The tiling algorithm divides the global composite DRR and the corresponding MV projection into sub-images called tiles. Rigid registration is performed independently on tile pairs in order to improve local soft-tissue resolution. This enables the composite DRR to be transformed accurately to match the MV projection and attain a high correlation value through a pixel-based linear transformation. The highest cumulative correlation for all tile pairs achieved over a user-defined search range indicates the 2-D coordinates of the tumor location on the MV projection. Results: The algorithm was successfully applied to cine-mode BEV images acquired during two SBRT plans, each delivered five times with different motion patterns to each of two phantoms. Approximately 15,000 beam's eye view images were analyzed, and tumor locations were successfully identified on every projection with a maximum/average error of 1.8 mm / 1.0 mm. Conclusion: Despite the presence of strong anatomical signals overlapping the tumor images, this markerless detection algorithm accurately tracks intra-fractional lung tumor motion. This project is partially supported by an Elekta research grant.
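
    A minimal sketch of the tile-wise registration idea, assuming grayscale arrays for the composite DRR and the MV projection (the actual algorithm's transformation model and correlation accumulation across tiles are more involved):

```python
import numpy as np

def best_tile_shift(drr, mv, tile, search=3):
    """Exhaustively search integer shifts of one DRR tile against the MV
    projection and return the shift with the highest normalized
    cross-correlation -- a toy 2-D analogue of per-tile rigid registration."""
    r0, r1, c0, c1 = tile
    ref = drr[r0:r1, c0:c1]
    best, best_shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            if r0 + dr < 0 or c0 + dc < 0:
                continue                       # window would leave the image
            win = mv[r0 + dr:r1 + dr, c0 + dc:c1 + dc]
            if win.shape != ref.shape:
                continue
            a = ref - ref.mean()
            b = win - win.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            ncc = (a * b).sum() / denom if denom > 0 else 0.0
            if ncc > best:
                best, best_shift = ncc, (dr, dc)
    return best_shift, best
```

    Repeating this per tile and accumulating the correlation scores over a search range is the essence of locating the tumor tile against the strong overlapping anatomy.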

  18. Low-cost, portable, robust and high-resolution single-camera stereo-DIC system and its application in high-temperature deformation measurements

    NASA Astrophysics Data System (ADS)

    Chi, Yuxi; Yu, Liping; Pan, Bing

    2018-05-01

    A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.

  19. Visual homing with a pan-tilt based stereo camera

    NASA Astrophysics Data System (ADS)

    Nirmal, Paramesh; Lyons, Damian M.

    2013-01-01

    Visual homing is a navigation method that compares a stored image of the goal location with the current view to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both methods aim at determining the distance and direction to the goal location. Navigational algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale change information (Homing in Scale Space, HiSS) from SIFT. HiSS uses SIFT feature scale change information to determine the distance between the robot and the goal location. Since the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images combined with stereo data obtained from the stereo camera to extend the SIFT keypoint vector to include a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera. We evaluate the performance of both methods using a set of performance measures described in this paper.
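
    The benefit of the added depth parameter can be illustrated with a toy translation-only homing computation (a simplification that assumes the current and goal views share the same orientation, which the full method does not require):

```python
import numpy as np

def home_vector(current_pts, goal_pts):
    """Each matched static landmark L observed from the current pose c and
    the goal pose g gives (L - c) - (L - g) = g - c, so averaging the
    difference of depth-augmented feature positions over all matches
    estimates the translation from the current location to the goal."""
    return np.mean(np.asarray(current_pts) - np.asarray(goal_pts), axis=0)
```

    With scale-only cues this translation can only be estimated coarsely; metric depth from the stereo camera makes the same computation quantitative.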

  20. System Design, Calibration and Performance Analysis of a Novel 360° Stereo Panoramic Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Blaser, S.; Nebiker, S.; Cavegn, S.

    2017-05-01

    Image-based mobile mapping systems enable the efficient acquisition of georeferenced image sequences, which can later be exploited in cloud-based 3D geoinformation services. In order to provide 360° coverage with accurate 3D measuring capabilities, we present a novel 360° stereo panoramic camera configuration. By using two 360° panorama cameras tilted forward and backward in combination with conventional forward- and backward-looking stereo camera systems, we achieve full 360° multi-stereo coverage. We furthermore developed a fully operational new mobile mapping system based on our proposed approach, which fulfils our high accuracy requirements. We successfully implemented a rigorous sensor and system calibration procedure, which allows calibrating all stereo systems with superior accuracy compared to that of previous work. Our study delivered absolute 3D point accuracies in the range of 4 to 6 cm and relative accuracies of 3D distances in the range of 1 to 3 cm. These results were achieved in a challenging urban area. Furthermore, we automatically reconstructed a 3D city model of our study area by employing all captured and georeferenced mobile mapping imagery. The result is a highly detailed and almost complete 3D city model of the street environment.

  1. Precise visual navigation using multi-stereo vision and landmark matching

    NASA Astrophysics Data System (ADS)

    Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh

    2007-04-01

    Traditional vision-based navigation systems often drift over time. In this paper, we propose a set of techniques that greatly reduce long-term drift and also improve robustness to many failure conditions. First, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps increase the pose estimation accuracy as well as reduce failure situations. Second, a global landmark matching technique is used to recognize previously visited locations during navigation. Using the matched landmarks, a pose correction technique eliminates the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1~5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments on a human-worn system show that the location can be estimated within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.

  2. Role of stereoscopic imaging in the astronomical study of nearby stars and planetary systems

    NASA Astrophysics Data System (ADS)

    Mark, David S.; Waste, Corby

    1997-05-01

    The development of stereoscopic imaging as a 3D spatial mapping tool for planetary science is now beginning to find greater usefulness in the study of stellar atmospheres and planetary systems in general. For the first time, telescopes and accompanying spectrometers have demonstrated the capacity to depict the gyrating motion of nearby stars so precisely as to derive the existence of closely orbiting Jovian-type planets, which are gravitationally influencing the motion of the parent star. Also for the first time, remote spaceborne telescopes, unhindered by atmospheric effects, are recording and tracking the rotational characteristics of our nearby star, the sun, so accurately as to reveal and identify in great detail the heightened turbulence of the sun's corona. In order to perform new forms of stereo imaging and 3D reconstruction with such large-scale objects as stars and planets within solar systems, a set of geometrical parameters must be observed, and these are illustrated here. The behavior of nearby stars can be studied over time using an astrometric approach, making use of the earth's orbital path as a semi-yearly stereo base for the viewing telescope. As is often the case in this method, the resulting stereo angle becomes too narrow to afford a beneficial stereo view, given the star's distance and the general level of detected noise in the signal. With the advent of new earth-based and spaceborne interferometers, operating within various wavelengths including IR, the capability of detecting and assembling the full 3-dimensional axes of motion of nearby gyrating stars can be achieved. In addition, the coupling of large interferometers with combined data sets can provide large stereo bases and low signal noise to produce converging 3-dimensional stereo views of nearby planetary systems.
    Several groups of new astronomical stereo imaging data sets are presented, including 3D views of the sun taken by the Solar and Heliospheric Observatory, coincident stereo views of the planet Jupiter during the impact of comet Shoemaker-Levy 9, taken by the Galileo spacecraft and the Hubble Space Telescope, as well as views of nearby stars. Spatial ambiguities arising in singular 2-dimensional viewpoints are shown to be resolvable in twin-perspective, 3-dimensional stereo views. Stereo imaging of this nature therefore occupies a complementary role in astronomical observing, provided the proper fields of view correspond with the path of the orbital geometry of the observing telescope.

  3. Stereo Viewing Modulates Three-Dimensional Shape Processing During Object Recognition: A High-Density ERP Study

    PubMed Central

    2017-01-01

    The role of stereo disparity in the recognition of 3-dimensional (3D) object shape remains an unresolved issue for theoretical models of the human visual system. We examined this issue using high-density (128 channel) recordings of event-related potentials (ERPs). A recognition memory task was used in which observers were trained to recognize a subset of complex, multipart, 3D novel objects under conditions of either (bi-) monocular or stereo viewing. In a subsequent test phase they discriminated previously trained targets from untrained distractor objects that shared either local parts, 3D spatial configuration, or neither dimension, across both previously seen and novel viewpoints. The behavioral data showed a stereo advantage for target recognition at untrained viewpoints. ERPs showed early differential amplitude modulations to shape similarity defined by local part structure and global 3D spatial configuration. This occurred initially during an N1 component around 145–190 ms poststimulus onset, and then subsequently during an N2/P3 component around 260–385 ms poststimulus onset. For mono viewing, amplitude modulation during the N1 was greatest between targets and distractors with different local parts for trained views only. For stereo viewing, amplitude modulation during the N2/P3 was greatest between targets and distractors with different global 3D spatial configurations and generalized across trained and untrained views. The results show that image classification is modulated by stereo information about the local part and global 3D spatial configuration of object shape. The findings challenge current theoretical models that do not attribute functional significance to stereo input during the computation of 3D object shape. PMID:29022728

  4. StereoGene: rapid estimation of genome-wide correlation of continuous or interval feature data.

    PubMed

    Stavrovskaya, Elena D; Niranjan, Tejasvi; Fertig, Elana J; Wheelan, Sarah J; Favorov, Alexander V; Mironov, Andrey A

    2017-10-15

    Genomic features with similar genome-wide distributions are generally hypothesized to be functionally related; for example, colocalization of histones and transcription start sites indicates chromatin regulation of transcription factor activity. Therefore, statistical algorithms to perform spatial, genome-wide correlation among genomic features are required. Here, we propose a method, StereoGene, that rapidly estimates genome-wide correlation among pairs of genomic features. These features may represent high-throughput data mapped to a reference genome or sets of genomic annotations in that reference genome. StereoGene enables correlation of continuous data directly, avoiding data binarization and the subsequent information loss. Correlations are computed among neighboring genomic positions using kernel correlation. Representing the correlation as a function of genome position, StereoGene outputs the local correlation track as part of the analysis. StereoGene also accounts for confounders such as input DNA by partial correlation. We apply our method to numerous comparisons of ChIP-Seq datasets from the Human Epigenome Atlas and FANTOM CAGE to demonstrate its wide applicability. We observe changes in the correlation between epigenomic features across developmental trajectories of several tissue types consistent with known biology, and we find a novel spatial correlation of CAGE clusters with donor splice sites and with poly(A) sites. These analyses provide examples of the broad applicability of StereoGene for regulatory genomics. The StereoGene C++ source code, program documentation, Galaxy integration scripts and examples are available from the project homepage http://stereogene.bioinf.fbb.msu.ru/. favorov@sensi.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
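
    A simplified sketch of kernel correlation between two continuous tracks, using small NumPy arrays as stand-ins for genome-wide coverage (an illustration of the idea, not StereoGene's actual implementation):

```python
import numpy as np

def kernel_correlation(track_a, track_b, sigma=2.0):
    """Correlate two continuous tracks after smoothing one with a Gaussian
    kernel, so nearby (not just identical) positions contribute to the
    correlation -- the key difference from a naive position-wise Pearson."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    smoothed_b = np.convolve(track_b, kernel, mode="same")
    a = track_a - track_a.mean()
    b = smoothed_b - smoothed_b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

    Two features whose peaks sit a few bases apart score near zero under a naive position-wise correlation but positively under the kernel version, which is what makes spatial colocalization detectable.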

  5. Joint histogram-based cost aggregation for stereo matching.

    PubMed

    Min, Dongbo; Lu, Jiangbo; Do, Minh N

    2013-10-01

    This paper presents a novel method for performing efficient cost aggregation in stereo matching. The cost aggregation problem is reformulated from the perspective of a histogram, giving us the potential to significantly reduce the complexity of cost aggregation in stereo matching. Unlike previous methods, which have tried to reduce the complexity in terms of the size of the image and the matching window, our approach focuses on reducing the computational redundancy that exists across the search range, caused by repeated filtering for all the disparity hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The tradeoff between accuracy and complexity is extensively investigated by varying the parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity and outperforms existing local methods. This paper also provides new insights into complexity-constrained stereo-matching algorithm design.
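
    For context, the conventional local pipeline that the paper accelerates can be sketched as follows: an absolute-difference cost volume is box-filtered once per disparity hypothesis (the per-hypothesis redundancy the paper targets) before winner-takes-all selection. This baseline sketch is not the paper's histogram-based method itself:

```python
import numpy as np

def box_mean(a, win):
    """Mean over a (2*win+1)^2 window via an integral image,
    O(1) per pixel independent of the window size."""
    k = 2 * win + 1
    pad = np.pad(a, win, mode="edge")
    S = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    H, W = a.shape
    return (S[k:k + H, k:k + W] - S[:H, k:k + W]
            - S[k:k + H, :W] + S[:H, :W]) / (k * k)

def wta_disparity(left, right, max_disp, win=3):
    """Baseline local stereo: per-disparity absolute-difference costs,
    one filtering pass per hypothesis, winner-takes-all selection."""
    H, W = left.shape
    cost = np.full((max_disp + 1, H, W), np.inf)
    for d in range(max_disp + 1):
        diff = np.abs(left[:, d:] - right[:, :W - d])
        cost[d, :, d:] = box_mean(diff, win)   # repeated for every d
    return np.argmin(cost, axis=0)
```

    The loop filters the volume once per disparity hypothesis; reformulating this aggregation as a histogram is what lets the paper remove that repeated work.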

  6. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    PubMed

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
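
    The final reconstruction step can be sketched with the standard atmospheric scattering model; the depth-from-disparity constants below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def defog(image, disparity, A, focal_baseline=100.0, beta=0.02, t_min=0.1):
    """Recover scene radiance from a foggy frame given a disparity map.

    Depth is inverse to disparity (z = f*b / d), transmission follows the
    exponential scattering model t = exp(-beta * z), and the scattering
    model I = J*t + A*(1 - t) is inverted for the fog-free image J.
    focal_baseline and beta are illustrative, assumed constants."""
    z = focal_baseline / np.maximum(disparity, 1e-6)   # depth from disparity
    t = np.maximum(np.exp(-beta * z), t_min)           # clamp to avoid noise blow-up
    return (image - A) / t[..., None] + A
```

    Clamping the transmission at t_min is the usual guard against amplifying noise in far-away (low-disparity) regions.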

  7. Three-Dimensional Reconstruction from a Single Image Based on a Combination of CNN and Multi-Spectral Photometric Stereo.

    PubMed

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-03-02

    Multi-spectral photometric stereo can recover pixel-wise surface normals from a single RGB image. The difficulty lies in that the intensity in each channel is a tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using a deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict the initial normals of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.

  8. Three-Dimensional Reconstruction from a Single Image Based on a Combination of CNN and Multi-Spectral Photometric Stereo

    PubMed Central

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-01-01

    Multi-spectral photometric stereo can recover pixel-wise surface normals from a single RGB image. The difficulty lies in that the intensity in each channel is a tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using a deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict the initial normals of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods. PMID:29498703

  9. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    PubMed Central

    Park, Jinho; Park, Hasil

    2017-01-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826

  10. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, which captures surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.

  11. Multithreaded hybrid feature tracking for markerless augmented reality.

    PubMed

    Lee, Taehee; Höllerer, Tobias

    2009-01-01

    We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.

  12. Shape and rotational elements of comet 67P/ Churyumov-Gerasimenko derived by stereo-photogrammetric analysis of OSIRIS NAC image data

    NASA Astrophysics Data System (ADS)

    Preusker, Frank; Scholten, Frank; Matz, Klaus-Dieter; Roatsch, Thomas; Willner, Konrad; Hviid, Stubbe; Knollenberg, Jörg; Kührt, Ekkehard; Sierks, Holger

    2015-04-01

    The European Space Agency's Rosetta spacecraft is equipped with the OSIRIS imaging system, which consists of a wide-angle and a narrow-angle camera (WAC and NAC). After the approach phase, Rosetta was inserted into a descent trajectory towards comet 67P/Churyumov-Gerasimenko (C-G) in early August 2014. Until early September, OSIRIS acquired several hundred NAC images of C-G's surface at different scales (from ~5 m/pixel during approach to ~0.9 m/pixel during descent). In that one-month observation period, the surface was imaged several times within different mapping sequences. With the comet's rotation period of ~12.4 h and the low spacecraft velocity (< 1 m/s), the entire NAC dataset provides multiple NAC stereo coverage, adequate for stereo-photogrammetric (SPG) analysis towards the derivation of 3D surface models. We constrained the OSIRIS NAC images with our stereo requirements (15° < stereo angles < 45°, incidence angles < 85°, emission angles < 45°, differences in illumination < 10°, scale better than 5 m/pixel) and extracted about 220 NAC images that provide at least triple stereo coverage of the entire illuminated surface in about 250 independent multi-stereo image combinations. For each image combination we determined tie points by multi-image matching in order to set up a 3D control network and a dense surface point cloud for the precise reconstruction of C-G's shape. The control point network defines the input for a stereo-photogrammetric least-squares adjustment. Based on the statistical analysis of the adjustments, we first refined C-G's rotational state (pole orientation and rotational period) and its behavior over time.
Based upon this description of the orientation of C-G's body-fixed reference frame, we derived corrections for the nominal navigation data (pointing and position) within a final stereo-photogrammetric block adjustment where the mean 3D point accuracy of more than 100 million surface points has been improved from ~10 m to the sub-meter range. We finally applied point filtering and interpolation techniques to these surface 3D points and show the resulting SPG-based 3D surface model with a lateral sampling rate of about 2 m.

  13. The research on calibration methods of dual-CCD laser three-dimensional human face scanning system

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong

    2013-09-01

    In this paper, building on the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select a reference camera coordinate system as the world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification, which defines the corresponding epipolar equation of the two cameras. Using the trigonometric parallax method, we can measure the position of a space point after distortion correction and achieve stereo matching calibration between two image points. Experiments verify that this method improves accuracy while guaranteeing system stability. The stereo matching calibration has a simple, low-cost process and simplifies regular maintenance work: it can acquire 3D coordinates with only a planar checkerboard calibration, without the need to design a specific standard target or use an electronic theodolite. During the experiments it was found that two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, which combines active line-laser scanning and binocular stereo vision, has the advantages of both and a more flexible applicability. Theoretical analysis and experiments show that the method is reasonable.
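
For a rectified stereo pair, the trigonometric parallax measurement mentioned above reduces to the standard depth-from-disparity relation Z = f·B/d. A minimal sketch with made-up numbers (not the calibration values of this system):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified-stereo triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def point_from_pixel(u, v, disparity_px, focal_px, baseline_m, cu, cv):
    """Back-project pixel (u, v) with known disparity to a 3-D point
    in the left-camera frame; (cu, cv) is the principal point."""
    z = depth_from_disparity(disparity_px, focal_px, baseline_m)
    return ((u - cu) * z / focal_px, (v - cv) * z / focal_px, z)

# Example: f = 800 px, baseline B = 0.12 m, disparity d = 16 px.
x, y, z = point_from_pixel(720.0, 540.0, 16.0, 800.0, 0.12, 640.0, 480.0)
```

After epipolar rectification, matching is restricted to horizontal scanlines, which is what makes this one-line triangulation valid.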

  14. Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry

    NASA Astrophysics Data System (ADS)

    Kersten, J.; Rodehorst, V.

    2016-06-01

    Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimates, usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem of monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time capable techniques for outlier detection and drift reduction in frame-to-frame VO are available, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
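
The RANSAC component cited above follows the generic hypothesize-and-verify pattern: fit a model to a random minimal sample, count inliers, and keep the largest consensus set. A minimal sketch on a toy 2-D line-fitting problem (illustrative only, not the paper's relative-orientation formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_line(pts, iters=200, tol=0.1):
    """Fit y = m*x + c to 2-D points, robust to gross outliers."""
    best_inliers = None
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue  # degenerate minimal sample
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        err = np.abs(pts[:, 1] - (m * pts[:, 0] + c))
        inliers = err < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares refit on the consensus set.
    m, c = np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], 1)
    return m, c

# 20 points on y = 2x + 1 plus 5 gross outliers.
x = np.linspace(0.0, 1.0, 20)
pts = np.column_stack([x, 2 * x + 1])
pts = np.vstack([pts, [[0.2, 9], [0.4, -7], [0.6, 8], [0.8, -6], [0.5, 10]]])
m, c = ransac_line(pts)
```

For relative orientation the minimal sample is a set of point correspondences rather than two points, but the loop structure is the same.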

  15. Burns Cliff in Color Stereo

    NASA Image and Video Library

    2006-07-10

    NASA Mars Exploration Rover Opportunity captured a sweeping stereo image of Burns Cliff after driving right to the base of this southeastern portion of the inner wall of Endurance Crater in November 2004. 3D glasses are necessary to view this image.

  16. Attenuating Stereo Pixel-Locking via Affine Window Adaptation

    NASA Technical Reports Server (NTRS)

    Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.

    2006-01-01

    For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity, given an initial integer disparity map, involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking,' which produces artificially peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it can correct much larger initial disparity errors than previous approaches, and it is more general, as it applies not only to the ground plane.
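
The parabola-fitting baseline the paper improves on can be stated in a few lines; this is the textbook three-point sub-pixel refinement, not the authors' Lucas-Kanade-based method:

```python
def subpixel_parabola(c_prev, c_min, c_next):
    """Fit a parabola through the matching costs at disparities
    d-1, d, d+1 (where d is the integer cost minimum) and return
    the vertex offset from d, in the range [-0.5, 0.5]:
        offset = (c_prev - c_next) / (2 * (c_prev - 2*c_min + c_next))
    """
    denom = c_prev - 2.0 * c_min + c_next
    if denom == 0.0:
        return 0.0  # flat cost: no sub-pixel information
    return 0.5 * (c_prev - c_next) / denom

# Costs sampled from (d - 3.25)^2 around the integer minimum d = 3:
# c(2) = 1.5625, c(3) = 0.0625, c(4) = 0.5625.
offset = subpixel_parabola(1.5625, 0.0625, 0.5625)
```

The histogram of these offsets over an image is what exhibits pixel-locking: offsets cluster near 0 instead of spreading uniformly over [-0.5, 0.5].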

  17. Research on three-dimensional reconstruction method based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision with broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction and stereo matching. In the calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are used respectively, and their performance is compared. After the feature points are matched, the correspondence between matching points and 3D object points, i.e., the 3D information, can be built using the calibrated camera parameters.

  18. STEREO In-situ Data Analysis

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Davis, A. J.; Russell, C. T.

    2007-05-01

    STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long-duration, detailed observations of 1 AU magnetic field structures, plasma and suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The PLASTIC instrument takes plasma ion composition measurements, completing STEREO's comprehensive in-situ perspective. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment makes it possible to use both multipoint and quadrature studies to connect interplanetary coronal mass ejections (ICMEs) and solar wind structures to CMEs and coronal holes observed at the Sun. The uniqueness of the STEREO mission requires novel data analysis tools and techniques to take advantage of the mission's full scientific potential. An interactive browser with the ability to create publication-quality plots has been developed which integrates STEREO's in-situ data with data from a variety of other missions including WIND and ACE. Static summary plots and a key-parameter type data set with a related online browser provide alternative data access. Finally, an application program interface (API) is provided allowing users to create custom software that ties directly into STEREO's data set. The API allows for more advanced forms of data mining than currently available through most web-based data services. A variety of data access techniques and the development of cross-spacecraft data analysis tools allow the larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.

  19. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system's measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to an inaccurate imaging model and distortion elimination. The proposed calibration method compensates system distortion with an iterative algorithm instead of the conventional distortion mathematical model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen through a reflection off a markerless flat mirror. An iterative algorithm is proposed to compensate system distortion and to optimize the camera imaging parameters and the system's geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak-to-valley) measurement error for a flat mirror can be reduced from 282 nm, obtained with the conventional calibration approach, to 69.7 nm by applying the proposed method.

  20. Self calibration of the stereo vision system of the Chang'e-3 lunar rover based on the bundle block adjustment

    NASA Astrophysics Data System (ADS)

    Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan

    2017-06-01

    The Chang'e-3 was the first lunar soft-landing probe of China, composed of a lander and a lunar rover. The Chang'e-3 successfully landed in the northwest of the Mare Imbrium on December 14, 2013. The lunar rover carried out movement, imaging and geological survey tasks after landing. The lunar rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism and an inertial measurement unit (IMU). The Navcam system was composed of two fixed-focal-length cameras. The mast mechanism was a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEM) of the surrounding region and plan the moving paths of the lunar rover. The stereo vision system must be calibrated before use. A control field could be built to calibrate the stereo vision system in a laboratory on Earth. However, the parameters of the stereo vision system would change after the launch, the orbital changes, the braking and the landing. Therefore, the stereo vision system should be self-calibrated on the moon. An integrated self-calibration method based on the bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. The stereo vision system can be self-calibrated with the proposed method under the unknown lunar environment, and all parameters can be estimated simultaneously. An experiment was conducted in a ground lunar simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method and the weighted least-squares method. The analysis proved that the accuracy of the proposed method was superior to those of the other methods. Finally, the proposed method was applied in practice to self-calibrate the stereo vision system of the Chang'e-3 lunar rover on the moon.

  1. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panfil, J; Patel, R; Surucu, M

    Purpose: To compare markerless template-based tracking of lung tumors using dual energy (DE) cone-beam computed tomography (CBCT) projections versus single energy (SE) CBCT projections. Methods: A RANDO chest phantom with a simulated tumor in the upper right lung was used to investigate the effectiveness of tumor tracking using DE and SE CBCT projections. Planar kV projections from CBCT acquisitions were captured at 60 kVp (4 mAs) and 120 kVp (1 mAs) using the Varian TrueBeam and non-commercial iTools Capture software. Projections were taken at approximately every 0.53° while the gantry rotated. Due to limitations of the phantom, angles for which the shoulders blocked the tumor were excluded from tracking analysis. DE images were constructed using a weighted logarithmic subtraction that removed bony anatomy while preserving soft tissue structures. The tumors were tracked separately on DE and SE (120 kVp) images using a template-based tracking algorithm. The tracking results were compared to ground truth coordinates designated by a physician. Matches with a distance of greater than 3 mm from ground truth were designated as failing to track. Results: 363 frames were analyzed. The algorithm successfully tracked the tumor on 89.8% (326/363) of DE frames compared to 54.3% (197/363) of SE frames (p<0.0001). Average distance between tracking and ground truth coordinates was 1.27 ± 0.67 mm for DE versus 1.83 ± 0.74 mm for SE (p<0.0001). Conclusion: This study demonstrates the effectiveness of markerless template-based tracking using DE CBCT. DE imaging resulted in better detectability with more accurate localization on average versus SE. Supported by a grant from Varian Medical Systems.
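
The weighted logarithmic subtraction step can be sketched as follows; the attenuation coefficients, path lengths, and weight below are hypothetical numbers chosen to show how a suitable weight cancels bone, not values from the study:

```python
import numpy as np

def dual_energy_subtract(high_kvp, low_kvp, w, eps=1e-6):
    """Weighted logarithmic subtraction of a high-kVp and a low-kVp
    projection: DE = ln(I_high) - w * ln(I_low).  Choosing w to match
    bone's high/low attenuation ratio cancels bone, while soft tissue
    (with a different ratio) is preserved."""
    return np.log(np.maximum(high_kvp, eps)) - w * np.log(np.maximum(low_kvp, eps))

# One detector pixel behind 2 cm of "bone" and 3 cm of "soft tissue",
# with hypothetical (high-kVp, low-kVp) attenuation coefficients.
t_bone, t_soft = 2.0, 3.0
mu = {"bone": (0.5, 1.0), "soft": (0.2, 0.25)}
i_high = np.exp(-(mu["bone"][0] * t_bone + mu["soft"][0] * t_soft))
i_low = np.exp(-(mu["bone"][1] * t_bone + mu["soft"][1] * t_soft))
w = mu["bone"][0] / mu["bone"][1]  # 0.5 exactly cancels the bone term
de = dual_energy_subtract(np.array([i_high]), np.array([i_low]), w)
# de retains only the soft-tissue term: -(0.2 - 0.5 * 0.25) * t_soft
```

With the bone term cancelled, the template matcher sees soft-tissue contrast (including the tumor) without rib shadows sweeping across it as the gantry rotates.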

  5. Efficient hybrid monocular-stereo approach to on-board video-based traffic sign detection and tracking

    NASA Astrophysics Data System (ADS)

    Marinas, Javier; Salgado, Luis; Arróspide, Jon; Camplani, Massimo

    2012-01-01

    In this paper we propose an innovative method for the automatic detection and tracking of road traffic signs using an onboard stereo camera. It involves a combination of monocular and stereo analysis strategies to increase the reliability of the detections, such that it can boost the performance of any traffic sign recognition scheme. First, an adaptive color- and appearance-based detection is applied at the single-camera level to generate a set of traffic sign hypotheses. In turn, stereo information allows for sparse 3D reconstruction of potential traffic signs through a SURF-based matching strategy. Namely, the plane that best fits the cloud of 3D points traced back from feature matches is estimated using a RANSAC-based approach to improve robustness to outliers. Temporal consistency of the 3D information is ensured through a Kalman-based tracking stage. This also allows for the generation of a predicted 3D traffic sign model, which is in turn used to enhance the previously mentioned color-based detector through a feedback loop, thus improving detection accuracy. The proposed solution has been tested with real sequences under several illumination conditions and in both urban areas and highways, achieving very high detection rates in challenging environments, including rapid motion and significant perspective distortion.

  6. Stereo matching using census cost over cross window and segmentation-based disparity refinement

    NASA Astrophysics Data System (ADS)

    Li, Qingwu; Ni, Jinyan; Ma, Yunpeng; Xu, Jinxin

    2018-03-01

    Stereo matching is a vital requirement for many applications, such as three-dimensional (3-D) reconstruction, robot navigation, object detection, and industrial measurement. To improve the practicability of stereo matching, a method using census cost over a cross window and segmentation-based disparity refinement is proposed. First, a cross window is obtained using distance difference and intensity similarity in the binocular images. The census cost over the cross window and a color cost are combined as the matching cost, which is aggregated by the guided filter. Then, a winner-takes-all strategy is used to calculate the initial disparities. Second, a graph-based segmentation method is combined with color and edge information to achieve moderate under-segmentation. The segmented regions are classified into reliable regions and unreliable regions by consistency checking. Finally, the two kinds of regions are optimized by plane fitting and propagation, respectively, to match the ambiguous pixels. Experimental results on the Middlebury stereo datasets show that the proposed method performs well in occluded and discontinuous regions, and that it obtains smoother disparity maps with a lower average matching error rate than other algorithms.
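
The census cost named above encodes each pixel's neighbourhood as a bit string and compares codes by Hamming distance. A minimal sketch with a plain square window (the paper's cross-window support and color term are not reproduced here; borders wrap around, which is fine for illustration):

```python
import numpy as np

def census_transform(img, win=3):
    """Census transform: each pixel gets a bit string recording whether
    each neighbour in a win x win window is darker than the centre.
    Comparing codes by Hamming distance is robust to radiometric
    differences between the two stereo views."""
    r = win // 2
    codes = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            codes = (codes << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return codes

def hamming_cost(code_a, code_b):
    """Matching cost between two census codes."""
    return bin(int(code_a) ^ int(code_b)).count("1")

# Example: codes for a 5x5 intensity gradient.
grad = np.arange(25, dtype=float).reshape(5, 5)
codes = census_transform(grad)
cost = hamming_cost(codes[2, 2], codes[2, 3])
```

Because only intensity *orderings* enter the code, a global gain or offset between the left and right images leaves the cost unchanged.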

  7. Accuracy and robustness evaluation in stereo matching

    NASA Astrophysics Data System (ADS)

    Nguyen, Duc M.; Hanca, Jan; Lu, Shao-Ping; Schelkens, Peter; Munteanu, Adrian

    2016-09-01

    Stereo matching has received a lot of attention from the computer vision community, thanks to its wide range of applications. Despite the large variety of algorithms that have been proposed so far, it is not trivial to select suitable algorithms for the construction of practical systems. One of the main problems is that many algorithms lack sufficient robustness when employed in various operational conditions. This is because most of the methods proposed in the literature are tested and tuned to perform well on one specific dataset. To alleviate this problem, an extensive evaluation of the accuracy and robustness of state-of-the-art stereo matching algorithms is presented. Three datasets (Middlebury, KITTI, and MPEG FTV) representing different operational conditions are employed. Based on the analysis, improvements over existing algorithms have been proposed. The experimental results show that our improved versions of cross-based and cost volume filtering algorithms outperform the original versions by large margins on the Middlebury and KITTI datasets. In addition, the latter of the two proposed algorithms ranks among the best local stereo matching approaches on the KITTI benchmark. Under evaluations using specific settings for depth-image-based-rendering applications, our improved belief propagation algorithm is less complex than MPEG's FTV depth estimation reference software (DERS), while yielding similar depth estimation performance. Finally, several conclusions on stereo matching algorithms are presented.

  8. Early detection of glaucoma using fully automated disparity analysis of the optic nerve head (ONH) from stereo fundus images

    NASA Astrophysics Data System (ADS)

    Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.

    2006-03-01

    Early detection of structural damage to the optic nerve head (ONH) is critical in the diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability. Even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours, which computes accumulated disparities in the disc and cup regions from stereo fundus image pairs, has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming camera geometry. High correlation between computer-generated and manually segmented cup-to-disc ratios in a longitudinal study involving 159 stereo fundus image pairs has already been demonstrated. However, the clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates the subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective and quantitative method for detection of ONH structural damage for early detection of glaucoma.

  9. Ranging through Gabor logons-a consistent, hierarchical approach.

    PubMed

    Chang, C; Chatterjee, S

    1993-01-01

    In this work, the correspondence problem in stereo vision is handled by matching two sets of dense feature vectors. Inspired by biological evidence, these feature vectors are generated by a correlation between a bank of Gabor sensors and the intensity image. The sensors consist of two-dimensional Gabor filters at various scales (spatial frequencies) and orientations, which bear close resemblance to the receptive field profiles of simple V1 cells in the visual cortex. A hierarchical, stochastic relaxation method is then used to obtain the dense stereo disparities. Unlike traditional hierarchical methods for stereo, feature-based hierarchical processing yields consistent disparities. To avoid false matches due to static occlusion, a dual matching, based on the imaging geometry, is used.
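
A two-dimensional Gabor sensor of the kind described, a sinusoidal carrier under a Gaussian envelope, can be sketched as follows; the window size, wavelengths, and the sigma-to-wavelength ratio are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """2-D Gabor filter: a cosine carrier at orientation theta,
    modulated by an isotropic Gaussian envelope, resembling the
    receptive field profile of a V1 simple cell."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # coords rotated into
    yr = -x * np.sin(theta) + y * np.cos(theta)  # the filter's frame
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

# A small bank over scales (wavelengths) and orientations, analogous
# to the sensor bank correlated with the intensity image.
bank = [gabor_kernel(15, wl, th, sigma=0.56 * wl)
        for wl in (4.0, 8.0)
        for th in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```

Correlating an image with every kernel in the bank yields, at each pixel, a feature vector of filter responses; matching those dense vectors between the two views is the correspondence step described above.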

  10. On-patient see-through augmented reality based on visual SLAM.

    PubMed

    Mahmoud, Nader; Grasa, Óscar G; Nicolau, Stéphane A; Doignon, Christophe; Soler, Luc; Marescaux, Jacques; Montiel, J M M

    2017-01-01

    An augmented reality system to visualize a 3D preoperative anatomical model on the intra-operative patient is proposed. The only hardware requirement is a commercial tablet-PC equipped with a camera; neither an external tracking device nor artificial landmarks on the patient are required. We resort to visual SLAM to provide markerless real-time tablet-PC camera location with respect to the patient. The preoperative model is registered with respect to the patient through 4-6 anchor points. The anchors correspond to anatomical references selected on the tablet-PC screen at the beginning of the procedure. Accurate and real-time preoperative model alignment (approximately 5-mm mean FRE and TRE) was achieved, even when anchors were not visible in the current field of view. The system has been experimentally validated on human volunteers, in vivo pigs and a phantom. The proposed system can be smoothly integrated into the surgical workflow because it: (1) operates in real time, (2) requires minimal additional hardware (only a tablet-PC with a camera), (3) is robust to occlusion, and (4) requires minimal interaction from the medical staff.

  11. Three-camera stereo vision for intelligent transportation systems

    NASA Astrophysics Data System (ADS)

    Bergendahl, Jason; Masaki, Ichiro; Horn, Berthold K. P.

    1997-02-01

    A major obstacle in the application of stereo vision to intelligent transportation systems is its high computational cost. In this paper, a PC-based three-camera stereo vision system constructed with off-the-shelf components is described. The system serves as a tool for developing and testing robust algorithms which approach real-time performance. We present an edge-based, subpixel stereo algorithm which is adapted to permit accurate distance measurements to objects in the field of view using a compact camera assembly. Once computed, the 3D scene information may be directly applied to a number of in-vehicle applications, such as adaptive cruise control, obstacle detection, and lane tracking. Moreover, since the largest computational cost is incurred in generating the 3D scene information, multiple applications that leverage this information can be implemented in a single system with minimal additional cost. On-road applications, such as vehicle counting and incident detection, are also possible. Preliminary in-vehicle road trial results are presented.

  12. What is stereoscopic vision good for?

    NASA Astrophysics Data System (ADS)

    Read, Jenny C. A.

    2015-03-01

    Stereo vision is a resource-intensive process. Nevertheless, it has evolved in many animals including mammals, birds, amphibians and insects. It must therefore convey significant fitness benefits. It is often assumed that the main benefit is improved accuracy of depth judgments, but camouflage breaking may be as important, particularly in predatory animals. In humans, for the last 150 years, stereo vision has been turned to a new use: helping us reproduce visual reality for artistic purposes. By recreating the different views of a scene seen by the two eyes, stereo achieves unprecedented levels of realism. However, it also has some unexpected effects on viewer experience. The disruption of established mechanisms for interpreting pictures may be one reason why some viewers find stereoscopic content disturbing. Stereo vision also has uses in ophthalmology. Clinical stereoacuity tests are used in the management of conditions such as strabismus and amblyopia as well as vision screening. Stereoacuity can reveal the effectiveness of therapy and even predict long-term outcomes post surgery. Yet current clinical stereo tests fall far short of the accuracy and precision achievable in the lab. At Newcastle University, we are exploiting the recent availability of autostereo 3D tablet computers to design a clinical stereotest app in the form of a game suitable for young children. Our goal is to enable quick, accurate and precise stereoacuity measures which will enable clinicians to obtain better outcomes for children with visual disorders.

  13. Research on the feature set construction method for spherical stereo vision

    NASA Astrophysics Data System (ADS)

    Zhu, Junchao; Wan, Li; Röning, Juha; Feng, Weijia

    2015-01-01

    Spherical stereo vision is a stereo vision system built from fish-eye lenses, and its study concerns stereo algorithms that conform to the spherical model. Epipolar geometry describes the relationship between the two imaging planes of a stereo vision system based on the perspective projection model. In an uncorrected fish-eye image, however, an epipolar line is not a straight line but an arc that intersects the poles: a polar curve. In this paper, the theory of nonlinear epipolar geometry is explored, and a nonlinear epipolar rectification method is proposed to eliminate the vertical parallax between two fish-eye images. Maximally Stable Extremal Regions (MSER) takes gray level as the independent variable and uses local extrema of the area variation as detections. The literature shows that MSER depends only on the gray-level variations of an image, not on local structural characteristics or image resolution. Here, MSER is combined with the proposed nonlinear epipolar rectification method: the intersection of the rectified epipolar curve and the corresponding MSER region is taken as the feature set for spherical stereo vision. Experiments show that this approach achieves the expected results.

  14. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera to put forward a low-cost method of stereo imaging. First, from the principles of geometrical optics we derive the relationship between the prism single-camera system and a dual-camera system, and from the principles of binocular vision we derive the relationship between the two eyes and a dual-camera system. We can thus relate the prism single-camera system to binocular viewing and obtain the positions of prism, camera, and object that give the best stereo display. Finally, using NVIDIA active shutter stereo glasses, we realize a three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various ways the eyes observe a scene. A stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.

  15. 3D digital image correlation using single color camera pseudo-stereo system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single-color-camera 3D-DIC setup using a reflection-based pseudo-stereo system is proposed. Compared to the conventional single-camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system captures both views with the whole CCD chip, without reducing the spatial resolution. In addition, as in a conventional 3D-DIC system, the center of the two views lies at the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two overlapped views on the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be applied directly for evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.

  16. Development of a teaching system for an industrial robot using stereo vision

    NASA Astrophysics Data System (ADS)

    Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki

    1997-12-01

    Teaching and playback is the predominant technique for programming industrial robots, but it takes considerable time and effort. In this study, a new teaching algorithm using stereo vision, based on human demonstrations in front of two cameras, is proposed. In the proposed algorithm, the robot is controlled repetitively according to angles determined by fuzzy set theory until it reaches the teaching point indicated by an operator through the cameras. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibration is needed, because fuzzy set theory, which can express the control commands to the robot qualitatively, is used instead of conventional kinematic equations. A simple and easy teaching operation is thus realized. Simulations and experiments were performed on the proposed teaching system, and the test data confirmed the usefulness of the design.

  17. The effect of decreasing computed tomography dosage on radiostereometric analysis (RSA) accuracy at the glenohumeral joint.

    PubMed

    Fox, Anne-Marie V; Kedgley, Angela E; Lalone, Emily A; Johnson, James A; Athwal, George S; Jenkyn, Thomas R

    2011-11-10

    Standard, beaded radiostereometric analysis (RSA) and markerless RSA often use computed tomography (CT) scans to create three-dimensional (3D) bone models. However, ethical concerns exist due to risks associated with CT radiation exposure. Therefore, the aim of this study was to investigate the effect of decreasing CT dosage on RSA accuracy. Four cadaveric shoulder specimens were scanned using a normal-dose CT protocol and two low-dose protocols, where the dosage was decreased by 89% and 98%. 3D computer models of the humerus and scapula were created using each CT protocol. Bi-planar fluoroscopy was used to image five different static glenohumeral positions and two dynamic glenohumeral movements, of which a total of five static and four dynamic poses were selected for analysis. For standard RSA, negligible differences were found in bead (0.21±0.31 mm) and bony landmark (2.31±1.90 mm) locations when the CT dosage was decreased by 98% (p-values>0.167). For markerless RSA kinematic results, excellent agreement was found between the normal-dose and lowest-dose protocols, with all Spearman rank correlation coefficients greater than 0.95. Average root mean squared errors of 1.04±0.68 mm and 2.42±0.81° were also found at this reduced dosage for static positions. In summary, CT dosage can be markedly reduced when performing shoulder RSA to minimize the risks of radiation exposure. Standard RSA accuracy was negligibly affected by the 98% CT dose reduction, and for markerless RSA the benefits of decreasing CT dosage to the subject outweigh the introduced errors.

  18. Feasibility Study for Markerless Tracking of Lung Tumors in Stereotactic Body Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richter, Anne, E-mail: richter_a3@klinik.uni-wuerzburg.d; Wilbert, Juergen; Baier, Kurt

    2010-10-01

    Purpose: To evaluate the feasibility and accuracy of a method for markerless tracking of lung tumors in electronic portal imaging device (EPID) movies and to analyze intra- and interfractional variations in tumor motion. Methods and Materials: EPID movies were acquired during stereotactic body radiotherapy (SBRT) given to 40 patients with 49 pulmonary targets and retrospectively analyzed. Tumor visibility and tracking accuracy were determined by three observers. Tumor motion of 30 targets was analyzed in detail via four-dimensional computed tomography (4DCT) and EPID in the superior-inferior direction for intra- and interfractional variations. Results: Tumor visibility was sufficient for markerless tracking in 47% of the EPID movies. Tumor size and visibility in the DRR were correlated with visibility in the EPID images. The difference between automatic and manual tracking was at most 2 mm for 98.3% of cases in the x direction and 89.4% in the y direction. Motion amplitudes in 4DCT images (range, 0.7-17.9 mm; median, 4.9 mm) were closely correlated with amplitudes in the EPID movies. Intrafractional and interfractional variability of tumor motion amplitude were of similar magnitude: 1 mm on average, to a maximum of 4 mm. A change in moving average of more than ±1 mm, ±2 mm, and ±4 mm was observed in 47.1%, 17.1%, and 4.5% of treatment time, respectively, across all trajectories. Mean tumor velocity was 3.4 mm/sec, with a maximum of 61 mm/sec. Conclusions: Tracking of pulmonary tumors in EPID images without implanted markers was feasible in 47% of all treatment beams. 4DCT is representative of mean breathing motion on average, but larger deviations in target motion occurred between treatment planning and delivery, supporting the monitoring of tumor motion during delivery.
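    The moving-average criterion quoted above (±1/±2/±4 mm thresholds) can be illustrated with a short sketch. The window length and the choice of the trajectory's overall mean as the reference position are assumptions, not details given in the abstract.

```python
import numpy as np

def moving_average(x: np.ndarray, win: int) -> np.ndarray:
    """Moving average via convolution (window size is an assumption)."""
    kernel = np.ones(win) / win
    return np.convolve(x, kernel, mode="valid")

def fraction_exceeding(trajectory_mm: np.ndarray, threshold_mm: float,
                       win: int = 10) -> float:
    """Fraction of samples whose moving average drifts more than
    +/- threshold from the trajectory's overall mean position."""
    ma = moving_average(trajectory_mm, win)
    return float(np.mean(np.abs(ma - trajectory_mm.mean()) > threshold_mm))
```

    On a synthetic trajectory that shifts position mid-fraction, the fraction of time exceeding a tight threshold is large while a loose threshold is rarely exceeded, mirroring the 47.1%/17.1%/4.5% pattern reported above.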

  19. Vision-based mapping with cooperative robots

    NASA Astrophysics Data System (ADS)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
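    An occupancy grid of the kind the robots share can be sketched with a standard log-odds update; the increment values, clamping limits, and grid size below are assumptions, not parameters from the paper.

```python
import numpy as np

# Minimal log-odds occupancy grid. Stereo "hits" raise a cell's log-odds,
# rays passing through lower it; clamping keeps cells revisable.
L_HIT, L_MISS = 0.9, -0.4   # log-odds increments for occupied / free evidence
L_MIN, L_MAX = -4.0, 4.0    # clamp limits

def update_cell(grid: np.ndarray, ij: tuple, occupied: bool) -> None:
    delta = L_HIT if occupied else L_MISS
    grid[ij] = np.clip(grid[ij] + delta, L_MIN, L_MAX)

def occupancy_prob(grid: np.ndarray) -> np.ndarray:
    """Convert log-odds back to probability of occupancy."""
    return 1.0 / (1.0 + np.exp(-grid))

grid = np.zeros((50, 50))          # log-odds 0 corresponds to p = 0.5 (unknown)
update_cell(grid, (10, 10), True)  # stereo hit: evidence the cell is occupied
update_cell(grid, (10, 5), False)  # ray passed through: evidence it is free
```

    The additive log-odds form also makes the conservative map-merging described above simple: posted updates from each robot can be summed into the shared map.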

  20. Research of flaw image collecting and processing technology based on multi-baseline stereo imaging

    NASA Astrophysics Data System (ADS)

    Yao, Yong; Zhao, Jiguang; Pang, Xiaoyan

    2008-03-01

    Addressing the practical demands of gun-bore flaw image collection, such as accurate optical design, complex algorithms, and precise technical requirements, the design framework of a 3-D image collecting and processing system based on multi-baseline stereo imaging is presented in this paper. The system mainly comprises a computer, an electrical control box, a stepping motor, and a CCD camera, and it realizes image collection, stereo matching, 3-D information reconstruction, and post-processing. Theoretical analysis and experimental results show that images collected by this system are precise and that it can efficiently resolve the matching ambiguity produced by uniform or repeated textures. At the same time, the system offers faster measurement speed and higher measurement precision.

  1. SVM based colon polyps classifier in a wireless active stereo endoscope.

    PubMed

    Ayoub, J; Granado, B; Mhanna, Y; Romain, O

    2010-01-01

    This work focuses on the recognition of three-dimensional colon polyps captured by an active stereo vision sensor. The detection algorithm consists of an SVM classifier trained on robust feature descriptors. The study relates to Cyclope, a prototype sensor that allows real-time 3D object reconstruction and continues to be optimized to improve its classification of hyperplastic versus adenomatous polyps. Experimental results were encouraging, showing a correct classification rate of approximately 97%. The work contains detailed statistics on detection rate and computational complexity. Inspired by the intensity histogram, the work presents a new approach that extracts a set of features based on a depth histogram and combines stereo measurements with SVM classifiers to correctly classify benign and malignant polyps.
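    As a rough illustration of pairing a depth-histogram descriptor with an SVM, the sketch below extracts a normalized depth histogram and trains a tiny linear SVM by subgradient descent. The bin count, depth range, and training scheme are assumptions; the paper's actual descriptor and kernel may differ.

```python
import numpy as np

def depth_histogram(depth_patch: np.ndarray, bins: int = 16,
                    d_range: tuple = (0.0, 50.0)) -> np.ndarray:
    """Normalized histogram of depth values -- a stand-in for the
    depth-histogram descriptor described in the abstract."""
    h, _ = np.histogram(depth_patch, bins=bins, range=d_range)
    return h / max(h.sum(), 1)

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Tiny subgradient trainer for a linear SVM (hinge loss);
    labels y must be +/-1. Illustrative only."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1.0:                 # sample inside margin: push it out
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                w -= lr * lam * w            # only regularization shrinkage
    return w, b

def predict(w, b, X):
    return np.sign(X @ w + b)
```

    On synthetic "patches" drawn at two distinct depths, the histograms occupy different bins and a linear separator is easily found.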

  2. TOPSAT: Global space topographic mission

    NASA Technical Reports Server (NTRS)

    Vetrella, Sergio

    1993-01-01

    Viewgraphs on TOPSAT Global Space Topographic Mission are presented. Topics covered include: polar region applications; terrestrial ecosystem applications; stereo electro-optical sensors; space-based stereoscopic missions; optical stereo approach; radar interferometry; along track interferometry; TOPSAT-VISTA system approach; ISARA system approach; topographic mapping laser altimeter; and role of multi-beam laser altimeter.

  3. Markerless client-server augmented reality system with natural features

    NASA Astrophysics Data System (ADS)

    Ning, Shuangning; Sang, Xinzhu; Chen, Duo

    2017-10-01

    A markerless client-server augmented reality system is presented. In this research, the more extensive and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality. The head-mounted display presents an image directly in front of the viewer's eyes, while its front-facing camera captures video into the workstation. The generated virtual scene is merged with the outside-world information received from the camera, and the integrated video is sent to the helmet display system. The distinguishing feature and novelty is the realization of augmented reality with natural features instead of markers, which addresses the limitations of markers: they are restricted to black and white, are unsuited to varied environmental conditions, and in particular fail when partially occluded. Furthermore, 3D stereoscopic perception of a virtual animated model is achieved. A high-speed and stable native socket communication method is adopted for transmitting the key video stream data, which reduces the computational burden of the system.

  4. A natural user interface to integrate citizen science and physical exercise.

    PubMed

    Palermo, Eduardo; Laut, Jeffrey; Nov, Oded; Cappa, Paolo; Porfiri, Maurizio

    2017-01-01

    Citizen science enables volunteers to contribute to scientific projects, where massive data collection and analysis are often required. Volunteers participate in citizen science activities online from their homes or in the field and are motivated by both intrinsic and extrinsic factors. Here, we investigated the possibility of integrating citizen science tasks within physical exercises envisaged as part of a potential rehabilitation therapy session. The citizen science activity entailed environmental mapping of a polluted body of water using a miniature instrumented boat, which was remotely controlled by the participants through their physical gesture tracked by a low-cost markerless motion capture system. Our findings demonstrate that the natural user interface offers an engaging and effective means for performing environmental monitoring tasks. At the same time, the citizen science activity increases the commitment of the participants, leading to a better motion performance, quantified through an array of objective indices. The study constitutes a first and necessary step toward rehabilitative treatments of the upper limb through citizen science and low-cost markerless optical systems.

  5. Parallax scanning methods for stereoscopic three-dimensional imaging

    NASA Astrophysics Data System (ADS)

    Mayhew, Christopher A.; Mayhew, Craig M.

    2012-03-01

    Under certain circumstances, conventional stereoscopic imagery is subject to being misinterpreted. Stereo perception created from two static horizontally separated views can create a "cut out" 2D appearance for objects at various planes of depth. The subject volume looks three-dimensional, but the objects themselves appear flat. This is especially true if the images are captured using small disparities. One potential explanation for this effect is that, although three-dimensional perception comes primarily from binocular vision, a human's gaze (the direction and orientation of a person's eyes with respect to their environment) and head motion also contribute additional sub-process information. The absence of this information may be the reason that certain stereoscopic imagery appears "odd" and unrealistic. Another contributing factor may be the absence of vertical disparity information in a traditional stereoscopy display. Recently, Parallax Scanning technologies have been introduced, which provide (1) a scanning methodology, (2) incorporate vertical disparity, and (3) produce stereo images with substantially smaller disparities than the human interocular distances. To test whether these three features would improve the realism and reduce the cardboard cutout effect of stereo images, we have applied Parallax Scanning (PS) technologies to commercial stereoscopic digital cinema productions and have tested the results with a panel of stereo experts. These informal experiments show that the addition of PS information into the left and right image capture improves the overall perception of three-dimensionality for most viewers. Parallax scanning significantly increases the set of tools available for 3D storytelling while at the same time presenting imagery that is easy and pleasant to view.

  6. Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming

    NASA Astrophysics Data System (ADS)

    Lee, Hyunki; Kim, Min Young; Moon, Jeon Il

    2017-12-01

    Phase measuring profilometry and moiré methodology have been widely applied to three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation called the correspondence, or 2π-ambiguity, problem. Although a sensing method combining well-known stereo vision with the phase measuring profilometry (PMP) technique has been developed to overcome this problem, it still requires definite improvement in sensing speed and measurement accuracy. We propose a dynamic programming-based stereo PMP method to acquire more reliable depth information in a relatively short time. The proposed method efficiently fuses phase and intensity information from the two stereo sensors simultaneously, based on a newly defined cost function for dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters on measurement performance and to verify the method's efficiency, accuracy, and sensing speed, a series of experimental tests was performed with various objects and sensor configurations.
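    Dynamic-programming stereo on a rectified scanline can be sketched as follows. This uses a plain intensity data term with a linear smoothness penalty; the paper's combined phase-and-intensity cost function is not reproduced here and the penalty weight is an assumption.

```python
import numpy as np

def scanline_stereo_dp(left: np.ndarray, right: np.ndarray,
                       max_disp: int, p_smooth: float = 0.5) -> np.ndarray:
    """Pick a disparity per pixel of one rectified scanline by dynamic
    programming: data cost = absolute intensity difference, plus a linear
    penalty on disparity changes between neighbouring pixels."""
    n = len(left)
    cost = np.full((n, max_disp + 1), np.inf)
    for x in range(n):
        for d in range(min(max_disp, x) + 1):
            cost[x, d] = abs(float(left[x]) - float(right[x - d]))
    acc = cost.copy()
    for x in range(1, n):
        for d in range(max_disp + 1):
            jump = np.abs(np.arange(max_disp + 1) - d) * p_smooth
            acc[x, d] = cost[x, d] + np.min(acc[x - 1] + jump)
    # Backtrack: at each pixel pick the disparity consistent with the
    # accumulated cost and the transition to the already-chosen neighbour.
    disp = np.zeros(n, dtype=int)
    disp[-1] = int(np.argmin(acc[-1]))
    for x in range(n - 2, -1, -1):
        jump = np.abs(np.arange(max_disp + 1) - disp[x + 1]) * p_smooth
        disp[x] = int(np.argmin(acc[x] + jump))
    return disp
```

    On a synthetic ramp scanline shifted by a known amount, the recovered disparity matches the shift wherever a valid match exists.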

  7. Intelligent person identification system using stereo camera-based height and stride estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo

    2005-05-01

    In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by thresholding in the YCbCr color model. By correlating this segmented face area with the right input image, the location coordinates of the target face are acquired; these values are then used to control the pan/tilt system through a modified PID-based recursive controller. Using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the camera system is calculated through triangulation. From this distance and the pan and tilt angles, the target's real position in world space is acquired, and from it the height and stride values are finally extracted. Experiments with video sequences of 16 moving persons show that a person can be identified from these extracted height and stride parameters.
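    The triangulation and height computation can be illustrated with a simplified geometry; the focal length, baseline, and tilt-angle convention below are hypothetical, and the paper's pan/tilt formulation may differ.

```python
import math

def target_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo triangulation: distance Z = f * B / d (pinhole model)."""
    return focal_px * baseline_m / disparity_px

def target_height(distance_m: float, tilt_top_deg: float, tilt_bottom_deg: float) -> float:
    """Height of a person from the camera tilt angles to the head and feet
    at a known horizontal distance (tilt measured from the horizontal,
    negative below it -- a simplified geometry)."""
    top = distance_m * math.tan(math.radians(tilt_top_deg))
    bottom = distance_m * math.tan(math.radians(tilt_bottom_deg))
    return top - bottom
```

    For instance, with an assumed 700 px focal length and 0.1 m baseline, a 35 px disparity places the target 2 m away, and the head/feet tilt angles then give the height directly.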

  8. Regio- and Stereo-Selective Oxidation of a Cardiovascular Drug, Metoprolol, Mediated by Cytochrome P450 2D and 3A Enzymes in Marmoset Livers.

    PubMed

    Uehara, Shotaro; Ishii, Sakura; Uno, Yasuhiro; Inoue, Takashi; Sasaki, Erika; Yamazaki, Hiroshi

    2017-08-01

    A β-blocker, metoprolol, is one of the in vivo probes for human cytochrome P450 (P450) 2D6. Investigation of nonhuman primate P450 enzymes helps to improve the accuracy of the extrapolation of pharmacokinetic data from animals to humans. Common marmosets (Callithrix jacchus) are a potential primate model for preclinical research, but the detailed roles of marmoset P450 enzymes in metoprolol oxidation remain unknown. In this study, the regio- and stereo-selectivity of metoprolol oxidations by a variety of P450 enzymes in marmoset and human livers were investigated in vitro. Although liver microsomes from cynomolgus monkeys and rats preferentially mediated S-metoprolol O-demethylation and R-metoprolol α-hydroxylation, respectively, those from humans, marmosets, minipigs, and dogs preferentially mediated R-metoprolol O-demethylation, in contrast to the slow rates of R- and S-metoprolol oxidation in mouse liver microsomes. R- and S-metoprolol O-demethylation activities in marmoset livers were strongly inhibited by quinidine and ketoconazole, and were significantly correlated with bufuralol 1'-hydroxylation and midazolam 1'-hydroxylation activities and also with P450 2D and 3A4 contents; this differs from human livers, where no correlation with P450 3A-mediated midazolam 1'-hydroxylation was found. Recombinant human P450 2D6 enzyme and marmoset P450 2D6/3A4 enzymes effectively catalyzed R-metoprolol O-demethylation, comparable to the activities of human and marmoset liver microsomes, respectively. These results indicated that the major roles of P450 2D enzymes in the regio- and stereo-selectivity of metoprolol oxidation are similar between human and marmoset livers, but the minor roles of P450 3A enzymes are unique to marmosets.

  9. Image matching for digital close-range stereo photogrammetry based on constraints of Delaunay triangulated network and epipolar-line

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.

    2006-10-01

    In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry because changes in gray scale or texture are not obvious in close-range stereo images. Their main shortcoming is that the geometric information of matching points is not fully used, which leads to wrong matches in regions with poor texture. To fully use both geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper, considering the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm makes three improvements. Firstly, a shape factor, fuzzy mathematics, and gray-scale projection are introduced into the design of a synthetic matching measure. Secondly, the topological connection relations of matching points in a Delaunay triangulated network, together with the epipolar line, are used to decide the matching order and narrow the search scope for the conjugate point of each matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least-squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm was applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental results show that the algorithm has higher matching speed and accuracy than a pyramid image matching algorithm based on gray-scale correlation.
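    The gray-scale part of such a matching measure can be sketched with normalized cross-correlation along a rectified epipolar row; the shape-factor and fuzzy terms from the abstract are omitted here, and the window size is an assumption.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation of two equal-size windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_along_epipolar(patch: np.ndarray, row: np.ndarray, half: int) -> int:
    """Return the column on a (rectified) epipolar row whose window best
    matches `patch` by NCC -- only the gray-scale component of the
    synthetic matching measure described in the abstract."""
    best_col, best_score = -1, -np.inf
    for c in range(half, len(row) - half):
        score = ncc(patch, row[c - half:c + half + 1])
        if score > best_score:
            best_col, best_score = c, score
    return best_col
```

    Restricting the search range around a Delaunay-predicted position, as the abstract describes, would simply shorten the loop over candidate columns.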

  10. Development Of A Flash X-Ray Scanner For Stereoradiography And CT

    NASA Astrophysics Data System (ADS)

    Endorf, Robert J.; DiBianca, Frank A.; Fritsch, Daniel S.; Liu, Wen-Ching; Burns, Charles B.

    1989-05-01

    We are developing a flash x-ray scanner for stereoradiography and CT which will be able to produce a stereoradiograph in 30 to 70 ns and a complete CT scan in one microsecond. This type of imaging device will be valuable in studying high-speed processes, high acceleration, and traumatic events. We have built a two-channel flash x-ray system capable of producing stereo radiographs with stereo angles of from 15 to 165 degrees. The dynamic and static MTFs for the flash x-ray system were measured and compared with similar MTFs measured for a conventional medical x-ray system. We have written and tested a stereo reconstruction algorithm to determine three-dimensional space points from corresponding points in the two stereo images. To demonstrate the ability of the system to image traumatic events, a radiograph was obtained of a bone undergoing a fracture. The effects of accelerations of up to 600 g were examined on radiographs taken of human kidney tissue samples in a rapidly rotating centrifuge. Feasibility studies of CT reconstruction have been performed by making simulated CT images of various phantoms for larger flash x-ray systems of from 8 to 29 flash x-ray tubes.

  11. Kinder, gentler stereo

    NASA Astrophysics Data System (ADS)

    Siegel, Mel; Tobinaga, Yoshikazu; Akiya, Takeo

    1999-05-01

    Not only binocular perspective disparity but also many secondary binocular and monocular sensory phenomena contribute to the human sensation of depth. Binocular perspective disparity is notable as the strongest depth perception factor. However, means for creating it artificially from flat image pairs are notorious for inducing physical and mental stresses, e.g., 'virtual reality sickness'. Aiming to deliver a less stressful 'kinder, gentler stereo' (KGS), we systematically examine the secondary phenomena and their synergistic combination with each other and with binocular perspective disparity. By KGS we mean a stereo capture, rendering, and display paradigm without cue conflicts, without eyewear, without viewing zones, with negligible 'lock-in' time to perceive the image in depth, and with a normal appearance for stereo-deficient viewers. To achieve KGS we employ optical and digital image processing steps that introduce distortions contrary to strict 'geometrical correctness' of binocular perspective but which nevertheless result in increased stereoscopic viewing comfort. We particularly exploit the lower limits of interocular separation, showing that unexpectedly small disparities stimulate accurate and pleasant depth sensations. Under these circumstances crosstalk is perceived as depth-of-focus rather than as ghosting. This suggests the possibility of radically new approaches to stereoview multiplexing that enable zoneless autostereoscopic display.

  12. Topographic map of the western region of Dao Vallis in Hellas Planitia, Mars; MTM 500k -40/082E OMKT

    USGS Publications Warehouse

    Rosiek, Mark R.; Redding, Bonnie L.; Galuszka, Donna M.

    2006-01-01

    This map, compiled photogrammetrically from Viking Orbiter stereo image pairs, is part of a series of topographic maps of areas of special scientific interest on Mars. Contours were derived from a digital terrain model (DTM) compiled on a digital photogrammetric workstation using Viking Orbiter stereo image pairs with orientation parameters derived from an analytic aerotriangulation. The image base for this map employs Viking Orbiter images from orbits 406 and 363. An orthophotomosaic was created on the digital photogrammetric workstation using the DTM compiled from stereo models.

  13. Transparent volume imaging

    NASA Astrophysics Data System (ADS)

    Wixson, Steve E.

    1990-07-01

    Transparent volume imaging began with the stereo x-ray in 1895 and ended for most investigators when radiation safety concerns eliminated the second view. Today, similar images can be generated by computer without safety hazards, providing improved perception and new means of image quantification. A volumetric workstation is under development based on an operational prototype. The workstation consists of multiple symbolic and numeric processors; a binocular stereo color display generator with large image memory and liquid crystal shutter; voice input and output; a 3D pointer that uses projection lenses so that structures in 3-space can be touched directly; 3D hard copy using vectograph and lenticular printing; and presentation facilities using stereo 35mm slide and stereo videotape projection. Volumetric software includes a volume window manager, Mayo Clinic's Analyze program, and our Digital Stereo Microscope (DSM) algorithms. The DSM uses stereo x-ray-like projections, rapidly oscillating motion, and focal depth cues so that detail can be studied in the spatial context of the entire data set. Focal depth cues are generated with a lens-and-aperture algorithm that produces a plane of sharp focus; multiple stereo pairs, each with a different plane of sharp focus, are generated and stored in the large memory for interactive selection using a physical or symbolic depth selector. More recent work is studying nonlinear focusing. Psychophysical studies are underway to understand how people perceive images on a volumetric display and how accurately 3-dimensional structures can be quantified from these displays.

  14. Slant Perception Under Stereomicroscopy.

    PubMed

    Horvath, Samantha; Macdonald, Kori; Galeotti, John; Klatzky, Roberta L

    2017-11-01

    Objective: These studies used threshold and slant-matching tasks to assess and quantitatively measure human perception of 3-D planar images viewed through a stereomicroscope. The results are intended for use in developing augmented-reality surgical aids. Background: Substantial research demonstrates that slant perception is performed with high accuracy from monocular and binocular cues, but less research concerns the effects of magnification. Viewing through a microscope affects the utility of monocular and stereo slant cues, but its impact is as yet unknown. Method: Participants performed a threshold slant-detection task and matched the slant of a tool to a surface. Different stimuli and monocular versus binocular viewing conditions were implemented to isolate stereo cues alone, stereo with perspective cues, the accommodation cue alone, and cues intrinsic to optical coherence tomography images. Results: At a magnification of 5x, slant thresholds with stimuli providing stereo cues approximated those reported for direct viewing, about 12°. Most participants (75%) who passed a stereoacuity pretest could match a tool to the slant of a surface viewed with stereo at 5x magnification, with a mean compressive error of about 20% for optimized surfaces. Slant matching to optical coherence tomography images of the cornea viewed under the microscope was also demonstrated. Conclusion: Despite the distortions and cue loss introduced by viewing under the stereomicroscope, most participants were able to detect and interact with slanted surfaces. Application: The experiments demonstrated sensitivity to surface slant that supports the development of augmented-reality systems to aid microscope-aided surgery.

  15. STEREO/Waves Education and Public Outreach

    NASA Astrophysics Data System (ADS)

    MacDowall, R. J.; Bougeret, J.; Bale, S. D.; Goetz, K.; Kaiser, M. L.

    2005-05-01

    We present the education and public outreach plan and activities of the STEREO/Waves (aka SWAVES) investigation. SWAVES measures radio emissions from the solar corona, interplanetary medium, and terrestrial magnetosphere, as well as in situ waves in the solar wind. In addition to the web site components that display stereo/multi-spacecraft data in graphical form and explain the science and instruments, we will focus on the following three areas of EPO: classroom demonstrations using models of the STEREO spacecraft with battery-powered radio receivers (and speakers) to illustrate spacecraft radio direction finding; teacher-developed and -tested classroom activities using SWAVES solar radio observations to motivate geometry and trigonometry; and sound-based delivery of characteristic radio and plasma wave events from the SWAVES web site for accessibility and aesthetic reasons. Examples of each element will be demonstrated.

  16. Stereo photo guide for estimating canopy fuel characteristics in conifer stands

    Treesearch

    Joe H. Scott; Elizabeth D. Reinhardt

    2005-01-01

    Stereo photographs, hemispherical photographs, and stand data are presented with associated biomass and canopy fuel characteristics for five Interior West conifer stands. Canopy bulk density, canopy base height, canopy biomass by component, available canopy fuel load, and vertical distribution of canopy fuel are presented for each plot at several stages of sampling,...

  17. Parallel Computer System for 3D Visualization Stereo on GPU

    NASA Astrophysics Data System (ADS)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on Graphics Processing Units (GPUs) for 3D stereo image synthesis. The development is based on the modified ray tracing method developed by the authors for fast search of tracing-ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. A generalized procedure for 3D stereo image synthesis on the Graphics Processing Unit/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed solutions by GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The achieved average acceleration in multi-threaded implementation on the test GPU and CPU is about 7.5 and 1.6 times, respectively. Studying the influence of the size and configuration of the computational Compute Unified Device Architecture (CUDA) network on the computational speed shows the importance of their correct selection. The obtained experimental estimations can be significantly improved by new GPUs with a large number of processing cores and multiprocessors, as well as an optimized configuration of the computing CUDA network.
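    Stereo synthesis of the kind described starts by rendering the scene from two virtual cameras offset along the view's right vector by the eye separation. A minimal sketch of that camera setup, under stated assumptions (function and parameter names are illustrative, not from the paper):

    ```python
    def stereo_cameras(center, right_vec, eye_sep):
        """Left/right virtual camera positions for stereo synthesis:
        offset the scene camera by half the eye separation along its
        right vector (assumed to be a unit vector)."""
        half = eye_sep / 2.0
        left = tuple(c - half * r for c, r in zip(center, right_vec))
        right = tuple(c + half * r for c, r in zip(center, right_vec))
        return left, right
    ```

    Each resulting view would then be ray-traced independently, which is what makes the workload parallelize well across GPU threads.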

  18. Development of 3D online contact measurement system for intelligent manufacturing based on stereo vision

    NASA Astrophysics Data System (ADS)

    Li, Peng; Chong, Wenyan; Ma, Yongjun

    2017-10-01

    In order to avoid the low efficiency and restricted measuring range of traditional 3D on-line contact measurement methods for workpiece size, the development of a novel 3D contact measurement system designed for intelligent manufacturing based on stereo vision is introduced. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six tracking markers, a touch probe and the associated electronics. During contact measurement, the probe is located by the stereo vision system via the tracking markers, and the 3D coordinates of a point on the workpiece are measured by calculating the tip position of the touch probe. Thanks to the flexibility of the probe, the orientation, range and density of the 3D contact measurement can be adapted to different needs. Applications of the developed contact measurement system to high-precision measurement and rapid surface digitization are experimentally demonstrated.
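    The tip-position calculation can be illustrated by extrapolating along the probe axis. A deliberately simplified sketch, not the authors' method: it assumes just two collinear tracked markers (the actual probe carries six) with the tip a calibrated distance beyond the second marker:

    ```python
    import math

    def probe_tip(m1, m2, tip_dist):
        """Extrapolate the stylus tip along the probe axis.

        m1, m2: 3-D positions of two markers assumed to lie on the probe
        axis (hypothetical geometry); tip_dist: calibrated distance from
        m2 to the tip, measured along the m1->m2 direction."""
        axis = [b - a for a, b in zip(m1, m2)]
        norm = math.sqrt(sum(v * v for v in axis))
        return tuple(b + tip_dist * v / norm for b, v in zip(m2, axis))
    ```

    With all six markers, the real system would instead solve for the probe's full rigid pose before applying the calibrated tip offset.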

  19. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves

    NASA Astrophysics Data System (ADS)

    Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise

    2017-10-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master for a practitioner, so that the implementation of a sea-wave 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an open-source stereo processing pipeline for sea-wave 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques, both on the disparity map and on the produced point cloud, to remove the vast majority of the erroneous points that naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step by step and demonstrated on real datasets acquired at sea.
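    One of the steps named above, mean sea-plane estimation, amounts to fitting a least-squares plane to the reconstructed point cloud. A minimal stand-in sketch (not WASS's actual code; it solves the 3×3 normal equations for z = a·x + b·y + c by Cramer's rule):

    ```python
    def _det3(m):
        """Determinant of a 3x3 matrix given as nested lists."""
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    def fit_mean_plane(points):
        """Least-squares plane z = a*x + b*y + c through (x, y, z) points."""
        sxx = sxy = syy = sx = sy = n = 0.0
        sxz = syz = sz = 0.0
        for x, y, z in points:
            sxx += x * x; sxy += x * y; syy += y * y
            sx += x; sy += y; n += 1.0
            sxz += x * z; syz += y * z; sz += z
        m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
        rhs = [sxz, syz, sz]
        det = _det3(m)
        def with_col(i):
            return [[rhs[r] if c == i else m[r][c] for c in range(3)]
                    for r in range(3)]
        return tuple(_det3(with_col(i)) / det for i in range(3))
    ```

    Distances of cloud points from this plane can then drive outlier filtering and provide the sea-surface reference for wave statistics.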

  20. Comparison of different "along the track" high resolution satellite stereo-pair for DSM extraction

    NASA Astrophysics Data System (ADS)

    Nikolakopoulos, Konstantinos G.

    2013-10-01

    The possibility to create DEMs from stereo pairs is based on the Pythagorean theorem and on the principles of photogrammetry that have been applied to aerial photograph stereo pairs for the last seventy years. The application of these principles to digital satellite stereo data was inherent in the first satellite missions. During the last decades, satellite stereo-pairs were acquired across the track on different days (SPOT, ERS etc.). More recently, same-date along-the-track stereo-data acquisition seems to prevail (Terra ASTER, SPOT5 HRS, Cartosat, ALOS PRISM), as it reduces the radiometric image variations (refractive effects, sun illumination, temporal changes) and thus increases the correlation success rate in any image matching. Two of the newest satellite sensors with stereo collection capability are Cartosat and ALOS PRISM. Both acquire stereopairs along the track with a 2.5 m spatial resolution, covering areas of 30 × 30 km. In this study we compare two different satellite stereo-pairs collected along the track for DSM creation: the first created from a Cartosat stereopair and the second from an ALOS PRISM triplet. The area of study is situated in the Chalkidiki Peninsula, Greece. Both DEMs were created using the same ground control points collected with a differential GPS. After a first check for random or systematic errors, a statistical analysis was performed. Points of certified elevation were used to estimate the accuracy of the two DSMs. The elevation difference between the DEMs was calculated. 2D RMSE, correlation and percentile values were also computed, and the results are presented.
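    The accuracy statistics described (elevation differences at certified check points, RMSE, correlation) can be sketched as follows; the function and variable names are illustrative, not from the study:

    ```python
    import math

    def dsm_accuracy(dsm_elev, gps_elev):
        """Vertical accuracy of a DSM against check points of certified
        elevation: dsm_elev and gps_elev are equal-length lists of
        elevations (m) at the same locations. Returns (RMSE, Pearson
        correlation) of the two elevation series."""
        n = len(dsm_elev)
        diffs = [d - g for d, g in zip(dsm_elev, gps_elev)]
        rmse = math.sqrt(sum(e * e for e in diffs) / n)
        mean_d = sum(dsm_elev) / n
        mean_g = sum(gps_elev) / n
        cov = sum((d - mean_d) * (g - mean_g)
                  for d, g in zip(dsm_elev, gps_elev))
        var_d = sum((d - mean_d) ** 2 for d in dsm_elev)
        var_g = sum((g - mean_g) ** 2 for g in gps_elev)
        return rmse, cov / math.sqrt(var_d * var_g)
    ```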

  1. Massive stereo-based DTM production for Mars on cloud computers

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Xiong, Si-Ting; Putri, A. R. D.; Walter, S. H. G.; Veitch-Michaelis, J.; Yershov, V.

    2018-05-01

    Digital Terrain Model (DTM) creation is essential to improving our understanding of the formation processes of the Martian surface. Although there have been previous demonstrations of open-source or commercial planetary 3D reconstruction software, planetary scientists are still struggling with creating good quality DTMs that meet their science needs, especially when there is a requirement to produce a large number of high quality DTMs using "free" software. In this paper, we describe a new open source system to overcome many of these obstacles by demonstrating results in the context of issues found from experience with several planetary DTM pipelines. We introduce a new fully automated multi-resolution DTM processing chain for NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) stereo processing, called the Co-registration Ames Stereo Pipeline (ASP) Gotcha Optimised (CASP-GO), based on the open source NASA ASP. CASP-GO employs tie-point based multi-resolution image co-registration, and Gotcha sub-pixel refinement and densification. CASP-GO pipeline is used to produce planet-wide CTX and HiRISE DTMs that guarantee global geo-referencing compliance with respect to High Resolution Stereo Colour imaging (HRSC), and thence to the Mars Orbiter Laser Altimeter (MOLA); providing refined stereo matching completeness and accuracy. All software and good quality products introduced in this paper are being made open-source to the planetary science community through collaboration with NASA Ames, United States Geological Survey (USGS) and the Jet Propulsion Laboratory (JPL), Advanced Multi-Mission Operations System (AMMOS) Planetary Data System (PDS) Pipeline Service (APPS-PDS4), as well as browseable and visualisable through the iMars web based Geographic Information System (webGIS) system.

  2. Mobile markerless augmented reality and its application in forensic medicine.

    PubMed

    Kilgus, Thomas; Heim, Eric; Haase, Sven; Prüfer, Sabine; Müller, Michael; Seitel, Alexander; Fangerau, Markus; Wiebe, Tamara; Iszatt, Justin; Schlemmer, Heinz-Peter; Hornegger, Joachim; Yen, Kathrin; Maier-Hein, Lena

    2015-05-01

    During autopsy, forensic pathologists today mostly rely on visible indication, tactile perception and experience to determine the cause of death. Although computed tomography (CT) data is often available for the bodies under examination, these data are rarely used due to the lack of radiological workstations in the pathological suite. The data may prevent the forensic pathologist from damaging evidence by allowing him to associate, for example, external wounds to internal injuries. To facilitate this, we propose a new multimodal approach for intuitive visualization of forensic data and evaluate its feasibility. A range camera is mounted on a tablet computer and positioned in a way such that the camera simultaneously captures depth and color information of the body. A server estimates the camera pose based on surface registration of CT and depth data to allow for augmented reality visualization of the internal anatomy directly on the tablet. Additionally, projection of color information onto the CT surface is implemented. We validated the system in a postmortem pilot study using fiducials attached to the skin for quantification of a mean target registration error of [Formula: see text] mm. The system is mobile, markerless, intuitive and real-time capable with sufficient accuracy. It can support the forensic pathologist during autopsy with augmented reality and textured surfaces. Furthermore, the system enables multimodal documentation for presentation in court. Despite its preliminary prototype status, it has high potential due to its low price and simplicity.
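    The reported mean target registration error is simply the average Euclidean distance between fiducial positions mapped through the registration and their reference (CT) positions; a minimal sketch, with illustrative names:

    ```python
    import math

    def mean_tre(registered, reference):
        """Mean target registration error: average Euclidean distance
        between each fiducial position after registration and its
        reference position (paired lists of 3-D points)."""
        dists = [math.dist(p, q) for p, q in zip(registered, reference)]
        return sum(dists) / len(dists)
    ```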

  3. Construction of new cloning, lacZ reporter and scarless-markerless suicide vectors for genetic studies in Aggregatibacter actinomycetemcomitans

    PubMed Central

    Juárez-Rodríguez, María Dolores; Torres-Escobar, Ascención; Demuth, Donald R.

    2013-01-01

    To elucidate the putative function of a gene, effective tools are required for genetic characterization that facilitate its inactivation, deletion or modification on the bacterial chromosome. In the present study, the nucleotide sequence of the Escherichia coli/Aggregatibacter actinomycetemcomitans shuttle vector pYGK was determined, allowing us to redesign and construct a new shuttle cloning vector, pJT4, and promoterless lacZ transcriptional/translational fusion plasmids, pJT3 and pJT5. Plasmids pJT4 and pJT5 contain the origin of replication necessary to maintain shuttle vector replication. In addition, a new suicide vector, pJT1, was constructed for the generation of scarless and markerless deletion mutations of genes in the oral pathogen A. actinomycetemcomitans. Plasmid pJT1 is a pUC-based suicide vector that is counter-selectable for sucrose sensitivity. This vector does not leave antibiotic markers or scars on the chromosome after gene deletion and thus provides the option to combine several mutations in the same genetic background. The effectiveness of pJT1 was demonstrated by the construction of A. actinomycetemcomitans isogenic qseB single deletion (ΔqseB) mutant and lsrRK double deletion mutants (ΔlsrRK). These new vectors may offer alternatives for genetic studies in A. actinomycetemcomitans and other members of the HACEK (Haemophilus spp., A. actinomycetemcomitans, Cardiobacterium hominis, Eikenella corrodens, and Kingella kingae) group of Gram-negative bacteria. PMID:23353051

  4. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking

    PubMed Central

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-01-01

    This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
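    The time-division multiplexing idea, one physical camera acting as two virtual tracking cameras, can be sketched as a toy demultiplexer, assuming the mirror alternates gaze direction every frame (even-indexed frames = left view; this layout is an assumption, not taken from the paper):

    ```python
    def demux_stereo(frames):
        """Split a time-multiplexed frame stream into virtual left/right
        views and pair them up for stereo triangulation (a trailing
        unpaired frame is dropped)."""
        left, right = frames[0::2], frames[1::2]
        return list(zip(left, right))
    ```

    Each resulting pair is what the system treats as a simultaneous stereo observation, so the effective per-view frame rate is half the mirror switching rate.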

  5. Neural architectures for stereo vision.

    PubMed

    Parker, Andrew J; Smith, Jackson E T; Krug, Kristine

    2016-06-19

    Stereoscopic vision delivers a sense of depth based on binocular information but additionally acts as a mechanism for achieving correspondence between patterns arriving at the left and right eyes. We analyse quantitatively the cortical architecture for stereoscopic vision in two areas of macaque visual cortex. For primary visual cortex V1, the result is consistent with a module that is isotropic in cortical space, with a diameter of at least 3 mm in surface extent. This implies that the module for stereo is larger than the repeat distance between ocular dominance columns in V1. By contrast, in the extrastriate cortical area V5/MT, which has a specialized architecture for stereo depth, the module for the representation of stereo is about 1 mm in surface extent, so the representation of stereo in V5/MT is more compressed than in V1 in terms of the neural wiring of the neocortex. The surface extent estimated for stereo in V5/MT is consistent with measurements of its specialized domains for binocular disparity. Within V1, we suggest that long-range horizontal anatomical connections form functional modules that serve both binocular and monocular pattern recognition: this common function may explain the distortion and disruption of monocular pattern vision observed in amblyopia. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Authors.

  6. Surface modeling method for aircraft engine blades by using speckle patterns based on the virtual stereo vision system

    NASA Astrophysics Data System (ADS)

    Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang

    2018-03-01

    A blade is one of the most important components of an aircraft engine. Due to its high manufacturing costs, it is indispensable to come up with methods for repairing damaged blades. In order to obtain a surface model of the blades, this paper proposes a modeling method by using speckle patterns based on the virtual stereo vision system. Firstly, blades are sprayed evenly creating random speckle patterns and point clouds from blade surfaces can be calculated by using speckle patterns based on the virtual stereo vision system. Secondly, boundary points are obtained in the way of varied step lengths according to curvature and are fitted to get a blade surface envelope with a cubic B-spline curve. Finally, the surface model of blades is established with the envelope curves and the point clouds. Experimental results show that the surface model of aircraft engine blades is fair and accurate.
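    The envelope-fitting step relies on cubic B-spline curves. A uniform cubic B-spline segment over four consecutive boundary points can be evaluated directly from the standard basis; this is a generic sketch of the curve type, not the authors' implementation:

    ```python
    def cubic_bspline(p0, p1, p2, p3, t):
        """Point on a uniform cubic B-spline segment defined by four
        control points (tuples of coordinates), for t in [0, 1].
        The four basis weights always sum to 1."""
        b0 = (1 - t) ** 3 / 6.0
        b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
        b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
        b3 = t ** 3 / 6.0
        return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                     for a, b, c, d in zip(p0, p1, p2, p3))
    ```

    Sliding this four-point window along the ordered boundary points produces a smooth closed envelope curve of the blade cross-section.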

  7. Real-time stereo matching using orthogonal reliability-based dynamic programming.

    PubMed

    Gong, Minglun; Yang, Yee-Hong

    2007-03-01

    A novel algorithm is presented in this paper for estimating reliable stereo matches in real time. Based on the dynamic programming-based technique we previously proposed, the new algorithm can generate semi-dense disparity maps using as few as two dynamic programming passes. The iterative best path tracing process used in traditional dynamic programming is replaced by a local minimum searching process, making the algorithm suitable for parallel execution. Most computations are implemented on programmable graphics hardware, which improves the processing speed and makes real-time estimation possible. The experiments on the four new Middlebury stereo datasets show that, on an ATI Radeon X800 card, the presented algorithm can produce reliable matches for approximately 60%-80% of pixels at a rate of approximately 10-20 frames per second. If needed, the algorithm can be configured for generating full-density disparity maps.
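    A bare-bones scanline dynamic-programming stereo, a didactic sketch of the general technique, not the paper's orthogonal reliability-based algorithm, chooses per-pixel disparities minimizing matching cost plus a smoothness penalty, optimized exactly along one scanline:

    ```python
    def scanline_dp(left, right, max_d, smooth=1.0):
        """1-D DP stereo over one scanline of grey values.
        Returns one disparity per pixel of `left`."""
        n = len(left)
        INF = float("inf")
        # matching cost: |L[x] - R[x-d]|, INF where x-d leaves the image
        cost = [[abs(left[x] - right[x - d]) if x - d >= 0 else INF
                 for d in range(max_d + 1)] for x in range(n)]
        best = [row[:] for row in cost]   # best[x][d]: min path cost ending at (x, d)
        back = [[0] * (max_d + 1) for _ in range(n)]
        for x in range(1, n):
            for d in range(max_d + 1):
                prev = min(range(max_d + 1),
                           key=lambda dp: best[x - 1][dp] + smooth * abs(d - dp))
                back[x][d] = prev
                best[x][d] += best[x - 1][prev] + smooth * abs(d - prev)
        # backtrack the minimum-cost disparity path
        d = min(range(max_d + 1), key=lambda dd: best[n - 1][dd])
        disp = [0] * n
        for x in range(n - 1, -1, -1):
            disp[x] = d
            d = back[x][d]
        return disp
    ```

    The paper's contribution is to replace exactly this kind of sequential path tracing with a local minimum search that parallelizes on graphics hardware.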

  8. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT)

    NASA Astrophysics Data System (ADS)

    Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki

    2015-05-01

    The bone suppression technique based on advanced image processing can suppress the conspicuity of bones on chest radiographs, creating soft-tissue images similar to those obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy. We demonstrated the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that of conventional fluoroscopic images. The tracking errors were decreased by half in eight of nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area, where pulmonary vessels, bronchi, and ribs show complex movements. The bone suppression technique improved tracking accuracy without special equipment or implantation of fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is a potential measure for respiratory displacement of the target. This paper was presented at RSNA 2013 and was carried out at Kanazawa University, JAPAN.
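    Template matching of the kind used for the target tracking can be sketched as an exhaustive sum-of-squared-differences (SSD) search; this is a generic stand-in for the commercial software used in the study:

    ```python
    def match_template(frame, tmpl):
        """Exhaustive SSD template matching.

        frame, tmpl: 2-D lists of grey values. Returns the (row, col) of
        the top-left corner of the best-matching window in `frame`."""
        fh, fw = len(frame), len(frame[0])
        th, tw = len(tmpl), len(tmpl[0])
        best, pos = float("inf"), (0, 0)
        for r in range(fh - th + 1):
            for c in range(fw - tw + 1):
                ssd = sum((frame[r + i][c + j] - tmpl[i][j]) ** 2
                          for i in range(th) for j in range(tw))
                if ssd < best:
                    best, pos = ssd, (r, c)
        return pos
    ```

    On bone suppression images the rib structures that would otherwise produce spurious SSD minima are attenuated, which is the mechanism behind the reported halving of tracking error.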

  9. A Measure of the Effectiveness of Incorporating 3D Human Anatomy into an Online Undergraduate Laboratory

    ERIC Educational Resources Information Center

    Hilbelink, Amy J.

    2009-01-01

    Results of a study designed to determine the effectiveness of implementing three-dimensional (3D) stereo images of a human skull in an undergraduate human anatomy online laboratory were gathered and analysed. Mental model theory and its applications to 3D relationships are discussed along with the research results. Quantitative results on 62 pairs…

  10. Design of interpolation functions for subpixel-accuracy stereo-vision systems.

    PubMed

    Haller, Istvan; Nedevschi, Sergiu

    2012-02-01

    Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of the stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first proposal uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second proposal employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as well as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions. © 2011 IEEE
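    The traditional block-matching solution the paper starts from is the parabola fit over three matching costs around the integer-disparity minimum; a minimal sketch:

    ```python
    def parabolic_subpixel(c_left, c_min, c_right):
        """Classic parabola-fit subpixel refinement: given matching costs
        at disparities d-1, d, d+1 (with c_min the smallest), return the
        fractional offset from d, in (-0.5, 0.5)."""
        denom = c_left - 2.0 * c_min + c_right
        if denom == 0:
            return 0.0
        return 0.5 * (c_left - c_right) / denom
    ```

    The paper's point is that this fixed parabola shape is only optimal for some cost functions; their methodologies derive the interpolation function from the data a given stereo algorithm actually produces.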

  11. MRO CTX-based Digital Terrain Models

    NASA Astrophysics Data System (ADS)

    Dumke, Alexander

    2016-04-01

    In planetary surface sciences, digital terrain models (DTMs) are paramount when it comes to understanding and quantifying processes. In this contribution, an approach for the derivation of digital terrain models from stereo images of the NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) is described. CTX consists of a 350 mm focal length telescope and 5000 CCD sensor elements and is operated as a pushbroom camera. It acquires images with ~6 m/px over a swath width of ~30 km of the Mars surface [1]. Today, several approaches for the derivation of CTX DTMs exist [e.g. 2, 3, 4]. The approach discussed here is based on established software combined with proprietary software, as described below. The main processing task for the derivation of CTX stereo DTMs consists of six steps: (1) First, CTX images are radiometrically corrected using the ISIS software package [5]. (2) For selected CTX stereo images, exterior orientation data are extracted from reconstructed NAIF SPICE data [6]. (3) In the next step, High Resolution Stereo Camera (HRSC) DTMs [7, 8, 9] are used for the rectification of the CTX stereo images to reduce the search area during image matching; HRSC DTMs are used because of their higher spatial resolution compared to MOLA DTMs. (4) The determination of the coordinates of homologous points between the stereo images, i.e. the stereo image matching process, consists of two steps: first, a cross-correlation to obtain approximate values, and secondly, their use in a least-squares matching (LSM) process to obtain subpixel positions. (5) The stereo matching results are then used to generate object points from forward ray intersections. (6) As a last step, the DTM raster generation is performed using software developed at the German Aerospace Center, Berlin, whereby only object points whose error is below a threshold value are used. References: [1] Malin, M. C. et al., 2007, JGR 112, doi:10.1029/2006JE002808 [2] Broxton, M. J. et al., 2008, LPSC XXXIX, Abstract #2419 [3] Yershov, V. et al., 2015, EPSC 10, EPSC2015-343 [4] Kim, J. R. et al., 2013, EPS 65, 799-809 [5] https://isis.astrogeology.usgs.gov/index.html [6] http://naif.jpl.nasa.gov/naif/index.html [7] Gwinner et al., 2010, EPS 294, 543-540 [8] Gwinner et al., 2015, PSS [9] Dumke, A. et al., 2008, ISPRS, 37, Part B4, 1037-1042
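    The forward ray intersection of step (5) can be sketched as the midpoint of the shortest segment between two skew viewing rays, a standard construction, not the authors' specific code:

    ```python
    def ray_midpoint(p1, d1, p2, d2):
        """Object point from two viewing rays p_i + t * d_i: the midpoint
        of the shortest segment between them. p1, p2: ray origins;
        d1, d2: direction 3-vectors (assumed not parallel)."""
        def dot(u, v):
            return sum(a * b for a, b in zip(u, v))
        w0 = [a - b for a, b in zip(p1, p2)]
        a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
        d, e = dot(d1, w0), dot(d2, w0)
        denom = a * c - b * b
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        q1 = [p + s * v for p, v in zip(p1, d1)]
        q2 = [p + t * v for p, v in zip(p2, d2)]
        return tuple((u + v) / 2.0 for u, v in zip(q1, q2))
    ```

    The distance between the two closest points (here averaged away) is the intersection residual one would test against the error threshold of step (6).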

  12. Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review

    PubMed Central

    Ali, Asraf; Sundaraj, Kenneth; Ahmad, Badlishah; Ahamed, Nizam; Islam, Anamul

    2012-01-01

    Even though the number of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. Progress on this question has been hindered by the lack of accurate measurements of gait disorders. Thus, this article reviews rehabilitation systems for gait disorders that use vision and non-vision sensor technologies, as well as combinations of these. All papers published in the English language between 1990 and June 2012 that had the phrases “gait disorder”, “rehabilitation”, “vision sensor”, or “non vision sensor” in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Some synonyms of these phrases and the logical words “and”, “or”, and “not” were also used in the article searching procedure. Out of the 91 published articles found, this review identified 84 articles that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using markerless vision-based sensor technology. We therefore believe that the information contained in this review will assist the progress of the development of rehabilitation systems for human gait disorders. PMID:22938548

  13. Correlating magnetoencephalography to stereo-electroencephalography in patients undergoing epilepsy surgery

    PubMed Central

    Murakami, Hiroatsu; Wang, Zhong I.; Marashly, Ahmad; Krishnan, Balu; Prayson, Richard A.; Kakisaka, Yosuke; Mosher, John C.; Bulacio, Juan; Gonzalez-Martinez, Jorge A.; Bingaman, William E.; Najm, Imad M.; Burgess, Richard C.; Alexopoulos, Andreas V.

    2016-01-01

    See Bear and Kirsch (doi:10.1093/aww248) for a scientific commentary on this article. Magnetoencephalography and stereo-electroencephalography are often necessary in the course of the non-invasive and invasive presurgical evaluation of challenging patients with medically intractable focal epilepsies. In this study, we aim to examine the significance of magnetoencephalography dipole clusters and their relationship to stereo-electroencephalography findings, area of surgical resection, and seizure outcome. We also aim to define the positive and negative predictors based on magnetoencephalography dipole cluster characteristics pertaining to seizure-freedom. Included in this retrospective study were a consecutive series of 50 patients who underwent magnetoencephalography and stereo-electroencephalography at the Cleveland Clinic Epilepsy Center. Interictal magnetoencephalography localization was performed using a single equivalent current dipole model. Magnetoencephalography dipole clusters were classified based on tightness and orientation criteria. Magnetoencephalography dipole clusters, stereo-electroencephalography findings and area of resection were reconstructed and examined in the same space using the patient’s own magnetic resonance imaging scan. Seizure outcomes at 1 year postoperative were dichotomized into seizure-free or not seizure-free. We found that patients in whom the magnetoencephalography clusters were completely resected had a much higher chance of seizure-freedom compared to the partial and no resection groups (P = 0.007). Furthermore, patients had a significantly higher chance of being seizure-free when stereo-electroencephalography completely sampled the area identified by magnetoencephalography as compared to those with incomplete or no sampling of magnetoencephalography results (P = 0.012). 
Partial concordance between magnetoencephalography and interictal or ictal stereo-electroencephalography was associated with a much lower chance of seizure freedom as compared to the concordant group (P = 0.0075). Patients with one single tight cluster on magnetoencephalography were more likely to become seizure-free compared to patients with a tight cluster plus scatter (P = 0.0049) or patients with loose clusters (P = 0.018). Patients whose magnetoencephalography clusters had a stable orientation perpendicular to the nearest major sulcus had a better chance of seizure-freedom as compared to other orientations (P = 0.042). Our data demonstrate that stereo-electroencephalography exploration and subsequent resection are more likely to succeed, when guided by positive magnetoencephalography findings. As a corollary, magnetoencephalography clusters should not be ignored when planning the stereo-electroencephalography strategy. Magnetoencephalography tight cluster and stable orientation are positive predictors for a good seizure outcome after resective surgery, whereas the presence of scattered sources diminishes the probability of favourable outcomes. The concordance pattern between magnetoencephalography and stereo-electroencephalography is a strong argument in favour of incorporating localization with non-invasive tools into the process of presurgical evaluation before actual placement of electrodes. PMID:27567464

  14. Development of collision avoidance system for useful UAV applications using image sensors with laser transmitter

    NASA Astrophysics Data System (ADS)

    Cheong, M. K.; Bahiki, M. R.; Azrad, S.

    2016-10-01

    The main goal of this study is to demonstrate an approach to achieving collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high-definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high-contrast tracking spot for depth calculation using common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and a control algorithm was designed to manipulate the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of an OptiTrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithm based on image sensors. In the results, the UAV was able to hover with fairly good accuracy in both static and dynamic short-range collision avoidance. Collision avoidance performance of the UAV was better with obstacles of dull surfaces in comparison to shiny surfaces. The minimum collision avoidance distance achievable was 0.4 m. The approach is suitable for short-range collision avoidance.
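    The common triangulation referred to reduces, for a rectified stereo pair, to depth = focal length × baseline / disparity. A minimal sketch with illustrative names:

    ```python
    def depth_from_disparity(f_px, baseline_m, x_left_px, x_right_px):
        """Depth of a tracked point (e.g. the projected laser spot) seen
        at column x_left_px in the left image and x_right_px in the right
        image of a rectified pair. f_px: focal length in pixels;
        baseline_m: camera separation in metres. Z = f * B / disparity."""
        disparity = x_left_px - x_right_px
        if disparity <= 0:
            raise ValueError("point must have positive disparity")
        return f_px * baseline_m / disparity
    ```

    Projecting a high-contrast laser spot makes the left/right correspondence unambiguous, which is exactly why the method degrades on shiny surfaces where the spot reflection smears.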

  15. Stereo transparency and the disparity gradient limit

    NASA Technical Reports Server (NTRS)

    McKee, Suzanne P.; Verghese, Preeti

    2002-01-01

    Several studies (Vision Research 15 (1975) 583; Perception 9 (1980) 671) have shown that binocular fusion is limited by the disparity gradient (disparity/distance) separating image points, rather than by their absolute disparity values. Points separated by a gradient >1 appear diplopic. These results are sometimes interpreted as a constraint on human stereo matching, rather than a constraint on fusion. Here we have used psychophysical measurements on stereo transparency to show that human stereo matching is not constrained by a gradient of 1. We created transparent surfaces composed of many pairs of dots, in which each member of a pair was assigned a disparity equal and opposite to the disparity of the other member. For example, each pair could be composed of one dot with a crossed disparity of 6' and the other with uncrossed disparity of 6', vertically separated by a parametrically varied distance. When the vertical separation between the paired dots was small, the disparity gradient for each pair was very steep. Nevertheless, these opponent-disparity dot pairs produced a striking appearance of two transparent surfaces for disparity gradients ranging between 0.5 and 3. The apparent depth separating the two transparent planes was correctly matched to an equivalent disparity defined by two opaque surfaces. A test target presented between the two transparent planes was easily detected, indicating robust segregation of the disparities associated with the paired dots into two transparent surfaces with few mismatches in the target plane. Our simulations using the Tsai-Victor model show that the response profiles produced by scaled disparity-energy mechanisms can account for many of our results on the transparency generated by steep gradients.
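    The disparity gradient for the opponent-disparity dot pairs described above can be computed directly: the pair (+6', -6') differs by 12' of disparity, so the gradient is 12'/separation. A small sketch, with crossed disparity taken as positive:

```python
def disparity_gradient(disp_a_arcmin, disp_b_arcmin, separation_arcmin):
    """Disparity gradient between two dots: |disparity difference| / separation.

    Crossed disparity is signed positive, uncrossed negative, so an opponent
    pair (+6', -6') differs by 12' of disparity.
    """
    return abs(disp_a_arcmin - disp_b_arcmin) / separation_arcmin

# A +6'/-6' pair vertically separated by 4' has gradient 3 -- the steepest
# value at which the study still reported two clean transparent planes.
g = disparity_gradient(+6.0, -6.0, 4.0)
print(g)  # 3.0
```

    Shrinking the vertical separation drives the gradient arbitrarily steep, which is exactly the manipulation used to show that matching survives beyond the fusion limit of 1.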

  16. Stereo-tomography in triangulated models

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Shao, Wei-Dong; Xing, Feng-yuan; Xiong, Kai

    2018-04-01

    Stereo-tomography is a distinctive tomographic method capable of estimating the scatterer position, the local dip of the scatterer, and the background velocity simultaneously. Building a geologically consistent velocity model is always appealing for applied and earthquake seismologists. Differing from previous work, which incorporated various regularization techniques into the cost function of stereo-tomography, we regard extending stereo-tomography to a triangulated model as the most straightforward way to achieve this goal. In this paper, we provide all the Fréchet derivatives of the stereo-tomographic data components with respect to the model components for a slowness-squared triangulated model (or sloth model) in 2D Cartesian coordinates, based on the ray perturbation theory for interfaces. A sloth model is a sparser representation than the conventional B-spline representation, and a sparser model representation leads to a smaller stereo-tomographic (Fréchet) matrix, a higher-accuracy solution when solving linear equations, a faster convergence rate, and a lower requirement on the quantity of data. Moreover, a quantitative representation of the interface strengthens the relationships among different model components, which makes cross regularizations among these components, such as node coordinates, scatterer coordinates, and scattering angles, more straightforward and easier to implement. Sensitivity analysis, model resolution matrix analysis, and a series of synthetic data examples demonstrate the correctness of the Fréchet derivatives, the applicability of the regularization terms, and the robustness of stereo-tomography in triangulated models, providing a solid theoretical foundation for future real-data applications.

  17. Sounds of space: listening to the Sun-Earth connection

    NASA Astrophysics Data System (ADS)

    Craig, N.; Mendez, B.; Luhmann, J.; Sircar, I.

    2003-04-01

    NASA's STEREO/IMPACT Mission includes an Education and Public Outreach component that seeks to offer national programs for broad audiences highlighting the mission's solar and geo-space research. In an effort to make observations of the Sun more accessible and exciting for a general audience, we look for alternative ways to represent the data. Scientists most often represent data visually in images, graphs, and movies. However, any data can also be represented as sound audible to the human ear, a process known as sonification. We will present our plans for an exciting prototype program that converts solar energetic particle data to sound. We plan to make sounds, imagery, and data available to the public through the World Wide Web, where visitors may create their own sonifications, and to integrate this effort into a science museum kiosk format. The kiosk station would include information on the STEREO mission and monitors showing images of the Sun from each of STEREO's two satellites. Our goal is to incorporate 3D goggles and a headset into the kiosk, allowing visitors to see the current or archived images in 3D and hear stereo sounds resulting from sonification of the corresponding data. Ultimately, we hope to collaborate with composers and create musical works inspired by these sounds and related solar images.

  18. Characterization of ASTER GDEM Elevation Data over Vegetated Area Compared with Lidar Data

    NASA Technical Reports Server (NTRS)

    Ni, Wenjian; Sun, Guoqing; Ranson, Kenneth J.

    2013-01-01

    Current research based on aerial or spaceborne stereo images with very high resolutions (less than 1 meter) has demonstrated that it is possible to derive vegetation height from stereo images. The second version of the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is a state-of-the-art global elevation dataset derived from stereo images. However, the resolution of ASTER stereo images (15 meters) is much coarser than that of aerial stereo images, and the ASTER GDEM is a product compiled from stereo images acquired over 10 years, a span in which forest disturbance as well as forest growth is inevitable. In this study, the characteristics of the ASTER GDEM over vegetated areas under both flat and mountainous conditions were investigated through comparisons with lidar data. The factors considered as possibly affecting the extraction of vegetation canopy height include (1) co-registration of digital elevation models (DEMs); (2) spatial resolution of the DEMs; (3) spatial vegetation structure; and (4) terrain slope. The results show that accurate co-registration between the ASTER GDEM and the National Elevation Dataset (NED) is necessary over mountainous areas. The correlation between ASTER GDEM minus NED and vegetation canopy height improves from 0.328 to 0.43 when resolutions are degraded from 1 arc-second to 5 arc-seconds, and further to 0.6 if only homogeneous vegetated areas are considered.
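    The kind of analysis described (differencing a stereo DEM against a bare-earth DEM, degrading resolution by block averaging, and correlating the residual with lidar canopy height) can be reproduced on synthetic data. All array sizes, noise levels, and the coherent-canopy construction below are illustrative assumptions, not the study's data.

```python
import numpy as np

def degrade(dem, factor):
    """Block-average a square DEM grid by an integer factor (e.g. 1" -> 5")."""
    h, w = dem.shape
    return dem[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def height_correlation(gdem, ned, canopy, factor):
    """Correlate (GDEM - NED) with lidar canopy height at a coarser grid."""
    diff = degrade(gdem - ned, factor)
    can = degrade(canopy, factor)
    return np.corrcoef(diff.ravel(), can.ravel())[0, 1]

# Synthetic scene: spatially coherent canopy plus noisy surface elevation
rng = np.random.default_rng(0)
base = rng.uniform(0.0, 30.0, (20, 20))
canopy = np.kron(base, np.ones((5, 5)))               # canopy height, metres
gdem = 200.0 + canopy + rng.normal(0.0, 10.0, (100, 100))  # noisy DSM
ned = np.full((100, 100), 200.0)                      # bare-earth reference

print(height_correlation(gdem, ned, canopy, 1))   # noisy at full resolution
print(height_correlation(gdem, ned, canopy, 5))   # higher after degrading
```

    Block averaging suppresses uncorrelated elevation noise while preserving the coherent canopy signal, which is the mechanism behind the correlation improvement the study reports.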

  19. Modeling of Depth Cue Integration in Manual Control Tasks

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.; Kaiser, Mary K.; Davis, Wendy

    2003-01-01

    Psychophysical research has demonstrated that human observers utilize a variety of visual cues to form a perception of three-dimensional depth. However, most of these studies have utilized a passive judgement paradigm and have failed to consider depth-cue integration as a dynamic and task-specific process. In the current study, we developed and experimentally validated a model of manual control of depth that examines how two potential cues (stereo disparity and relative size) are utilized in both first- and second-order active depth control tasks. We found that stereo disparity plays the dominant role in determining depth position, while relative size dominates perception of depth velocity. Stereo disparity also plays a reduced role when made less salient (i.e., when viewing distance is increased). Manual control models predict that position information is sufficient for first-order control tasks, while velocity information is required to perform a second-order control task. Thus, the rules for depth-cue integration in active control tasks depend on both task demands and cue quality.

  20. Determination of Cloud Base Height, Wind Velocity, and Short-Range Cloud Structure Using Multiple Sky Imagers Field Campaign Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Schwartz, Stephen E.; Yu, Dantong

    Clouds are a central focus of the U.S. Department of Energy (DOE)’s Atmospheric System Research (ASR) program and Atmospheric Radiation Measurement (ARM) Climate Research Facility, and more broadly are the subject of much investigation because of their important effects on atmospheric radiation and, through feedbacks, on climate sensitivity. Significant progress has been made by moving from a vertically pointing (“soda-straw”) to a three-dimensional (3D) view of clouds by investing in scanning cloud radars through the American Recovery and Reinvestment Act of 2009. Yet, because of the physical nature of radars, there are key gaps in ARM's cloud observational capabilities. For example, cloud radars often fail to detect small shallow cumulus and thin cirrus clouds that are nonetheless radiatively important. Furthermore, it takes five to twenty minutes for a cloud radar to complete a 3D volume scan, and clouds can evolve substantially during this period. Ground-based stereo-imaging is a promising technique to complement existing ARM cloud observation capabilities. It enables the estimation of cloud coverage, height, horizontal motion, morphology, and spatial arrangement over an extended area of up to 30 by 30 km at refresh rates greater than 1 Hz (Peng et al. 2015). With the fine spatial and temporal resolution of modern sky cameras, the stereo-imaging technique allows for the tracking of a small cumulus cloud or a thin cirrus cloud that cannot be detected by a cloud radar. With support from the DOE SunShot Initiative, the Principal Investigator (PI)’s team at Brookhaven National Laboratory (BNL) has developed some initial capability for cloud tracking using multiple distinctly located hemispheric cameras (Peng et al. 2015). To validate the ground-based cloud stereo-imaging technique, the cloud stereo-imaging field campaign was conducted at the ARM Facility’s Southern Great Plains (SGP) site in Oklahoma from July 15 to December 24.
As shown in Figure 1, the cloud stereo-imaging system consisted of two inexpensive high-definition (HD) hemispheric cameras (each costing less than $1,500) and ARM’s Total Sky Imager (TSI). Together with other co-located ARM instrumentation, the campaign provides a promising opportunity to validate stereo-imaging-based cloud base height and, more importantly, to examine the feasibility of cloud thickness retrieval for low-view-angle clouds.

  1. Lip boundary detection techniques using color and depth information

    NASA Astrophysics Data System (ADS)

    Kim, Gwang-Myung; Yoon, Sung H.; Kim, Jung H.; Hur, Gi Taek

    2002-01-01

    This paper presents our approach to using a stereo camera to obtain 3-D image data for improving existing lip boundary detection techniques. We show that depth information as provided by our approach can be used to significantly improve boundary detection systems. Our system detects the face and mouth area in the image by using color, geometric location, and additional depth information for the face. Initially, color and depth information can be used to localize the face. Then we can determine the lip region from the intensity information and the detected eye locations. The system has successfully been used to extract approximate lip regions using RGB color information of the mouth area. Merely using color information is not robust because the quality of the results may vary depending on lighting conditions, background, and the subject's skin tone. To overcome this problem, we used a stereo camera to obtain 3-D facial images. 3-D data constructed from the depth information, along with color information, can provide more accurate lip boundary detection results compared to color-only techniques.

  2. Detecting personnel around UGVs using stereo vision

    NASA Astrophysics Data System (ADS)

    Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.

    2008-04-01

    Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.

  3. A long baseline global stereo matching based upon short baseline estimation

    NASA Astrophysics Data System (ADS)

    Li, Jing; Zhao, Hong; Li, Zigang; Gu, Feifei; Zhao, Zixin; Ma, Yueyang; Fang, Meiqi

    2018-05-01

    In global stereo vision, matching efficiency and accuracy are difficult to balance because they pull in opposite directions, and with a long baseline this trade-off becomes more pronounced. To address it, this paper proposes a novel idea that improves both the efficiency and the accuracy of global stereo matching for a long baseline. Reference images located between the long-baseline image pair are first chosen to form new image pairs with short baselines. The relationship between the disparities of pixels in image pairs with different baselines is derived, taking the quantization error into account, so that the disparity search range under the long baseline can be reduced under the guidance of the short baseline to gain matching efficiency. This idea is then integrated into graph cuts (GCs) to form a multi-step GC algorithm based on short-baseline estimation, by which the disparity map under the long baseline can be calculated iteratively on the basis of the previous matching. Furthermore, image information from pixels that are non-occluded under the short baseline but occluded under the long baseline can be employed to improve matching accuracy. Although the time complexity of the proposed method depends on the locations of the chosen reference images, it is usually much lower for long-baseline stereo matching than that of the traditional GC algorithm. Finally, the validity of the proposed method is examined in experiments on benchmark datasets. The results show that the proposed method is superior to the traditional GC method in both efficiency and accuracy, making it suitable for long-baseline stereo matching.
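    The core baseline-scaling idea can be sketched for rectified parallel cameras, where disparity is proportional to baseline. This is a simplified illustration under that assumption; the paper's actual derivation treats the quantization error more carefully.

```python
def long_baseline_search_range(d_short, b_short, b_long, quant_err_px=1.0):
    """Predict the long-baseline disparity from a short-baseline match.

    For rectified parallel cameras, disparity scales linearly with baseline:
    d_long = d_short * (B_long / B_short). A short-baseline disparity known
    to +/- quant_err_px pixels therefore bounds the long-baseline search
    window to +/- quant_err_px * (B_long / B_short) pixels around the
    prediction, instead of searching the full disparity range.
    """
    scale = b_long / b_short
    d_pred = d_short * scale
    half_window = quant_err_px * scale
    return d_pred - half_window, d_pred + half_window

# Hypothetical rig: 5 cm short baseline, 20 cm long baseline, 12 px match
lo, hi = long_baseline_search_range(d_short=12.0, b_short=0.05, b_long=0.20)
print(lo, hi)  # 44.0 52.0 -- a 9-candidate window instead of hundreds
```

    The shrunken window is what lets the multi-step GC pass over the long-baseline pair converge faster than running GC on it directly.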

  4. Biomechanical analysis of three tennis serve types using a markerless system.

    PubMed

    Abrams, Geoffrey D; Harris, Alex H S; Andriacchi, Thomas P; Safran, Marc R

    2014-02-01

    The tennis serve is commonly associated with musculoskeletal injury. Advanced players are able to hit multiple serve types with different types of spin, yet no investigation has characterised the kinematics of all three serve types for the upper extremity and back. Seven NCAA Division I male tennis players performed three successful flat, kick and slice serves. Serves were recorded using an eight-camera markerless motion capture system. Laser scanning was utilised to accurately collect body dimensions, and data were computed using inverse kinematic methods. There was no significant difference in maximum back extension angle between the flat, kick and slice serves. The kick serve had a higher force magnitude at the back than the flat and slice serves, as well as larger posteriorly directed shoulder forces. The flat serve had significantly greater maximum shoulder internal rotation velocity than the slice serve. Force and torque magnitudes at the elbow and wrist were not significantly different between the serves. The kick serve places higher physical demands on the back and shoulder, while the slice serve demonstrated lower overall kinetic forces. This information may have injury prevention and rehabilitation implications.

  5. A natural user interface to integrate citizen science and physical exercise

    PubMed Central

    Palermo, Eduardo; Laut, Jeffrey; Nov, Oded; Porfiri, Maurizio

    2017-01-01

    Citizen science enables volunteers to contribute to scientific projects, where massive data collection and analysis are often required. Volunteers participate in citizen science activities online from their homes or in the field and are motivated by both intrinsic and extrinsic factors. Here, we investigated the possibility of integrating citizen science tasks within physical exercises envisaged as part of a potential rehabilitation therapy session. The citizen science activity entailed environmental mapping of a polluted body of water using a miniature instrumented boat, which was remotely controlled by the participants through their physical gesture tracked by a low-cost markerless motion capture system. Our findings demonstrate that the natural user interface offers an engaging and effective means for performing environmental monitoring tasks. At the same time, the citizen science activity increases the commitment of the participants, leading to a better motion performance, quantified through an array of objective indices. The study constitutes a first and necessary step toward rehabilitative treatments of the upper limb through citizen science and low-cost markerless optical systems. PMID:28231261

  6. Multiview photometric stereo.

    PubMed

    Hernández Esteban, Carlos; Vogiatzis, George; Cipolla, Roberto

    2008-03-01

    This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialise a multi-view photometric stereo scheme to obtain a closed surface reconstruction. There are two main contributions in this paper: firstly, we describe a robust technique to estimate light directions and intensities; secondly, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and hence allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. A quantitative evaluation of the algorithm on synthetic data is presented, together with complete reconstructions of challenging real objects. Finally, we show experimentally how even in the case of highly textured objects this technique can greatly improve on correspondence-based multi-view stereo results.
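    The per-pixel recovery underlying any Lambertian photometric stereo scheme is a small least-squares solve: given k images under known lights, I = L·(albedo·n). This is the textbook formulation, not the paper's specific robust estimator or its multi-view extension.

```python
import numpy as np

def photometric_normals(intensities, light_dirs):
    """Per-pixel Lambertian photometric stereo.

    intensities: (k, n_pix) image stack; light_dirs: (k, 3) unit light
    vectors. Solves I = L @ g per pixel in the least-squares sense, where
    g = albedo * n, then splits g into a unit normal and an albedo.
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, n_pix)
    albedo = np.linalg.norm(g, axis=0)
    normals = (g / np.maximum(albedo, 1e-12)).T                   # (n_pix, 3)
    return normals, albedo

# Synthetic check: one pixel with known normal and albedo under three lights
L = np.array([[0.0, 0.0, 1.0],
              [0.7071, 0.0, 0.7071],
              [0.0, 0.7071, 0.7071]])
n_true = np.array([0.0, 0.0, 1.0])
I = 0.8 * (L @ n_true)[:, None]        # render the pixel (albedo = 0.8)
normals, albedo = photometric_normals(I, L)
```

    With three or more non-coplanar lights the system is determined; the robustness contribution of the paper lies in estimating L itself from the visual hull rather than assuming it known, as done here.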

  7. Three-dimensional surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, Bugao; Yu, Wurong; Yao, Ming; Pepper, M. Reese; Freeland-Graves, Jeanne H.

    2009-10-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable, and economical tool for assessment of this condition. Three-dimensional (3-D) body surface imaging has emerged as an exciting technology for the estimation of body composition. We present a new 3-D body imaging system, which is designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology is used to satisfy the requirement for a simple hardware setup and fast image acquisition. The portability of the system is created via a two-stand configuration, and the accuracy of body volume measurements is improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3-D body imaging. Body measurement functions dedicated to body composition assessment also are developed. The overall performance of the system is evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  8. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  9. Indoor calibration for stereoscopic camera STC: a new method

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2017-11-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for validating the 3D reconstruction of the planetary surface from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of Mercury's surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo couples: for this, a stereo validation setup providing an indoor reproduction of the instrument's flight observing conditions gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a single sensor. Its optical model is based on a brand-new concept that minimizes mass and volume and allows push-frame imaging. This model made it necessary to define a new calibration pipeline to test the reconstruction method in a controlled environment. An ad-hoc indoor set-up has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with sources/targets essentially placed at infinity. This auxiliary indoor setup permits, on the one hand, rescaling the stereo reconstruction problem from the in-flight operating distance of 400 km to almost 1 m in the lab; on the other hand, it allows different viewing angles to be replicated for the considered targets. 
Neglecting Mercury's curvature for simplicity, the STC observing geometry of the same portion of the planet's surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir. The indoor simulation of the SC trajectory can therefore be provided by two rotation stages, generating a dual of the real system with the same stereo parameters but a different scale. The set of acquired images will be used to obtain a 3D reconstruction of the target: the depth information retrieved from stereo reconstruction, together with the known features of the target, will allow an evaluation of the stereo system's performance in terms of both horizontal resolution and vertical accuracy. To verify the 3D reconstruction capabilities of STC by means of this stereo validation set-up, the lab target surface should provide a reference, i.e. it should be known with an accuracy better than that required of the 3D reconstruction itself. For this reason, the rock samples carefully selected as lab targets have been measured with a suitably accurate 3D laser scanner. The paper presents this method in detail, analyzing all the choices adopted to reduce such a complex system to an indoor calibration solution.

  10. Indoor Calibration for Stereoscopic Camera STC, A New Method

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2014-10-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for validating the 3D reconstruction of the planetary surface from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of Mercury's surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo couples: for this, a stereo validation setup providing an indoor reproduction of the instrument's flight observing conditions gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a single sensor. Its optical model is based on a brand-new concept that minimizes mass and volume and allows push-frame imaging. This model made it necessary to define a new calibration pipeline to test the reconstruction method in a controlled environment. An ad-hoc indoor set-up has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with sources/targets essentially placed at infinity. This auxiliary indoor setup permits, on the one hand, rescaling the stereo reconstruction problem from the in-flight operating distance of 400 km to almost 1 m in the lab; on the other hand, it allows different viewing angles to be replicated for the considered targets. 
Neglecting Mercury's curvature for simplicity, the STC observing geometry of the same portion of the planet's surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir. The indoor simulation of the SC trajectory can therefore be provided by two rotation stages, generating a dual of the real system with the same stereo parameters but a different scale. The set of acquired images will be used to obtain a 3D reconstruction of the target: the depth information retrieved from stereo reconstruction, together with the known features of the target, will allow an evaluation of the stereo system's performance in terms of both horizontal resolution and vertical accuracy. To verify the 3D reconstruction capabilities of STC by means of this stereo validation set-up, the lab target surface should provide a reference, i.e. it should be known with an accuracy better than that required of the 3D reconstruction itself. For this reason, the rock samples carefully selected as lab targets have been measured with a suitably accurate 3D laser scanner. The paper presents this method in detail, analyzing all the choices adopted to reduce such a complex system to an indoor calibration solution.

  11. Automatic Generation of High Quality DSM Based on IRS-P5 Cartosat-1 Stereo Data

    NASA Astrophysics Data System (ADS)

    d'Angelo, Pablo; Uttenthaler, Andreas; Carl, Sebastian; Barner, Frithjof; Reinartz, Peter

    2010-12-01

    IRS-P5 Cartosat-1 high resolution stereo satellite imagery is well suited for the creation of digital surface models (DSMs). A system for highly automated and operational DSM and orthoimage generation based on IRS-P5 Cartosat-1 imagery is presented, with an emphasis on automated processing and product quality. The proposed system processes IRS-P5 level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The described method uses an RPC correction based on DSM alignment instead of reference images with lower lateral accuracy, which results in improved geolocation of the DSMs and orthoimages. Following RPC correction, highly detailed DSMs with 5 m grid spacing are derived using Semiglobal Matching. The proposed method is part of an operational Cartosat-1 processor for the generation of a high resolution DSM. Evaluation of 18 scenes against independent ground truth measurements indicates a mean lateral error (CE90) of 6.7 meters and a mean vertical accuracy (LE90) of 5.1 meters.
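    CE90 and LE90 are percentile statistics of check-point residuals: the 90th percentile of horizontal radial error and of absolute vertical error, respectively. A minimal sketch of how such figures are computed from ground-truth comparisons (the residual arrays below are illustrative):

```python
import numpy as np

def ce90_le90(dx, dy, dz):
    """CE90: 90th percentile of horizontal radial error sqrt(dx^2 + dy^2);
    LE90: 90th percentile of absolute vertical error |dz|."""
    ce90 = np.percentile(np.hypot(dx, dy), 90)
    le90 = np.percentile(np.abs(dz), 90)
    return ce90, le90

# Hypothetical residuals at 10 check points (metres)
dx = np.zeros(10)
dy = np.arange(1.0, 11.0)
dz = np.arange(1.0, 11.0)
print(ce90_le90(dx, dy, dz))
```

    Reporting the 90th percentile rather than the mean makes the figure robust to the shape of the error distribution, which is why CE90/LE90 are the conventional DSM accuracy metrics.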

  12. Error analysis in a stereo vision-based pedestrian detection sensor for collision avoidance applications.

    PubMed

    Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine (SVM) based classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real-time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for use in automotive applications such as autonomous pedestrian collision avoidance.
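    The stereo quantization error analyzed in this class of study follows, to first order, from differentiating the triangulation relation Z = f·B/d: a one-pixel disparity step maps to a depth step that grows quadratically with range. A sketch with illustrative (not the paper's) parameters:

```python
def depth_quantization_error(z_m, focal_px, baseline_m, disp_step_px=1.0):
    """First-order depth uncertainty from disparity quantization.

    From Z = f * B / d, a disparity step of delta_d pixels maps to
    delta_Z ~= Z**2 * delta_d / (f * B), so the error grows quadratically
    with range and shrinks with focal length and baseline.
    """
    return (z_m ** 2) * disp_step_px / (focal_px * baseline_m)

# Hypothetical setup: f = 800 px, 30 cm baseline
for z in (5.0, 10.0, 20.0):
    print(z, depth_quantization_error(z, 800.0, 0.30))
# the error quadruples each time the range doubles
```

    This quadratic growth is what fixes the maximum usable detection range for a given baseline, and hence drives the sensor-setup trade-off the paper quantifies.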

  13. Robust stereo matching with trinary cross color census and triple image-based refinements

    NASA Astrophysics Data System (ADS)

    Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr

    2017-12-01

    For future 3D TV broadcasting systems and navigation applications, it is necessary to have accurate stereo matching which can precisely estimate the depth map from two distanced cameras. In this paper, we first suggest a trinary cross color (TCC) census transform, which can help to achieve an accurate raw disparity matching cost with low computational cost. The two-pass cost aggregation (TPCA) is formed to compute the aggregation cost; the disparity map can then be obtained by a range winner-take-all (RWTA) process and a white hole filling procedure. To further enhance the accuracy performance, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Then, the image-based refinements for the mismatched and occluded pixels are proposed to refine the classified errors. Finally, the image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps with reasonable computation cost.
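
    The paper's trinary cross color census is a variant of the classic binary census transform, in which each pixel is encoded by comparing its neighbours against the centre and the raw matching cost is the Hamming distance between codes. A minimal sketch of the plain binary version (not the TCC variant) might look like:

```python
import numpy as np

def census_3x3(img):
    """Classic binary census: encode each pixel by comparing its 8
    neighbours with the centre (bit = 1 if neighbour < centre).
    Borders wrap via np.roll; this is for illustration only."""
    h, w = img.shape
    code = np.zeros((h, w), dtype=np.uint16)
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            code |= (shifted < img).astype(np.uint16) << bit
            bit += 1
    return code

def hamming_cost(code_l, code_r):
    """Raw matching cost = number of differing census bits."""
    x = (code_l ^ code_r).astype(np.uint16)
    bits = np.unpackbits(x.view(np.uint8), axis=None)
    return bits.reshape(x.size, -1).sum(axis=1).reshape(x.shape)

left = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.uint8)
cost = hamming_cost(census_3x3(left), census_3x3(left))  # identical images
```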

  14. Error Analysis in a Stereo Vision-Based Pedestrian Detection Sensor for Collision Avoidance Applications

    PubMed Central

    Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M.

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance. PMID:22319323

  15. Low-cost telepresence for collaborative virtual environments.

    PubMed

    Rhee, Seon-Min; Ziegler, Remo; Park, Jiyoung; Naef, Martin; Gross, Markus; Kim, Myoung-Hee

    2007-01-01

    We present a novel low-cost method for visual communication and telepresence in a CAVE-like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. The system was designed to extend existing projection systems with acquisition capabilities requiring minimal hardware modifications and cost. The system uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background. The system consists of two color cameras and two additional b/w cameras used for segmentation in the near-IR spectrum. There is no need for special optics as the mask and color image are merged using image-warping based on a depth estimation. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.

  16. Stereo tests as a screening tool for strabismus: which is the best choice?

    PubMed Central

    Ancona, Chiara; Stoppani, Monica; Odazio, Veronica; La Spina, Carlo; Corradetti, Giulia; Bandello, Francesco

    2014-01-01

    Purpose To compare four stereo tests (Lang I, Lang II, Titmus, and TNO) and assess their effectiveness. The main focus of this study is to identify the most useful stereo test for the screening of strabismus. Patients and methods A total of 143 Caucasian subjects, 74 males (52%) and 69 females (48%), aged between 4 years and 78 years (mean age 19.09±15.12 years) were examined at our Strabismus Service (Scientific Institute San Raffaele Hospital, Milan, Italy) and included in this observational cross-sectional study. Subjects recruited in this study were either affected by strabismus, including microstrabismic patients, or healthy volunteers. Subjects affected by ophthalmological diseases, other than strabismus, were excluded. All patients underwent both ophthalmological and orthoptic examination, including stereo tests, Hirschberg Corneal Light Reflex Test, Worth Four-Dot Test, the 4 Prism Diopter Base-Out Test, Cover Testing, Bruckner Test, visual acuity, automated refraction under 1% tropicamide cycloplegia and thereafter, posterior pole evaluation. Results All data were processed using the IBM SPSS Statistics, Version 2.0, to perform all statistical calculations. The main finding of this study is that the Lang I stereo test achieved the highest sensitivity (89.8%) and specificity (95.2%) in detecting strabismus, including microstrabismus, compared to all the other stereoacuity tests. Furthermore, Lang I is the stereo test with the highest positive predictive value and negative predictive value, both greater than 90%. Conclusion The stereo test with the highest sensitivity, specificity, positive predictive value, and negative predictive value is Lang I. These results suggest its applicability as a screening test for strabismus in people older than 4 years. PMID:25419114
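
    The sensitivity, specificity, and predictive values reported here are the standard screening-test metrics derived from a 2x2 confusion table. A minimal sketch with illustrative counts (not the study's raw data):

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts chosen for illustration only
sens, spec, ppv, npv = screening_metrics(tp=44, fp=3, fn=5, tn=60)
```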

  17. Effects of illumination differences on photometric stereo shape-and-albedo-from-shading for precision lunar surface reconstruction

    NASA Astrophysics Data System (ADS)

    Chung Liu, Wai; Wu, Bo; Wöhler, Christian

    2018-02-01

    Photoclinometric surface reconstruction techniques such as Shape-from-Shading (SfS) and Shape-and-Albedo-from-Shading (SAfS) retrieve topographic information of a surface on the basis of the reflectance information embedded in the image intensity of each pixel. SfS or SAfS techniques have been utilized to generate pixel-resolution digital elevation models (DEMs) of the Moon and other planetary bodies. Photometric stereo SAfS analyzes images under multiple illumination conditions to improve the robustness of reconstruction. In this case, the directional difference in illumination between the images is likely to affect the quality of the reconstruction result. In this study, we quantitatively investigate the effects of illumination differences on photometric stereo SAfS. Firstly, an algorithm for photometric stereo SAfS is developed, and then, an error model is derived to analyze the relationships between the azimuthal and zenith angles of illumination of the images and the reconstruction qualities. The developed algorithm and error model were verified with high-resolution images collected by the Narrow Angle Camera (NAC) of the Lunar Reconnaissance Orbiter Camera (LROC). Experimental analyses reveal that (1) the resulting error in photometric stereo SAfS depends on both the azimuthal and the zenith angles of illumination as well as the general intensity of the images and (2) the predictions from the proposed error model are consistent with the actual slope errors obtained by photometric stereo SAfS using the LROC NAC images. The proposed error model enriches the theory of photometric stereo SAfS and is of significance for optimized lunar surface reconstruction based on SAfS techniques.
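
    The core of photometric stereo SAfS is that, under a Lambertian reflectance model, a pixel's intensities under k known illumination directions form a linear system for the albedo-scaled surface normal. A minimal single-pixel sketch (the paper's actual algorithm and error model are more involved):

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Lambertian SAfS at one pixel: I_k = albedo * dot(l_k, n).
    Stacking k >= 3 lights gives L g = I with g = albedo * n,
    solved in least squares; albedo = |g|, n = g / |g|."""
    L = np.asarray(light_dirs, dtype=float)   # (k, 3) unit light vectors
    I = np.asarray(intensities, dtype=float)  # (k,) intensity stack
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(g)
    normal = g / albedo
    return albedo, normal

# Synthetic check: flat surface (n = +z), albedo 0.8, three tilted lights
n_true = np.array([0.0, 0.0, 1.0])
L = np.array([[0.2, 0.0, 0.98], [0.0, 0.2, 0.98], [-0.2, -0.2, 0.96]])
L /= np.linalg.norm(L, axis=1, keepdims=True)
I = 0.8 * L @ n_true
albedo, normal = photometric_stereo(I, L)
```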

  18. SAD-Based Stereo Matching Using FPGAs

    NASA Astrophysics Data System (ADS)

    Ambrosch, Kristian; Humenberger, Martin; Kubinger, Wilfried; Steininger, Andreas

    In this chapter we present a field-programmable gate array (FPGA) based stereo matching architecture. This architecture uses the sum of absolute differences (SAD) algorithm and is targeted at automotive and robotics applications. The disparity maps are calculated using 450×375 input images and a disparity range of up to 150 pixels. We discuss two different implementation approaches for the SAD and analyze their resource usage. Furthermore, block sizes ranging from 3×3 up to 11×11 and their impact on the consumed logic elements as well as on the disparity map quality are discussed. The stereo matching architecture enables a frame rate of up to 600 fps by calculating the data in a highly parallel and pipelined fashion. This way, a software solution optimized by using Intel's Open Source Computer Vision Library running on an Intel Pentium 4 with 3 GHz clock frequency is outperformed by a factor of 400.
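
    A software reference for the SAD matching computed by the FPGA architecture can be sketched in a few lines; the nested-loop version below is the naive winner-take-all formulation (the hardware evaluates the same costs in parallel). Image sizes and the disparity range here are illustrative, not the chapter's 450x375 / 150-pixel configuration.

```python
import numpy as np

def sad_disparity(left, right, block=3, max_disp=4):
    """Winner-take-all SAD block matching: for each pixel, pick the
    disparity whose block-wise sum of absolute differences is minimal."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    L = left.astype(np.int32)
    R = right.astype(np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            best, best_d = None, 0
            for d in range(min(max_disp, x - r) + 1):
                sad = np.abs(L[y-r:y+r+1, x-r:x+r+1] -
                             R[y-r:y+r+1, x-d-r:x-d+r+1]).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Right image is the left image shifted by 2 px: interior disparity is 2
left = np.tile(np.arange(16, dtype=np.uint8) * 10, (8, 1))
right = np.roll(left, -2, axis=1)
disp = sad_disparity(left, right)
```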

  19. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to achieve higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial lens distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can reach the requirement of robot binocular stereo vision.
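
    The distortion terms mentioned above correspond to the Brown-Conrady model used by OpenCV: radial coefficients k1, k2, k3 and decentering (tangential) coefficients p1, p2 applied in normalized image coordinates. A minimal sketch of the forward distortion mapping (not the calibration procedure itself):

```python
def distort(xy, k1, k2, p1, p2, k3=0.0):
    """Brown-Conrady distortion as used by OpenCV: radial terms k1..k3
    and decentering (tangential) terms p1, p2, in normalized coords."""
    x, y = xy
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# With all coefficients zero the mapping is the identity
xd0, yd0 = distort((0.3, -0.2), 0.0, 0.0, 0.0, 0.0)
# Barrel distortion (k1 < 0) pulls points toward the image centre
xd1, yd1 = distort((0.3, -0.2), -0.2, 0.0, 0.0, 0.0)
```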

  20. Jovian decametric radiation seen from Juno, Cassini, STEREO A, WIND, and Earth-based radio observatories

    NASA Astrophysics Data System (ADS)

    Imai, M.; Kurth, W. S.; Hospodarsky, G. B.; Bolton, S. J.; Connerney, J. E. P.; Levin, S. M.; Lecacheux, A.; Lamy, L.; Zarka, P.; Clarke, T. E.; Higgins, C. A.

    2017-09-01

    Jupiter's decametric (DAM) radiation is generated very close to the local gyrofrequency by the electron cyclotron maser instability (CMI). The first two-point common detections of Jovian DAM radiation were made using the Voyager spacecraft and ground-based radio observatories in early 1979, but, due to geometrical constraints and limited flyby duration, a full understanding of the latitudinal beaming of Jovian DAM radiation remains elusive. The stereoscopic DAM radiation viewed from Juno, Cassini, STEREO A, WIND, and Earth-based radio observatories provides a unique opportunity to analyze the CMI emission mechanism and beaming properties.
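
    Since CMI emission is generated very close to the local electron gyrofrequency, the observed DAM frequency directly probes the magnetic field strength at the source. A rough numerical illustration (the ~14 gauss field value is approximate):

```python
import math

def electron_gyrofrequency_hz(b_tesla):
    """f_ce = e*B / (2*pi*m_e): CMI radio emission is generated very
    close to this local frequency."""
    e = 1.602176634e-19      # elementary charge, C
    m_e = 9.1093837015e-31   # electron mass, kg
    return e * b_tesla / (2 * math.pi * m_e)

# Jupiter's roughly 14 gauss polar field caps DAM emission near 40 MHz
f = electron_gyrofrequency_hz(1.4e-3)   # 14 G = 1.4e-3 T
```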

  1. Vehicle-based vision sensors for intelligent highway systems

    NASA Astrophysics Data System (ADS)

    Masaki, Ichiro

    1989-09-01

    This paper describes a vision system, based on an ASIC (Application Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle in front using trinocular triangulation. An application-specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.

  2. A verification and errors analysis of the model for object positioning based on binocular stereo vision for airport surface surveillance

    NASA Astrophysics Data System (ADS)

    Wang, Huan-huan; Wang, Jian; Liu, Feng; Cao, Hai-juan; Wang, Xiang-jun

    2014-12-01

    A test environment is established to obtain experimental data for verifying the positioning model which was derived previously based on the pinhole imaging model and the theory of binocular stereo vision measurement. The model requires that the optical axes of the two cameras meet at one point, which is defined as the origin of the world coordinate system, thus simplifying and optimizing the positioning model. The experimental data are processed, and tables and charts are given comparing the positions of objects measured with DGPS (with a measurement accuracy of 10 centimeters) as the reference and those measured with the positioning model. Error sources of the visual measurement model are analyzed, and the effects of errors in the camera and system parameters on the accuracy of the positioning model are probed, based on the error transfer and synthesis rules. A conclusion is made that the measurement accuracy of surface surveillance based on binocular stereo vision is better than that of surface movement radars, ADS-B (Automatic Dependent Surveillance-Broadcast) and MLAT (Multilateration).

  3. A fuzzy structural matching scheme for space robotics vision

    NASA Technical Reports Server (NTRS)

    Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka

    1994-01-01

    In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of regions of images and effectively reduces the computational burden in the following low level matching process. Three dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.

  4. Changes in quantitative 3D shape features of the optic nerve head associated with age

    NASA Astrophysics Data System (ADS)

    Christopher, Mark; Tang, Li; Fingert, John H.; Scheetz, Todd E.; Abramoff, Michael D.

    2013-02-01

    Optic nerve head (ONH) structure is an important biological feature of the eye used by clinicians to diagnose and monitor progression of diseases such as glaucoma. ONH structure is commonly examined using stereo fundus imaging or optical coherence tomography. Stereo fundus imaging provides stereo views of the ONH that retain 3D information useful for characterizing structure. In order to quantify 3D ONH structure, we applied a stereo correspondence algorithm to a set of stereo fundus images. Using these quantitative 3D ONH structure measurements, eigen structures were derived using principal component analysis from stereo images of 565 subjects from the Ocular Hypertension Treatment Study (OHTS). To evaluate the usefulness of the eigen structures, we explored associations with the demographic variables age, gender, and race. Using regression analysis, the eigen structures were found to have significant (p < 0.05) associations with both age and race after Bonferroni correction. In addition, classifiers were constructed to predict the demographic variables based solely on the eigen structures. These classifiers achieved an area under receiver operating characteristic curve of 0.62 in predicting a binary age variable, 0.52 in predicting gender, and 0.67 in predicting race. The use of objective, quantitative features or eigen structures can reveal hidden relationships between ONH structure and demographics. The use of these features could similarly allow specific aspects of ONH structure to be isolated and associated with the diagnosis of glaucoma, disease progression and outcomes, and genetic factors.
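
    The eigen structures described here are principal components of the per-subject 3D structure measurements. A minimal PCA sketch via SVD, with random data standing in for the ONH depth measurements:

```python
import numpy as np

def pca(data, n_components):
    """Principal component analysis via SVD of the mean-centred data.
    Rows are subjects, columns are (flattened) structure measurements;
    the coefficients are the per-subject 'eigen structure' loadings."""
    mean = data.mean(axis=0)
    centred = data - mean
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    components = vt[:n_components]          # eigen structures
    coefficients = centred @ components.T   # per-subject loadings
    return mean, components, coefficients

rng = np.random.default_rng(1)
data = rng.normal(size=(565, 50))           # toy stand-in for ONH shape data
mean, comps, coeffs = pca(data, n_components=5)
```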

  5. The Effect of Shadow Area on the SGM Algorithm and Disparity Map Refinement from High Resolution Satellite Stereo Images

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.

    2017-09-01

    Semi Global Matching (SGM) is known as a high-performance and reliable stereo matching algorithm in the photogrammetry community. However, there are some challenges in using this algorithm, especially for high resolution satellite stereo images over urban areas and images with shadow areas. The SGM algorithm computes highly noisy disparity values in shadow areas around tall buildings due to mismatching in these low-entropy areas. In this paper, a new method is developed to refine the disparity map in shadow areas. The method is based on the integration of panchromatic and multispectral image data to detect shadow areas at the object level. In addition, RANSAC plane fitting and morphological filtering are employed to refine the disparity map. The results on a GeoEye-1 stereo pair captured over Qom city in Iran show a significant increase in the rate of matched pixels compared to the standard SGM algorithm.
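
    The RANSAC plane fitting step used to refine the disparity map can be sketched generically: repeatedly fit a plane to three random points and keep the hypothesis with the most inliers. This is an illustration, not the authors' implementation; the threshold and data are hypothetical.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.05, rng=None):
    """Fit a plane to 3D points by RANSAC: repeatedly fit to 3 random
    points and keep the model with the most inliers within threshold."""
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - p0) @ normal)  # point-to-plane distances
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# 200 points on the z = 0 plane plus 20 gross outliers well above it
rng = np.random.default_rng(2)
plane = np.column_stack([rng.uniform(-1, 1, (200, 2)), np.zeros(200)])
outliers = rng.uniform(-1, 1, (20, 3)) + np.array([0.0, 0.0, 2.0])
inliers = ransac_plane(np.vstack([plane, outliers]), rng=3)
```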

  6. The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.

    The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with 65° field-of-view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.

  7. A Mobile Outdoor Augmented Reality Method Combining Deep Learning Object Detection and Spatial Relationships for Geovisualization

    PubMed Central

    Rao, Jinmeng; Qiao, Yanjun; Ren, Fu; Wang, Junxing; Du, Qingyun

    2017-01-01

    The purpose of this study was to develop a robust, fast and markerless mobile augmented reality method for registration, geovisualization and interaction in uncontrolled outdoor environments. We propose a lightweight deep-learning-based object detection approach for mobile or embedded devices; the vision-based detection results of this approach are combined with spatial relationships by means of the host device’s built-in Global Positioning System receiver, Inertial Measurement Unit and magnetometer. Virtual objects generated based on geospatial information are precisely registered in the real world, and an interaction method based on touch gestures is implemented. The entire method is independent of the network to ensure robustness to poor signal conditions. A prototype system was developed and tested on the Wuhan University campus to evaluate the method and validate its results. The findings demonstrate that our method achieves a high detection accuracy, stable geovisualization results and interaction. PMID:28837096

  8. A Mobile Outdoor Augmented Reality Method Combining Deep Learning Object Detection and Spatial Relationships for Geovisualization.

    PubMed

    Rao, Jinmeng; Qiao, Yanjun; Ren, Fu; Wang, Junxing; Du, Qingyun

    2017-08-24

    The purpose of this study was to develop a robust, fast and markerless mobile augmented reality method for registration, geovisualization and interaction in uncontrolled outdoor environments. We propose a lightweight deep-learning-based object detection approach for mobile or embedded devices; the vision-based detection results of this approach are combined with spatial relationships by means of the host device's built-in Global Positioning System receiver, Inertial Measurement Unit and magnetometer. Virtual objects generated based on geospatial information are precisely registered in the real world, and an interaction method based on touch gestures is implemented. The entire method is independent of the network to ensure robustness to poor signal conditions. A prototype system was developed and tested on the Wuhan University campus to evaluate the method and validate its results. The findings demonstrate that our method achieves a high detection accuracy, stable geovisualization results and interaction.

  9. Passive perception system for day/night autonomous off-road navigation

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo L.; Bergh, Charles F.; Goldberg, Steven B.; Bellutta, Paolo; Huertas, Andres; Matthies, Larry H.

    2005-05-01

    Passive perception of terrain features is a vital requirement for military related unmanned autonomous vehicle operations, especially under electromagnetic signature management conditions. As a member of Team Raptor, the Jet Propulsion Laboratory developed a self-contained passive perception system under the DARPA funded PerceptOR program. An environmentally protected forward-looking sensor head was designed and fabricated in-house to straddle an off-the-shelf pan-tilt unit. The sensor head contained three color cameras for multi-baseline daytime stereo ranging, a pair of cooled mid-wave infrared cameras for nighttime stereo ranging, and supporting electronics to synchronize captured imagery. Narrow-baseline stereo provided improved range data density in cluttered terrain, while wide-baseline stereo provided more accurate ranging for operation at higher speeds in relatively open areas. The passive perception system processed stereo images and output terrain maps containing elevation, terrain type, and detected hazards over a local area network. A novel software architecture was designed and implemented to distribute the data processing on a 533 MHz quad 7410 PowerPC single board computer under the VxWorks real-time operating system. This architecture, which is general enough to operate on N processors, has been subsequently tested on Pentium-based processors under Windows and Linux, and a Sparc-based processor under Unix. The passive perception system was operated during FY04 PerceptOR program evaluations at Fort A. P. Hill, Virginia, and Yuma Proving Ground, Arizona. This paper discusses the Team Raptor passive perception system hardware and software design, implementation, and performance, and describes a road map to faster and improved passive perception.

  10. Anthropometric body measurements based on multi-view stereo image reconstruction.

    PubMed

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home-based imaging system capable of conducting anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.

  11. Anthropometric Body Measurements Based on Multi-View Stereo Image Reconstruction*

    PubMed Central

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home-based imaging system capable of conducting automatic anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system. PMID:24109700

  12. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  13. Derivation of an optimal directivity pattern for sweet spot widening in stereo sound reproduction

    NASA Astrophysics Data System (ADS)

    Ródenas, Josep A.; Aarts, Ronald M.; Janssen, A. J. E. M.

    2003-01-01

    In this paper the correction of the degradation of the stereophonic illusion during sound reproduction due to off-center listening is investigated. The main idea is that the directivity pattern of a loudspeaker array should have a well-defined shape such that a good stereo reproduction is achieved in a large listening area. Therefore, a mathematical description to derive an optimal directivity pattern l(opt) that achieves sweet spot widening in a large listening area for stereophonic sound applications is described. This optimal directivity pattern is based on parametrized time/intensity trading data coming from psycho-acoustic experiments within a wide listening area. After the study, the required digital FIR filters are determined by means of a least-squares optimization method for a given stereo base setup (two pairs of drivers for the loudspeaker arrays and 2.5-m distance between loudspeakers), which radiate sound in a broad range of listening positions in accordance with the derived l(opt). Informal listening tests have shown that the l(opt) worked as predicted by the theoretical simulations. They also demonstrated the correct central sound localization for speech and music for a number of listening positions. This application is referred to as "Position-Independent (PI) stereo."
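
    The least-squares FIR determination mentioned above can be illustrated generically: sample a desired (here zero-phase) response on a frequency grid and solve a linear least-squares problem for the filter coefficients. The target response below is a hypothetical lowpass, not the paper's directivity data.

```python
import numpy as np

# Zero-phase FIR via least squares: H(w) = sum_n c[n] * cos(n*w);
# solve min || A c - d || on a dense frequency grid for a desired
# response d (here a hypothetical lowpass with a linear transition).
w = np.linspace(0, np.pi, 256)
d = np.clip((1.6 - w) / 0.8, 0.0, 1.0)         # 1 below w=0.8, 0 above w=1.6
A = np.cos(np.outer(w, np.arange(21)))          # 21 cosine basis terms
c, *_ = np.linalg.lstsq(A, d, rcond=None)
response = A @ c
mse = np.mean((response - d) ** 2)              # least-squares fit error
```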

  14. A stereo-vision hazard-detection algorithm to increase planetary lander autonomy

    NASA Astrophysics Data System (ADS)

    Woicke, Svenja; Mooij, Erwin

    2016-05-01

    For future landings on any celestial body, increasing the lander autonomy as well as decreasing risk are primary objectives. Both risk reduction and an increase in autonomy can be achieved by including hazard detection and avoidance in the guidance, navigation, and control loop. One of the main challenges in hazard detection and avoidance is the reconstruction of accurate elevation models, as well as slope and roughness maps. Multiple methods for acquiring the inputs for hazard maps are available. The main distinction can be made between active and passive methods. Passive methods (cameras) have budgetary advantages compared to active sensors (radar, light detection and ranging). However, it is necessary to prove that these methods deliver sufficiently good maps. Therefore, this paper discusses hazard detection using stereo vision. To facilitate a successful landing, not more than 1% wrong detections (hazards that are not identified) are allowed. Based on a sensitivity analysis it was found that using a stereo set-up with a baseline of ≤ 2 m is feasible at altitudes of ≤ 200 m, with false positives of less than 1%. It was thus shown that stereo-based hazard detection is an effective means to decrease the landing risk and increase the lander autonomy. In conclusion, the proposed algorithm is a promising candidate for future landers.
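
    The slope and roughness maps that feed hazard detection can be derived from a reconstructed elevation model in a few lines; a minimal sketch (window size and test terrain are illustrative, not the paper's values):

```python
import numpy as np

def slope_and_roughness(dem, cell=1.0, win=3):
    """Hazard-map inputs from an elevation model: slope from the local
    gradient, roughness as the elevation std-dev in a sliding window."""
    dz_dy, dz_dx = np.gradient(dem, cell)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    r = win // 2
    rough = np.zeros_like(dem, dtype=float)
    h, w = dem.shape
    for y in range(h):
        for x in range(w):
            patch = dem[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            rough[y, x] = patch.std()
    return slope_deg, rough

# A linear ramp rising 1 m per 1 m cell: slope is 45 degrees everywhere
dem = np.tile(np.arange(8, dtype=float), (8, 1))
slope, rough = slope_and_roughness(dem)
```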

  15. A UAV-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (Turkey)

    NASA Astrophysics Data System (ADS)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but - when an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions - two single aerial images do not always meet the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g., the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, the accuracy of the DTM, however, directly depends on the UAV flight altitude.
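
    The closing remark reflects standard parallax error propagation: the expected height error scales with the inverse base-height ratio, dh = (H/B) * (H/f) * dp. A minimal sketch with hypothetical camera parameters:

```python
def height_error(flight_height_m, base_m, focal_mm, pixel_um, match_err_px=0.5):
    """Expected DTM height error from the stereo base-height ratio:
    dh = (H / B) * (H / f) * parallax matching error (in metres)."""
    focal_m = focal_mm * 1e-3
    parallax_err_m = match_err_px * pixel_um * 1e-6
    return (flight_height_m / base_m) * (flight_height_m / focal_m) * parallax_err_m

# Halving the baseline doubles the height error at the same altitude
e_wide = height_error(50.0, base_m=0.4, focal_mm=8.0, pixel_um=4.0)
e_narrow = height_error(50.0, base_m=0.2, focal_mm=8.0, pixel_um=4.0)
```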

  16. Derivation of an optimal directivity pattern for sweet spot widening in stereo sound reproduction.

    PubMed

    Ródenas, Josep A; Aarts, Ronald M; Janssen, A J E M

    2003-01-01

    In this paper the correction of the degradation of the stereophonic illusion during sound reproduction due to off-center listening is investigated. The main idea is that the directivity pattern of a loudspeaker array should have a well-defined shape such that a good stereo reproduction is achieved in a large listening area. Therefore, a mathematical description to derive an optimal directivity pattern l(opt) that achieves sweet spot widening in a large listening area for stereophonic sound applications is described. This optimal directivity pattern is based on parametrized time/intensity trading data coming from psycho-acoustic experiments within a wide listening area. After the study, the required digital FIR filters are determined by means of a least-squares optimization method for a given stereo base setup (two pairs of drivers for the loudspeaker arrays and 2.5-m distance between loudspeakers), which radiate sound in a broad range of listening positions in accordance with the derived l(opt). Informal listening tests have shown that the l(opt) worked as predicted by the theoretical simulations. They also demonstrated the correct central sound localization for speech and music for a number of listening positions. This application is referred to as "Position-Independent (PI) stereo."

  17. Precision 3d Surface Reconstruction from Lro Nac Images Using Semi-Global Matching with Coupled Epipolar Rectification

    NASA Astrophysics Data System (ADS)

    Hu, H.; Wu, B.

    2017-07-01

    The Narrow-Angle Camera (NAC) on board the Lunar Reconnaissance Orbiter (LRO) comprises a pair of closely attached high-resolution push-broom sensors designed to improve swath coverage. However, the two image sensors do not share the same lenses and cannot be modelled geometrically using a single physical model. Thus, previous work on dense matching of stereo pairs of NAC images generally created two to four stereo models, each with an irregular, overlapping region of varying size. Semi-Global Matching (SGM) is a well-known dense matching method that has been widely used for image-based 3D surface reconstruction. SGM is a global matching algorithm that relies on global inference over a larger context, rather than on individual pixels, to establish stable correspondences. The stereo configuration of LRO NAC images causes severe problems for matching methods such as SGM that emphasize a global matching strategy. Aiming to use SGM for matching LRO NAC stereo pairs for precision 3D surface reconstruction, this paper presents a coupled epipolar rectification method for LRO NAC stereo images that merges the image pair in disparity space so that only one stereo model is estimated. For a stereo pair (four NAC images), the method starts with boresight calibration, by finding correspondences in the small overlapping strip between each pair of NAC images and bundle-adjusting the stereo pair, in order to remove the vertical disparities. Then, the dominant direction of the images is estimated by projecting the center of the coverage area onto the reference image and iteratively back-projecting it onto the bounding-box plane determined by the image orientation parameters. The dominant direction determines an affine model by which the pair of NAC images is warped into object space at a given ground resolution; at the same time, a mask is produced indicating the owner of each pixel. 
SGM is then used to generate a disparity map for the stereo pair, each correspondence is transformed back to its owner image, and 3D points are derived through photogrammetric space intersection. Experimental results reveal that the proposed method is able to reduce the gaps and inconsistencies caused by the inaccurate boresight offsets between the two NAC cameras and the irregular overlapping regions, and finally generates precise and consistent 3D surface models from the NAC stereo images automatically.
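    The core of the SGM step referred to above is the aggregation of a per-pixel matching-cost volume along scanlines, with a small penalty P1 for one-level disparity changes and a larger penalty P2 for bigger jumps. A minimal single-direction (left-to-right) sketch in NumPy, assuming a precomputed cost volume of shape (height, width, disparities); a full SGM sums such aggregations over several directions:

```python
import numpy as np

def sgm_aggregate_lr(cost, P1=1.0, P2=8.0):
    """Aggregate a cost volume (H, W, D) along the left-to-right scanline
    direction with SGM's truncated smoothness penalties."""
    H, W, D = cost.shape
    L = np.empty_like(cost, dtype=float)
    L[:, 0] = cost[:, 0]
    for x in range(1, W):
        prev = L[:, x - 1]                                   # (H, D)
        best_prev = prev.min(axis=1, keepdims=True)
        same = prev                                          # disparity unchanged
        plus = np.pad(prev[:, :-1], ((0, 0), (1, 0)),
                      constant_values=np.inf) + P1           # disparity d-1 -> d
        minus = np.pad(prev[:, 1:], ((0, 0), (0, 1)),
                       constant_values=np.inf) + P1          # disparity d+1 -> d
        jump = best_prev + P2                                # large disparity change
        L[:, x] = cost[:, x] + np.minimum(np.minimum(same, plus),
                                          np.minimum(minus, jump)) - best_prev
    return L

# Winner-takes-all after aggregation gives the disparity map:
#   disp = sgm_aggregate_lr(cost_volume).argmin(axis=2)
```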

  18. Developing stereo image based robot control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suprijadi,; Pambudi, I. R.; Woran, M.

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have advanced rapidly with increasing hardware and microprocessor performance, and many fields of science and technology have used these methods, especially medicine and instrumentation. New stereovision techniques for producing 3-dimensional images or movies are very interesting, but they have found few applications in control systems. A stereo image carries pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled-robot control system using stereovision. The results show the robot moving automatically based on stereovision captures.

  19. KSC-06pd2389

    NASA Image and Video Library

    2006-10-25

    KENNEDY SPACE CENTER, FLA. - The mobile service tower (right) begins to roll away from the STEREO spacecraft aboard the Delta II launch vehicle in preparation for launch. Liftoff is scheduled in a window between 8:38 and 8:53 p.m. on Oct. 25. STEREO (Solar Terrestrial Relations Observatory) is a two-year mission using two nearly identical observatories, one ahead of Earth in its orbit and the other trailing behind. The duo will provide 3-D measurements of the sun and its flow of energy, enabling scientists to study the nature of coronal mass ejections and why they happen. The ejections are a major source of the magnetic disruptions on Earth and are a key component of space weather. The disruptions can greatly affect satellite operations, communications, power systems, humans in space and global climate. Designed and built by the Johns Hopkins University Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Kim Shiflett

  20. KSC-06pd2388

    NASA Image and Video Library

    2006-10-25

    KENNEDY SPACE CENTER, FLA. - The mobile service tower begins to roll away from the STEREO spacecraft aboard the Delta II launch vehicle in preparation for launch. Liftoff is scheduled in a window between 8:38 and 8:53 p.m. on Oct. 25. STEREO (Solar Terrestrial Relations Observatory) is a two-year mission using two nearly identical observatories, one ahead of Earth in its orbit and the other trailing behind. The duo will provide 3-D measurements of the sun and its flow of energy, enabling scientists to study the nature of coronal mass ejections and why they happen. The ejections are a major source of the magnetic disruptions on Earth and are a key component of space weather. The disruptions can greatly affect satellite operations, communications, power systems, humans in space and global climate. Designed and built by the Johns Hopkins University Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Kim Shiflett

  1. KSC-06pd2390

    NASA Image and Video Library

    2006-10-25

    KENNEDY SPACE CENTER, FLA. - The mobile service tower (left) rolls away from the STEREO spacecraft aboard the Delta II launch vehicle in preparation for launch. Liftoff is scheduled in a window between 8:38 and 8:53 p.m. on Oct. 25. STEREO (Solar Terrestrial Relations Observatory) is a two-year mission using two nearly identical observatories, one ahead of Earth in its orbit and the other trailing behind. The duo will provide 3-D measurements of the sun and its flow of energy, enabling scientists to study the nature of coronal mass ejections and why they happen. The ejections are a major source of the magnetic disruptions on Earth and are a key component of space weather. The disruptions can greatly affect satellite operations, communications, power systems, humans in space and global climate. Designed and built by the Johns Hopkins University Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Kim Shiflett

  2. KSC-06pd2394

    NASA Image and Video Library

    2006-10-25

    KENNEDY SPACE CENTER, FLA. - The Delta II launch vehicle carrying the STEREO spacecraft hurtles through the smoke and steam after liftoff from Launch Pad 17-B at Cape Canaveral Air Force Station. Liftoff was at 8:52 p.m. EDT. STEREO (Solar Terrestrial Relations Observatory) is a two-year mission using two nearly identical observatories, one ahead of Earth in its orbit and the other trailing behind. The duo will provide 3-D measurements of the sun and its flow of energy, enabling scientists to study the nature of coronal mass ejections and why they happen. The ejections are a major source of the magnetic disruptions on Earth and are a key component of space weather. The disruptions can greatly affect satellite operations, communications, power systems, humans in space and global climate. Designed and built by the Johns Hopkins University Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results.

  3. KSC-06pd2401

    NASA Image and Video Library

    2006-10-25

    KENNEDY SPACE CENTER, FLA. - The Delta II rocket carrying the STEREO spacecraft on top streaks through the smoke as it climbs to orbit. Liftoff from Launch Pad 17-B at Cape Canaveral Air Force Station was at 8:52 p.m. EDT. STEREO (Solar Terrestrial Relations Observatory) is a two-year mission using two nearly identical observatories, one ahead of Earth in its orbit and the other trailing behind. The duo will provide 3-D measurements of the sun and its flow of energy, enabling scientists to study the nature of coronal mass ejections and why they happen. The ejections are a major source of the magnetic disruptions on Earth and are a key component of space weather. The disruptions can greatly affect satellite operations, communications, power systems, humans in space and global climate. Designed and built by the Johns Hopkins University Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results.

  4. Generalized parallel-perspective stereo mosaics from airborne video.

    PubMed

    Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M

    2004-02-01

    In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.

  5. Relationship Between the Expansion Speed and Radial Speed of CMEs Confirmed Using Quadrature Observations from SOHO and STEREO

    NASA Technical Reports Server (NTRS)

    Gopalswamy, Nat; Makela, Pertti; Yashiro, Seiji

    2011-01-01

    It is difficult to measure the true speed of Earth-directed CMEs from a coronagraph along the Sun-Earth line because of the occulting disk. However, the expansion speed (the speed with which the CME appears to spread in the sky plane) can be measured by such a coronagraph. To convert the expansion speed to the radial speed (which is important for space weather applications), one can use an empirical relationship between the two that assumes an average width for all CMEs. If we have width information from quadrature observations, we can confirm the relationship between expansion and radial speeds derived by Gopalswamy et al. (2009, CEAB, 33, 115). The STEREO spacecraft were in quadrature with SOHO (STEREO-A ahead of Earth by 87 degrees and STEREO-B 94 degrees behind Earth) on 2011 February 15, when a fast Earth-directed CME occurred. The CME was observed as a halo by the Large-Angle and Spectrometric Coronagraph (LASCO) on board SOHO. The sky-plane speed was measured by SOHO/LASCO as the expansion speed, while the radial speed was measured by STEREO-A and STEREO-B. In addition, STEREO-A and STEREO-B images yielded the width of the CME, which is unknown from the Earth view. From the SOHO and STEREO measurements, we confirm the relationship between the expansion speed (Vexp) and radial speed (Vrad) derived previously from geometrical considerations (Gopalswamy et al. 2009): Vrad = 1/2 (1 + cot w) Vexp, where w is the half width of the CME. From STEREO-B images of the CME, we found that the CME had a full width of 75 degrees, so w = 37.5 degrees. This gives the relation Vrad = 1.15 Vexp. From LASCO observations, we measured Vexp = 897 km/s, giving a radial speed of 1033 km/s. Direct measurements of the radial speed from STEREO give 945 km/s (STEREO-A) and 1057 km/s (STEREO-B). These values differ by only 8.5% and 2.3% (for STEREO-A and STEREO-B, respectively) from the computed value.
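    The conversion stated in this abstract is a one-line computation. The sketch below simply evaluates Vrad = 1/2 (1 + cot w) Vexp with the abstract's numbers (the function name is ours):

```python
import math

def radial_speed(v_exp, half_width_deg):
    """V_rad = 0.5 * (1 + cot(w)) * V_exp (Gopalswamy et al. 2009)."""
    w = math.radians(half_width_deg)
    return 0.5 * (1.0 + 1.0 / math.tan(w)) * v_exp

factor = radial_speed(1.0, 37.5)    # conversion factor for w = 37.5 deg, ~1.15
v_rad = radial_speed(897.0, 37.5)   # ~1033 km/s from Vexp = 897 km/s
```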

  6. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" is used to show a virtual environment of the Moon's surface in an immersive setting. Utilizing stereo 360-degree imagery from the panoramic camera of the Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, a stereo 360-degree panorama stitched from 112 images is projected onto the inside surface of a sphere, according to the panorama orientation coordinates and camera parameters, to build the virtual scene. Stars can be seen from the Moon at any time, so we render the sun, planets and stars, according to the time and the rover's location based on the Hipparcos catalogue, as the background on the sphere. Immersed in the stereo virtual environment created by this image-based rendering technique, the operator can zoom and pan to interact with the virtual Moon scene and mark interesting objects. The hardware of the immersive virtual Moon system is made up of four high-lumen projectors and a huge curved screen 31 meters long and 5.5 meters high. This system, which takes all available panoramic camera data and uses it to create an immersive environment in which the operator can interact with the scene and mark interesting objects, contributed heavily to the establishment of science mission goals in the Chang'E-3 mission. After the Chang'E-3 mission, the lab housing this system will be open to the public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be shown to the public on the huge screen in the lab. Based on lunar exploration data, we will make more immersive virtual Moon scenes and animations to help the public understand more about the Moon in the future.

  7. Stereo Correspondence Using Moment Invariants

    NASA Astrophysics Data System (ADS)

    Premaratne, Prashan; Safaei, Farzad

    Autonomous navigation is seen as a vital tool in harnessing the enormous potential of Unmanned Aerial Vehicles (UAVs) and small robotic vehicles for both military and civilian use. Even though laser-based scanning solutions for Simultaneous Localization And Mapping (SLAM) are considered the most reliable for depth estimation, they are not feasible for use on UAVs and small land-based vehicles due to their physical size and weight. Stereovision is considered the best approach for any autonomous navigation solution, as stereo rigs are lightweight and inexpensive. However, stereoscopy, which estimates depth information from pairs of stereo images, can still be computationally expensive and unreliable. This is mainly because some of the algorithms used in successful stereovision solutions have computational requirements that cannot be met by small robotic vehicles. In our research, we implement a feature-based stereovision solution using moment invariants as a metric to find corresponding regions in image pairs, which reduces the computational complexity and improves the accuracy of the disparity measures; this will be significant for use in UAVs and in small robotic vehicles.
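    Although the abstract does not specify which moment invariants are used, the classic choice for this kind of region matching is the Hu set, built from normalized central moments. A minimal sketch of the first two Hu invariants (the function and the test values are our illustration, not the authors' implementation):

```python
import numpy as np

def hu_invariants(img):
    """First two Hu moment invariants of a grayscale patch, built from
    normalized central moments; invariant to translation and scale."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):                            # central moment
        return ((x - xc) ** p * (y - yc) ** q * img).sum()
    def eta(p, q):                           # normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

Because the invariants are computed relative to the region centroid, the same region seen at a shifted position in the other image of a stereo pair yields (near-)identical values, which is what makes them usable as a correspondence metric.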

  8. Wide Swath Stereo Mapping from Gaofen-1 Wide-Field-View (WFV) Images Using Calibration

    PubMed Central

    Chen, Shoubin; Liu, Jingbin; Huang, Wenchao

    2018-01-01

    The development of Earth observation systems has changed the nature of survey and mapping products, as well as the methods for updating maps. Among optical satellite mapping methods, the multiline array stereo and agile stereo modes are the most common for acquiring stereo images. However, differences in temporal resolution and spatial coverage limit their application. To address this issue, our study takes advantage of the wide spatial coverage and high revisit frequency of wide swath images and aims at verifying the feasibility of stereo mapping in the wide swath stereo mode and at reaching a reliable stereo accuracy level using calibration. In contrast with classic stereo modes, the wide swath stereo mode is characterized by both wide spatial coverage and high temporal resolution and is capable of obtaining a wide range of stereo images over a short period. In this study, Gaofen-1 (GF-1) wide-field-view (WFV) images, with total imaging widths of 800 km, multispectral resolutions of 16 m and revisit periods of four days, are used for wide swath stereo mapping. To acquire a high-accuracy digital surface model (DSM), the nonlinear system distortion in the GF-1 WFV images is detected and compensated for in advance. With proper calibration, the elevation accuracy of the wide swath stereo mode of the GF-1 WFV images can be improved from 103 m to 30 m for a DSM, meeting the demands of 1:250,000-scale mapping and rapid topographic map updates and showing improved efficacy for satellite imaging. PMID:29494540

  9. 3D structure and kinematics characteristics of EUV wave front

    NASA Astrophysics Data System (ADS)

    Podladchikova, T.; Veronig, A.; Dissauer, K.

    2017-12-01

    We present 3D reconstructions of EUV wave fronts using multi-point observations from the STEREO-A and STEREO-B spacecraft. EUV waves are large-scale disturbances in the solar corona that are initiated by coronal mass ejections and are thought to be large-amplitude fast-mode MHD waves or shocks. The aim of our study is to investigate the dynamic evolution of the 3D structure and wave kinematics of EUV wave fronts. We study the events of December 7, 2007 and February 13, 2009 using data from the STEREO/EUVI-A and EUVI-B instruments in the 195 Å filter. The proposed approach is based on a complementary combination of the epipolar geometry of stereo vision and perturbation profiles. We propose two different solutions to the problem of matching the wave crest between images from the two spacecraft. One solution is suitable for the early and maximum stages of event development, when STEREO-A and STEREO-B see different facets of the wave and the wave crest is clearly outlined. The second is applicable also at the later stage of event development, when the wave front becomes diffuse and is only faintly visible. This approach allows us to identify automatically the segments of the diffuse front on pairs of STEREO-A and STEREO-B images and to solve the problem of identifying and matching the objects. We find that the EUV wave observed on December 7, 2007 starts at a height of 30-50 Mm, sharply increases to a height of 100-120 Mm about 10 min later, and decreases to 10-20 Mm in the decay phase. Including the 3D evolution of the EUV wave front allowed us to correct the wave kinematics for projection and changing-height effects. The velocity of the wave crest (V=215-266 km/s) is larger than that of the trailing part of the wave pulse (V=103-163 km/s). For the February 13, 2009 event, the upward movement of the wave crest shows an increase from 20 to 100 Mm over a period of 30 min. The velocity of the wave crest reaches values of 208-211 km/s.

  10. An Evaluation of Stereoscopic Digital Mammography for Earlier Detection of Breast Cancer and Reduced Rate of Recall

    DTIC Science & Technology

    2004-08-01

    on a pair of high-resolution LCD medical monitors. The change to the new workstation has required us to rewrite the software... In the original CRT-based system, the two images forming a stereo pair were displayed alternately on the same CRT face, at a high frame rate (120 Hz)...then, separately, receive the stereo screening exam on the research GE digital mammography unit.

  11. VERDEX: A virtual environment demonstrator for remote driving applications

    NASA Technical Reports Server (NTRS)

    Stone, Robert J.

    1991-01-01

    One of the key areas of the National Advanced Robotics Centre's enabling technologies research program is that of the human system interface, phase 1 of which started in July 1989 and is currently addressing the potential of virtual environments to permit intuitive and natural interactions between a human operator and a remote robotic vehicle. The aim of the first 12 months of this program (to September, 1990) is to develop a virtual human-interface demonstrator for use later as a test bed for human factors experimentation. This presentation will describe the current state of development of the test bed, and will outline some human factors issues and problems for more general discussion. In brief, the virtual telepresence system for remote driving has been designed to take the following form. The human operator will be provided with a helmet-mounted stereo display assembly, facilities for speech recognition and synthesis (using the Marconi Macrospeak system), and a VPL DataGlove Model 2 unit. The vehicle to be used for the purposes of remote driving is a Cybermotion Navmaster K2A system, which will be equipped with a stereo camera and microphone pair, mounted on a motorized high-speed pan-and-tilt head incorporating a closed-loop laser ranging sensor for camera convergence control (currently under contractual development). It will be possible to relay information to and from the vehicle and sensory system via an umbilical or RF link. The aim is to develop an interactive audio-visual display system capable of presenting combined stereo TV pictures and virtual graphics windows, the latter featuring control representations appropriate for vehicle driving and interaction using a graphical 'hand,' slaved to the flex and tracking sensors of the DataGlove and an additional helmet-mounted Polhemus IsoTrack sensor. 
Developments planned for the virtual environment test bed include transfer of operator control between remote driving and remote manipulation, dexterous end effector integration, virtual force and tactile sensing (also the focus of a current ARRL contract, initially employing a 14-pneumatic bladder glove attachment), and sensor-driven world modeling for total virtual environment generation and operator-assistance in remote scene interrogation.

  12. [Construction of Corynebacterium crenatum AS 1.542 δ argR and analysis of transcriptional levels of the related genes of arginine biosynthetic pathway].

    PubMed

    Chen, Xuelan; Tang, Li; Jiao, Haitao; Xu, Feng; Xiong, Yonghua

    2013-01-04

    ArgR, encoded by the argR gene of Corynebacterium crenatum AS 1.542, acts as a negative regulator of the arginine biosynthetic pathway. However, the effect of argR on the transcriptional levels of the related biosynthetic genes has not been reported. Here, we used marker-less knockout technology to construct a deletion mutant of the argR gene, C. crenatum AS 1.542 Delta argR, and compared the transcriptional levels of the arginine biosynthetic genes between the mutant strain and the wild-type strain using real-time fluorescence quantitative PCR. C. crenatum AS 1.542 Delta argR was successfully obtained, and the transcriptional levels of the arginine biosynthetic genes in this mutant increased significantly, by an average of about 162.1-fold. The arginine biosynthetic genes in C. crenatum are thus clearly controlled by the negative regulator ArgR. However, deletion of this regulator did not result in a clear change in arginine production in the bacteria.

  13. A Marker-less Monitoring System for Movement Analysis of Infants Using Video Images

    NASA Astrophysics Data System (ADS)

    Shima, Keisuke; Osawa, Yuko; Bu, Nan; Tsuji, Tokuo; Tsuji, Toshio; Ishii, Idaku; Matsuda, Hiroshi; Orito, Kensuke; Ikeda, Tomoaki; Noda, Shunichi

    This paper proposes a marker-less motion measurement and analysis system for infants. The system calculates eight types of evaluation indices related to an infant's movement, such as “amount of body motion” and “activity of body”, from binary images that are extracted from video images using the background difference and frame difference. Thus, medical doctors can intuitively understand the movements of infants without long-term observation, which may help to support their diagnoses and detect disabilities and diseases at an early stage. The distinctive feature of this system is that the movements of infants can be measured without any motion-capture markers, so it is expected that the natural and inherent tendencies of infants can be analyzed and evaluated. In this paper, the evaluation indices and movement features of full-term infants (FTIs) and low birth weight infants (LBWIs) are compared using the developed prototype. We found that the amount of body motion and the symmetry of upper and lower body movements of LBWIs were lower than those of FTIs. The difference between the movements of FTIs and LBWIs can be evaluated using the proposed system.
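    The mask-extraction step described above (background difference plus frame difference on grayscale frames) can be sketched as follows; the threshold value and the particular "amount of body motion" index computed here are illustrative assumptions, not the authors' exact definitions:

```python
import numpy as np

def motion_indices(frames, bg, thresh=30):
    """Per-frame motion index from binary masks obtained by background
    difference and frame difference (grayscale uint8 frames)."""
    amounts = []
    prev = frames[0]
    for f in frames[1:]:
        body = np.abs(f.astype(int) - bg.astype(int)) > thresh     # background difference
        moving = np.abs(f.astype(int) - prev.astype(int)) > thresh  # frame difference
        # fraction of body pixels that moved since the previous frame
        amounts.append((body & moving).sum() / max(body.sum(), 1))
        prev = f
    return amounts
```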

  14. STEREO Education and Public Outreach Efforts

    NASA Technical Reports Server (NTRS)

    Kucera, Therese

    2007-01-01

    STEREO has had a big year, with its launch and the start of data collection. STEREO has mostly focused on informal educational venues, most notably with STEREO 3D images made available to museums through the NASA Museum Alliance. Other activities have involved making STEREO imagery available through the AMNH network and Viewspace, a continued partnership with the Christa McAuliffe Planetarium, data sonification projects, pre-service teacher training, and learning activity development.

  15. Accuracy aspects of stereo side-looking radar. [analysis of its visual perception and binocular vision

    NASA Technical Reports Server (NTRS)

    Leberl, F. W.

    1979-01-01

    The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.

  16. Probabilistic fusion of stereo with color and contrast for bilayer segmentation.

    PubMed

    Kolmogorov, Vladimir; Criminisi, Antonio; Blake, Andrew; Cross, Geoffrey; Rother, Carsten

    2006-09-01

    This paper describes models and algorithms for the real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from color/contrast or from stereo alone is known to be error-prone. Here, color, contrast, and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, Layered Dynamic Programming (LDP), solves stereo in an extended six-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive color model that is learned on-the-fly and stereo disparities are obtained by dynamic programming. The second algorithm, Layered Graph Cut (LGC), does not directly solve stereo. Instead, the stereo match likelihood is marginalized over disparities to evaluate foreground and background hypotheses and then fused with a contrast-sensitive color model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than either stereo or color/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output.

  17. Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments on monocular distance learning

    NASA Astrophysics Data System (ADS)

    van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario

    2017-11-01

    Although machine learning holds an enormous promise for autonomous space robots, it is currently not employed because of the inherent uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
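    The SSL pattern described above, stereo-derived depths serving as trusted labels for a monocular estimator, can be caricatured with any supervised regressor. The toy sketch below uses ridge regression on synthetic "monocular features"; it illustrates only the self-supervision pattern, not the actual on-board learner:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the SSL scheme: per-frame monocular features are
# regressed onto stereo-derived average depths, which act as trusted
# self-supervised labels (no hand labeling involved).
n_frames, n_feat = 200, 8
X = rng.normal(size=(n_frames, n_feat))               # monocular features
w_true = rng.normal(size=n_feat)
stereo_depth = X @ w_true + 0.1 * rng.normal(size=n_frames)  # stereo labels

# Ridge regression: w = (X^T X + lam * I)^(-1) X^T y
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ stereo_depth)
pred = X @ w           # monocular depth estimates, usable if one camera fails
```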

  18. Stereo matching algorithm based on double components model

    NASA Astrophysics Data System (ADS)

    Zhou, Xiao; Ou, Kejun; Zhao, Jianxin; Mou, Xingang

    2018-03-01

    Tiny wires are a great threat to the safety of UAV flight, because they occupy only a few pixels and are isolated far from the background, while most existing stereo matching methods either require a support region of a certain size to improve robustness or assume a depth dependence between neighboring pixels to meet the requirements of global or semi-global optimization. As a result, there will be false alarms or even failures when images contain tiny wires. A new stereo matching algorithm based on a double-components model is proposed in this paper. According to texture type, the input image is decomposed into two independent component images: one contains only the sparse wire texture, and the other contains all remaining parts. Different matching schemes are adopted for each pair of component images. Experiments show that the algorithm can effectively compute the depth image of the complex scenes encountered by a patrol UAV and can detect tiny wires as well as large objects. Compared with current mainstream methods, it has obvious advantages.

  19. Comparative analysis of respiratory motion tracking using Microsoft Kinect v2 sensor.

    PubMed

    Silverstein, Evan; Snyder, Michael

    2018-05-01

    To present and evaluate a straightforward implementation of a markerless, respiratory motion-tracking process utilizing the Kinect v2 camera as a gating tool during 4DCT or during radiotherapy treatments. Utilizing the depth sensor on the Kinect as well as author-written C# code, respiratory motion of a subject was tracked by recording depth values obtained at user-selected points on the subject, with each point representing one pixel on the depth image. As a patient breathes, specific anatomical points on the chest/abdomen will move slightly within the depth image across pixels. By tracking how depth values change for a specific pixel, instead of how the anatomical point moves throughout the image, a respiratory trace can be obtained based on the changing depth values of the selected pixel. Tracking these values was implemented via a markerless setup. Varian's RPM system and the Anzai belt system were used in tandem with the Kinect to compare the respiratory traces obtained by each, using two different subjects. Analysis of the depth information from the Kinect for purposes of phase- and amplitude-based binning correlated well with the RPM and Anzai systems. Interquartile range (IQR) values were obtained comparing the times correlated with specific amplitude and phase percentages against each product. The IQR time spans indicated the Kinect would measure specific percentage values within 0.077 s for Subject 1 and 0.164 s for Subject 2 when compared to values obtained with RPM or Anzai. For 4DCT scans, these times correspond to less than 1 mm of couch movement and would create an offset of half an acquired slice. By tracking depth values of user-selected pixels within the depth image, rather than tracking specific anatomical locations, respiratory motion can be tracked and visualized utilizing the Kinect, with results comparable to those of the Varian RPM and Anzai belt. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
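
    The pixel-wise tracking idea lends itself to a very small sketch: fix a set of (row, column) pixels in the depth image and read their depth values in every frame. The helper below is a hypothetical NumPy reimplementation for illustration; the authors' tool is written in C# against the Kinect SDK.

```python
import numpy as np

def respiratory_trace(depth_frames, pixels):
    """Track the depth value of fixed (row, col) pixels across frames and
    average them into a single respiratory trace. A sketch of the
    pixel-wise tracking idea; Kinect acquisition is not reproduced."""
    trace = []
    for frame in depth_frames:
        vals = [frame[r, c] for r, c in pixels]
        trace.append(float(np.mean(vals)))
    return np.array(trace)
```

    Gating then reduces to thresholding this trace in amplitude or phase.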

  20. Comparative morphometry of facial surface models obtained from a stereo vision system in a healthy population

    NASA Astrophysics Data System (ADS)

    López, Leticia; Gastélum, Alfonso; Chan, Yuk Hin; Delmas, Patrice; Escorcia, Lilia; Márquez, Jorge

    2014-11-01

    Our goal is to obtain three-dimensional measurements of craniofacial morphology in a healthy population, using standard landmarks established by a physical-anthropology specialist and picked from computer reconstructions of each subject's face. To this end, we designed a multi-stereo vision system that will be used to create a database of human face surfaces from a healthy population, for eventual applications in medicine, forensic sciences and anthropology. The acquisition process consists of obtaining depth maps from three points of view, each obtained from a calibrated pair of cameras. The depth maps are used to build a complete frontal triangular-surface representation of the subject's face. The triangular surface is used to locate the landmarks, and the measurements are analyzed with a MATLAB script. The classification of the subjects was done with the aid of a specialist anthropologist who defined specific subject indices according to the lengths, areas, ratios, etc., of the different structures and the relationships among facial features. We studied a healthy population, and the indices from this population will be used to obtain representative averages that will later help with the study and classification of possible pathologies.

  1. Phoenix Checks out its Work Area

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site] Click on image for animation

    This animation shows a mosaic of images of the workspace reachable by the scoop on the robotic arm of NASA's Phoenix Mars Lander, along with some measurements of rock sizes.

    Phoenix was able to determine the size of the rocks based on three-dimensional views from stereoscopic images taken by the lander's 7-foot mast camera, called the Surface Stereo Imager. The stereo pair of images enables depth perception, much the way a pair of human eyes enables people to gauge the distance to nearby objects.

    The rock measurements were made by a visualization tool known as Viz, developed at NASA's Ames Research Center. The shadow cast by the camera on the Martian surface appears somewhat disjointed because the camera took the images in the mosaic at different times of day.

    Scientists do not yet know the origin or composition of the flat, light-colored rocks on the surface in front of the lander.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  2. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the augmented reality marker is obtained by a registration procedure performed by the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid with the solid models of tumors or anatomical structures, as well as the part of the tool hidden inside the skull.

  3. Development of a stereoscopic three-dimensional drawing application

    NASA Astrophysics Data System (ADS)

    Carver, Donald E.; McAllister, David F.

    1991-08-01

    With recent advances in 3-D technology, computer users have the opportunity to work within a natural 3-D environment; a flat panel LCD computer display of this type, the DTI-100M made by Dimension Technologies, Inc., recently went on the market. In a joint venture between DTI and NCSU, an object-oriented 3-D drawing application, 3-D Draw, was developed to address some issues of human interface design for interactive stereo drawing applications. The focus of this paper is to determine some of the procedures a user would naturally expect to follow while working within a true 3-D environment. The paper discusses (1) the interface between the Macintosh II and DTI-100M during implementation of 3-D Draw, including stereo cursor development and presentation of current 2-D systems, with an additional 'depth' parameter, in the 3-D world, (2) problems in general for human interface into the 3-D environment, and (3) necessary functions and/or problems in developing future stereoscopic 3-D operating systems/tools.

  4. Combined DEM Extraction Method from StereoSAR and InSAR

    NASA Astrophysics Data System (ADS)

    Zhao, Z.; Zhang, J. X.; Duan, M. Y.; Huang, G. M.; Yang, S. C.

    2015-06-01

    A pair of SAR images acquired from different positions can be used to generate a digital elevation model (DEM). Two techniques exploit this characteristic: stereo SAR and interferometric SAR. Both recover the third dimension (topography) and, at the same time, identify the absolute position (geolocation) of pixels in the imaged area, thus allowing the generation of DEMs. In this paper, a combined StereoSAR and InSAR adjustment model is constructed, unifying DEM extraction from InSAR and StereoSAR into the same coordinate system and improving the three-dimensional positioning accuracy of the target. We assume that there are four images 1, 2, 3 and 4. One pair of SAR images, 1 and 2, meets the conditions required for InSAR processing, while the other pair, 3 and 4, forms a stereo image pair. The phase model is based on the rigorous InSAR imaging geometric model. The master image 1 and the slave image 2 are used in InSAR processing, but the slave image 2 is used only in constructing the model: its pixels are related to the corresponding pixels of the master image 1 through the image coregistration coefficients, from which the corresponding phase is calculated. The slave image itself is therefore not required in the construction of the phase model. In the Range-Doppler (RD) model, the range equation and the Doppler equation are functions of the target geolocation, and in the phase equation the phase is likewise a function of the target geolocation. We apply the combined adjustment model to the deviation of the target geolocation, so the target solution reduces to solving for three unknowns from seven equations. The model was tested for DEM extraction on spaceborne InSAR and StereoSAR data and compared with the InSAR and StereoSAR methods separately. The results showed that the model delivered better performance on the experimental imagery and can be used for DEM extraction applications.
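
    The final adjustment step, solving three unknown target coordinates from seven observation equations, is an ordinary overdetermined least-squares problem. The sketch below shows only that generic step; assembling the design matrix from the actual range, Doppler, and phase equations is not reproduced.

```python
import numpy as np

def solve_target_position(A, b):
    """Least-squares solution of an overdetermined linear(ized) system:
    seven observation equations (range/Doppler/phase from both image
    pairs) in the three target coordinates. A is 7x3, b has length 7.
    In a full adjustment this would be one iteration of a linearized
    solve around an approximate target position."""
    x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    return x
```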

  5. A search for Ganymede stereo images and 3D mapping opportunities

    NASA Astrophysics Data System (ADS)

    Zubarev, A.; Nadezhdina, I.; Brusnikin, E.; Giese, B.; Oberst, J.

    2017-10-01

    We used 126 Voyager-1 and -2 images as well as 87 Galileo images of Ganymede and searched for stereo images suitable for digital 3D stereo analysis. Specifically, we considered image resolutions, stereo angles, and matching illumination conditions of the respective stereo pairs. Lists of regions and local areas with stereo coverage are compiled. We present anaglyphs, and for selected areas not previously discussed we constructed Digital Elevation Models and associated visualizations. The terrain characteristics in the models are in agreement with our previous notion of Ganymede morphology, represented by families of lineaments and craters of various sizes and degradation stages. The identified areas of stereo coverage may serve as important reference targets for the Ganymede Laser Altimeter (GALA) experiment on the future JUICE (Jupiter Icy Moons Explorer) mission.

  6. Sensitivity Monitoring of the SECCHI COR1 Telescopes on STEREO

    NASA Astrophysics Data System (ADS)

    Thompson, William T.

    2018-03-01

    Measurements of bright stars passing through the fields of view of the inner coronagraphs (COR1) on board the Solar Terrestrial Relations Observatory (STEREO) are used to monitor changes in the radiometric calibration over the course of the mission. Annual decline rates are found to be 0.648 ± 0.066%/year for COR1-A on STEREO Ahead and 0.258 ± 0.060%/year for COR1-B on STEREO Behind. These rates are consistent with decline rates found for other space-based coronagraphs in similar radiation environments. The theorized cause for the decline in sensitivity is darkening of the lenses and other optical elements due to exposure to high-energy solar particles and photons, although other causes are also possible. The total decline in the COR1-B sensitivity when contact with Behind was lost on 1 October 2014 was 1.7%, while COR1-A was down by 4.4%. As of 1 November 2017, the COR1-A decline is estimated to be 6.4%. The SECCHI calibration routines will be updated to take these COR1 decline rates into account.
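
    A decline rate such as 0.648 ± 0.066%/year is, at its core, the slope of a straight-line fit to the star-based sensitivity measurements. A minimal sketch, assuming sensitivities normalized to 1 at mission start and ignoring the measurement uncertainties that the real analysis propagates:

```python
import numpy as np

def annual_decline_rate(years, sensitivity):
    """Fit a straight line to (time in years, relative sensitivity) and
    return the decline in percent per year. Assumes sensitivity is
    normalized to ~1 at the start of the mission."""
    slope, _ = np.polyfit(years, sensitivity, 1)
    return -slope * 100.0
```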

  7. Gamma/x-ray linear pushbroom stereo for 3D cargo inspection

    NASA Astrophysics Data System (ADS)

    Zhu, Zhigang; Hu, Yu-Chi

    2006-05-01

    For evaluating the contents of trucks, containers, cargo, and passenger vehicles by a non-intrusive gamma-ray or X-ray imaging system to determine the possible presence of contraband, three-dimensional (3D) measurements could provide more information than 2D measurements. In this paper, a linear pushbroom scanning model is built for such a commonly used gamma-ray or X-ray cargo inspection system. Accurate 3D measurements of the objects inside a cargo can be obtained by using two such scanning systems with different scanning angles to construct a pushbroom stereo system. A simple but robust calibration method is proposed to find the important parameters of the linear pushbroom sensors. Then, a fast and automated stereo matching algorithm based on free-form deformable registration is developed to obtain 3D measurements of the objects under inspection. A user interface is designed for 3D visualization of the objects of interest. Experimental results of sensor calibration, stereo matching, 3D measurement and visualization of a 3D cargo container and the objects inside are presented.

  8. The Eyephone: a head-mounted stereo display

    NASA Astrophysics Data System (ADS)

    Teitel, Michael A.

    1990-09-01

    Head-mounted stereo displays for virtual environments and computer simulations have been made since 1969. Most of the recent displays have been based on monochrome (black and white) liquid crystal display technology. Color LCD displays have generally not been used due to their lower resolution and color-triad structure. As the resolution of color LCD displays increases, we have begun to use color displays in our Eyephone. In this paper we describe four methods for minimizing the effect of the color triads in the magnified images of the LCD displays in the Eyephone stereo head-mounted display. We have settled on the use of a wavefront randomizer with a spatial-frequency enhancement overlay in order to blur the triads in the displays while keeping the perceived resolution of the display high.

  9. A multi-modal stereo microscope based on a spatial light modulator.

    PubMed

    Lee, M P; Gibson, G M; Bowman, R; Bernet, S; Ritsch-Marte, M; Phillips, D B; Padgett, M J

    2013-07-15

    Spatial Light Modulators (SLMs) can emulate the classic microscopy techniques, including differential interference (DIC) contrast and (spiral) phase contrast. Their programmability entails the benefit of flexibility or the option to multiplex images, for single-shot quantitative imaging or for simultaneous multi-plane imaging (depth-of-field multiplexing). We report the development of a microscope sharing many of the previously demonstrated capabilities, within a holographic implementation of a stereo microscope. Furthermore, we use the SLM to combine stereo microscopy with a refocusing filter and with a darkfield filter. The instrument is built around a custom inverted microscope and equipped with an SLM which gives various imaging modes laterally displaced on the same camera chip. In addition, there is a wide angle camera for visualisation of a larger region of the sample.

  10. Markerless gating for lung cancer radiotherapy based on machine learning techniques

    NASA Astrophysics Data System (ADS)

    Lin, Tong; Li, Ruijiang; Tang, Xiaoli; Dy, Jennifer G.; Jiang, Steve B.

    2009-03-01

    In lung cancer radiotherapy, radiation to a mobile target can be delivered by respiratory gating, for which we need to know whether the target is inside or outside a predefined gating window at any time point during the treatment. This can be achieved by tracking one or more fiducial markers implanted inside or near the target, either fluoroscopically or electromagnetically. However, the clinical implementation of marker tracking is limited for lung cancer radiotherapy mainly due to the risk of pneumothorax. Therefore, gating without implanted fiducial markers is a promising clinical direction. We have developed several template-matching methods for fluoroscopic markerless gating. Recently, we have modeled the gating problem as a binary pattern classification problem, in which principal component analysis (PCA) and a support vector machine (SVM) are combined to perform the classification task. Following the same framework, we investigated different combinations of dimensionality reduction techniques (PCA and four nonlinear manifold learning methods) and two machine learning classification methods: artificial neural networks (ANN) and SVM. Performance was evaluated on ten fluoroscopic image sequences of nine lung cancer patients. We found that among all combinations of dimensionality reduction techniques and classification methods, PCA combined with either ANN or SVM achieved better performance than the nonlinear manifold learning methods. ANN, when combined with PCA, achieves better performance than SVM in terms of classification accuracy and recall rate, although the target coverage is similar for the two classification methods. Furthermore, the running time for both ANN and SVM with PCA is within tolerance for real-time applications. Overall, ANN combined with PCA is a better candidate than the other combinations we investigated in this work for real-time gated radiotherapy.
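
    The PCA stage of this framework is easy to sketch with an SVD. As a stand-in for the SVM/ANN classifiers evaluated in the paper, the example below uses a deliberately simple nearest-centroid rule (an assumption, chosen only to keep the sketch self-contained and dependency-free):

```python
import numpy as np

def pca_fit(X, k):
    """PCA by SVD: return the data mean and the top-k principal
    directions (rows of Vt)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_transform(X, mu, components):
    """Project (centered) data onto the principal directions."""
    return (X - mu) @ components.T

def nearest_centroid_fit(Z, y):
    """Tiny stand-in classifier (the paper uses SVM/ANN):
    one centroid per class in the reduced space."""
    return {c: Z[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(Z, centroids):
    """Assign each sample the label of the closest class centroid."""
    labels = list(centroids)
    d = np.array([np.linalg.norm(Z - centroids[c], axis=1) for c in labels])
    return np.array(labels)[d.argmin(axis=0)]
```

    In the gating application, the two classes would be "target inside the gating window" and "target outside", with each fluoroscopic frame as one sample.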

  11. Dynamic 3D scanning as a markerless method to calculate multi-segment foot kinematics during stance phase: methodology and first application.

    PubMed

    Van den Herrewegen, Inge; Cuppens, Kris; Broeckx, Mario; Barisch-Fritz, Bettina; Vander Sloten, Jos; Leardini, Alberto; Peeraer, Louis

    2014-08-22

    Multi-segment foot kinematics have been analyzed by means of optical marker sets or inertial sensors, but never by markerless dynamic 3D scanning (D3DScanning). The use of D3DScans implies a radically different approach to the construction of the multi-segment foot model: the foot anatomy is identified via the surface shape instead of distinct landmark points. We propose a 4-segment foot model consisting of the shank (Sha), calcaneus (Cal), metatarsus (Met) and hallux (Hal). These segments are manually selected on a static scan. To track the segments in the dynamic scan, the segments of the static scan are matched to each frame of the dynamic scan using the iterative closest point (ICP) fitting algorithm. Joint rotations are calculated between Sha-Cal, Cal-Met, and Met-Hal. Due to the lower-quality scans at heel strike and toe off, the first and last 10% of the stance phase are excluded. The application of the method to 5 healthy subjects, 6 trials each, shows good repeatability (intra-subject standard deviations between 1° and 2.5°) for the Sha-Cal and Cal-Met joints, and inferior results for the Met-Hal joint (>3°). The repeatability seems to be subject-dependent. For validation, a qualitative comparison with joint kinematics from a corresponding established marker-based multi-segment foot model was made; this shows very consistent patterns of rotation. The ease of subject preparation, together with the effective and easy-to-interpret visual output, makes the present technique very attractive for functional analysis of the foot, enhancing usability in clinical practice. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
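
    The ICP matching step at the heart of this tracking can be sketched with NumPy: alternate nearest-neighbour pairing with a closed-form (Kabsch/SVD) rigid fit. This is a toy version on small point lists; the actual method matches dense surface scans and assumes a reasonable initial pose.

```python
import numpy as np

def rigid_fit(P, Q):
    """Best rotation R and translation t mapping paired points P onto Q
    (Kabsch / SVD solution)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=30):
    """Minimal ICP: alternate nearest-neighbour pairing with rigid
    fitting, then return the total src -> dst transform."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = rigid_fit(cur, dst[d2.argmin(1)])
        cur = cur @ R.T + t
    return rigid_fit(src, cur)
```

    Applying `icp` between the static-scan segment and each dynamic frame yields the per-frame pose from which the joint rotations are derived.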

  12. Registration of clinical volumes to beams-eye-view images for real-time tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryant, Jonathan H.; Rottmann, Joerg; Lewis, John H.

    2014-12-15

    Purpose: The authors combine the registration of 2D beam's eye view (BEV) images and 3D planning computed tomography (CT) images, with relative, markerless tumor tracking to provide automatic absolute tracking of physician-defined volumes such as the gross tumor volume (GTV). Methods: During treatment of lung SBRT cases, BEV images were continuously acquired with an electronic portal imaging device (EPID) operating in cine mode. For absolute registration of physician-defined volumes, an intensity-based 2D/3D registration to the planning CT was performed using the end-of-exhale (EoE) phase of the four-dimensional computed tomography (4DCT). The volume was converted from Hounsfield units into electron density by a calibration curve, and digitally reconstructed radiographs (DRRs) were generated for each beam geometry. Using normalized cross correlation between the DRR and an EoE BEV image, the best in-plane rigid transformation was found. The transformation was applied to physician-defined contours in the planning CT, mapping them into the EPID image domain. A robust multiregion method of relative markerless lung tumor tracking quantified deviations from the EoE position. Results: The success of 2D/3D registration was demonstrated at the EoE breathing phase. By registering at this phase and then employing a separate technique for relative tracking, the authors are able to successfully track target volumes in the BEV images throughout the entire treatment delivery. Conclusions: Through the combination of EPID/4DCT registration and relative tracking, a necessary step toward the clinical implementation of BEV tracking has been completed. The knowledge of tumor volumes relative to the treatment field is important for future applications like real-time motion management, adaptive radiotherapy, and delivered dose calculations.
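
    The registration score here is plain normalized cross correlation between the DRR and the BEV image. The sketch below searches integer translations only, whereas the paper optimizes over in-plane rigid transformations; it illustrates the scoring, not the authors' implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_shift(image, template):
    """Exhaustive search for the integer (row, col) offset of the
    template inside the image that maximizes NCC."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            s = ncc(image[r:r+th, c:c+tw], template)
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc, best
```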

  13. STEREO Space Weather and the Space Weather Beacon

    NASA Technical Reports Server (NTRS)

    Biesecker, D. A.; Webb, D. F.; St. Cyr, O. C.

    2007-01-01

    The Solar Terrestrial Relations Observatory (STEREO) is first and foremost a solar and interplanetary research mission, with one of the natural applications being in the area of space weather. The obvious potential for space weather applications is so great that NOAA has worked to incorporate the real-time data into their forecast center as much as possible. A subset of the STEREO data will be continuously downlinked in a real-time broadcast mode, called the Space Weather Beacon. Within the research community there has been considerable interest in conducting space weather related research with STEREO. Some of this research is geared towards making an immediate impact while other work is still very much in the research domain. There are many areas where STEREO might contribute and we cannot predict where all the successes will come. Here we discuss how STEREO will contribute to space weather and many of the specific research projects proposed to address STEREO space weather issues. We also discuss some specific uses of the STEREO data in the NOAA Space Environment Center.

  14. A Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    PubMed

    Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc

    2015-10-01

    Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.
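
    One ingredient of the scheme, the spatial allocation of instruments, can be illustrated with the classic mid/side trick: in typical stereo recordings vocals and bass are center-panned and thus live in the mid (L+R) signal, so boosting mid relative to side emphasizes them. This sketch omits the harmonic/percussive separation that the actual scheme combines with it.

```python
import numpy as np

def emphasize_center(left, right, gain=2.0):
    """Boost the mid (L+R) component relative to the side (L-R)
    component of a stereo signal, emphasizing center-panned sources
    such as vocals and bass. `gain` is the user-adjustable preference."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return gain * mid + side, gain * mid - side
```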

  15. Recovering stereo vision by squashing virtual bugs in a virtual reality environment.

    PubMed

    Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M

    2016-06-19

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity, the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task, a 'bug squashing' game, in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Author(s).

  17. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gilden, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  18. Design of an off-axis visual display based on a free-form projection screen to realize stereo vision

    NASA Astrophysics Data System (ADS)

    Zhao, Yuanming; Cui, Qingfeng; Piao, Mingxu; Zhao, Lidong

    2017-10-01

    A free-form projection screen is designed for an off-axis visual display, which shows great potential in applications such as flight training by providing both accommodation and convergence cues for pilots. A point-cloud-based method is proposed for the design of the free-form surface, and the design of the point cloud is controlled by a program written in the macro language. In the visual display based on the free-form projection screen, when the error of the screen along the Z-axis is 1 mm, the error in visual distance at each field is less than 1%. The resolution of the design over the full field is better than 1′, which meets the resolution requirement of the human eye.

  19. A novel approach for epipolar resampling of cross-track linear pushbroom imagery using orbital parameters model

    NASA Astrophysics Data System (ADS)

    Jannati, Mojtaba; Valadan Zoej, Mohammad Javad; Mokhtarzade, Mehdi

    2018-03-01

    This paper presents a novel approach to epipolar resampling of cross-track linear pushbroom imagery using the orbital parameters model (OPM). The backbone of the proposed method relies on modifying the attitude parameters of linear array stereo imagery so as to parallelize the approximate conjugate epipolar lines (ACELs) with the instantaneous baseline (IBL) of the conjugate image points (CIPs). Afterward, a complementary rotation is applied in order to parallelize all the ACELs throughout the stereo imagery. The new estimated attitude parameters are evaluated based on the direction of the IBL and the ACELs. Due to the spatial and temporal variability of the IBL (changes in the column and row numbers of the CIPs, respectively) and the nonparallel nature of epipolar lines in stereo linear images, polynomials in both the column and row numbers of the CIPs are used to model the new attitude parameters. As the instantaneous positions of the sensors remain fixed, a digital elevation model (DEM) of the area of interest is not required in the resampling process. According to experimental results obtained from two pairs of SPOT and RapidEye stereo imagery with high elevation relief, the average absolute values of the remaining vertical parallaxes of the CIPs in the normalized images were 0.19 and 0.28 pixels, respectively, which confirms the high accuracy and applicability of the proposed method.

  20. Piecewise-Planar StereoScan: Sequential Structure and Motion using Plane Primitives.

    PubMed

    Raposo, Carolina; Antunes, Michel; P Barreto, Joao

    2017-08-09

    The article describes a pipeline that receives as input a sequence of stereo images, and outputs the camera motion and a Piecewise-Planar Reconstruction (PPR) of the scene. The pipeline, named Piecewise-Planar StereoScan (PPSS), works as follows: the planes in the scene are detected for each stereo view using semi-dense depth estimation; the relative pose is computed by a new closed-form minimal algorithm that only uses point correspondences whenever plane detections do not fully constrain the motion; the camera motion and the PPR are jointly refined by alternating between discrete optimization and continuous bundle adjustment; and, finally, the detected 3D planes are segmented in images using a new framework that handles low texture and visibility issues. PPSS is extensively validated in indoor and outdoor datasets, and benchmarked against two popular point-based SfM pipelines. The experiments confirm that plane-based visual odometry is resilient to situations of small image overlap, poor texture, specularity, and perceptual aliasing where the fast LIBVISO2 pipeline fails. The comparison against VisualSfM+CMVS/PMVS shows that, for a similar computational complexity, PPSS is more accurate and provides much more compelling and visually pleasant 3D models. These results strongly suggest that plane primitives are an advantageous alternative to point correspondences for applications of SfM and 3D reconstruction in man-made environments.

  1. Sampling artifacts in perspective and stereo displays

    NASA Astrophysics Data System (ADS)

    Pfautz, Jonathan D.

    2001-06-01

    The addition of stereo cues to perspective displays is generally expected to improve the perception of depth. However, the display's pixel array samples both perspective and stereo depth cues, introducing inaccuracies and inconsistencies into the representation of an object's depth. The position, size and disparity of an object will be inaccurately presented and size and disparity will be inconsistently presented across depth. These inconsistencies can cause the left and right edges of an object to appear at different stereo depths. This paper describes how these inconsistencies result in conflicts between stereo and perspective depth information. A relative depth judgement task was used to explore these conflicts. Subjects viewed two objects and reported which appeared closer. Three conflicts resulting from inconsistencies caused by sampling were examined: (1) Perspective size and location versus stereo disparity. (2) Perspective size versus perspective location and stereo disparity. (3) Left and right edge disparity versus perspective size and location. In the first two cases, subjects achieved near-perfect accuracy when perspective and disparity cues were complementary. When size and disparity were inconsistent and thus in conflict, stereo dominated perspective. Inconsistency between the disparities of the horizontal edges of an object confused the subjects, even when complementary perspective and stereo information was provided. Since stereo was the dominant cue and was ambiguous across the object, this led to significantly reduced accuracy. Edge inconsistencies also led to more complaints about visual fatigue and discomfort.
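    The size of these sampling inconsistencies can be illustrated with a simple pinhole stereo model (not taken from the paper; the focal length, baseline, and depths below are purely illustrative assumptions): disparity varies inversely with depth, so rounding it to whole display pixels shifts the displayed depth by an amount that grows with distance.

```python
# Sketch of disparity quantization in a pixel array. The pinhole model and
# the parameters (f = 1000 px focal length, b = 0.065 m baseline) are
# illustrative assumptions, not values from the study.

def disparity(z, f=1000.0, b=0.065):
    """Ideal (continuous) disparity in pixels for a point at depth z metres."""
    return f * b / z

def depth_from_disparity(d, f=1000.0, b=0.065):
    """Depth implied by a (possibly quantized) disparity."""
    return f * b / d

for z in (1.0, 2.0, 4.0, 8.0):
    d = disparity(z)
    d_px = round(d)                      # the display can only render whole pixels
    z_shown = depth_from_disparity(d_px)
    print(f"true depth {z:4.1f} m -> disparity {d:6.2f} px "
          f"-> quantized {d_px:3d} px -> displayed depth {z_shown:6.3f} m")
```

    Because the rounding error is a fixed fraction of a pixel while disparity shrinks with depth, the depth distortion grows for distant objects, and the two edges of a wide object can land on different quantized disparities, producing exactly the edge-depth conflicts the study examines.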

  2. Dsm Based Orientation of Large Stereo Satellite Image Blocks

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.; Reinartz, P.

    2012-07-01

    High resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCP are automatically derived from lower resolution reference datasets (Landsat ETM+ Geocover and SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using an RPC correction based on DSM alignment, resulting in improved geolocation of both DSM and ortho images. A scene-based method and a bundle block adjustment based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 CARTOSAT-1 stereo pairs are available. Both methods are tested against independent ground truth; checks against this ground truth indicate a lateral error of 10 meters.

  3. Quantitative Evaluation of Stereo Visual Odometry for Autonomous Vessel Localisation in Inland Waterway Sensing Applications

    PubMed Central

    Kriechbaumer, Thomas; Blackburn, Kim; Breckon, Toby P.; Hamilton, Oliver; Rivas Casado, Monica

    2015-01-01

    Autonomous survey vessels can increase the efficiency and availability of wide-area river environment surveying as a tool for environment protection and conservation. A key challenge is the accurate localisation of the vessel, where bank-side vegetation or urban settlement preclude the conventional use of line-of-sight global navigation satellite systems (GNSS). In this paper, we evaluate unaided visual odometry, via an on-board stereo camera rig attached to the survey vessel, as a novel, low-cost localisation strategy. Feature-based and appearance-based visual odometry algorithms are implemented on a six degrees of freedom platform operating under guided motion, but stochastic variation in yaw, pitch and roll. Evaluation is based on a 663 m-long trajectory (>15,000 image frames) and statistical error analysis against ground truth position from a target tracking tachymeter integrating electronic distance and angular measurements. The position error of the feature-based technique (mean of ±0.067 m) is three times smaller than that of the appearance-based algorithm. From multi-variable statistical regression, we are able to attribute this error to the depth of tracked features from the camera in the scene and variations in platform yaw. Our findings inform effective strategies to enhance stereo visual localisation for the specific application of river monitoring. PMID:26694411

  4. Human machine interface by using stereo-based depth extraction

    NASA Astrophysics Data System (ADS)

    Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan

    2014-03-01

    The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibilities of a convincing 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight (ToF) camera, can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.
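    The abstract does not spell out the energy-minimization upscaler itself; a common edge-guided alternative with the same intent, joint bilateral upsampling, can serve as a hedged sketch of how video texture steers depth upscaling (the kernel sizes and sigmas below are illustrative assumptions, not the authors' settings):

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, guide, factor, sigma_s=1.0, sigma_r=10.0):
    """Upscale a low-res depth map using edges of a high-res guide image.
    Not the authors' energy-minimization method -- a common edge-aware
    stand-in (joint bilateral upsampling) with illustrative sigmas."""
    H, W = guide.shape
    h, w = depth_lo.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / factor, x / factor     # position in the low-res grid
            y0, x0 = int(yl), int(xl)
            wsum, vsum = 0.0, 0.0
            for dy in (0, 1):                   # 2x2 low-res neighbourhood
                for dx in (0, 1):
                    yy, xx = min(y0 + dy, h - 1), min(x0 + dx, w - 1)
                    # spatial weight: distance in the low-res grid
                    ws = np.exp(-((yl - yy) ** 2 + (xl - xx) ** 2) / (2 * sigma_s ** 2))
                    # range weight: guide-image intensity difference
                    g = guide[min(yy * factor, H - 1), min(xx * factor, W - 1)]
                    wr = np.exp(-(float(guide[y, x]) - float(g)) ** 2 / (2 * sigma_r ** 2))
                    wsum += ws * wr
                    vsum += ws * wr * depth_lo[yy, xx]
            out[y, x] = vsum / wsum
    return out
```

    The temporal extension described in the article could be mimicked by additionally blending each upscaled frame with the previous frame's result, which damps the depth flicker the authors target.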

  5. An Evaluation of the Effectiveness of Stereo Slides in Teaching Geomorphology.

    ERIC Educational Resources Information Center

    Giardino, John R.; Thornhill, Ashton G.

    1984-01-01

    Provides information about producing stereo slides and their use in the classroom. Describes an evaluation of the teaching effectiveness of stereo slides using two groups of 30 randomly selected students from introductory geomorphology. Results from a pretest/posttest measure show that stereo slides significantly improved understanding. (JM)

  6. Performance Evaluation of Dsm Extraction from ZY-3 Three-Line Arrays Imagery

    NASA Astrophysics Data System (ADS)

    Xue, Y.; Xie, W.; Du, Q.; Sang, H.

    2015-08-01

    ZiYuan-3 (ZY-3), launched on January 9, 2012, is China's first civilian high-resolution stereo mapping satellite. ZY-3 is equipped with three-line scanners (nadir, backward and forward) for stereo mapping; the resolutions of the panchromatic (PAN) stereo mapping images are 2.1 m at nadir and 3.6 m at tilt angles of ±22° forward and backward, respectively. The stereo base-to-height ratio is 0.85-0.95. Compared with stereo mapping from two-view images, the three-line array images of ZY-3 can be used for DSM generation taking advantage of one more view than conventional photogrammetric methods, which enriches the information for image matching and enhances the accuracy of the generated DSM. Preliminary results on the positioning accuracy of ZY-3 images have been reported, but before massive mapping applications utilize ZY-3 images for DSM generation, evaluating the performance of DSM extraction from ZY-3 three-line array imagery is of significant value for routine mapping applications. The goal of this research is to clarify the mapping performance of the ZY-3 three-line array scanners through an accuracy evaluation of DSM generation. DSM products generated in different topographic areas from three-view images are compared with those generated from the different two-view combinations of ZY-3 images. Besides the comparison across topographic study areas, the accuracy deviation of DSM products with different grid sizes (25 m, 10 m and 5 m) is delineated in order to clarify the impact of grid size on accuracy evaluation.
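    The influence of the base-to-height ratio on DSM accuracy follows a standard photogrammetric rule of thumb (not a formula stated in this abstract): expected height error is roughly the image-matching accuracy, in ground units, scaled by H/B. A sketch using the abstract's ZY-3 figures and an assumed 0.5-pixel matching accuracy:

```python
def height_error(gsd_m, base_height_ratio, matching_acc_px=0.5):
    """Rule-of-thumb vertical accuracy of a stereo-derived DSM:
    sigma_h ~= (H/B) * matching accuracy * GSD.
    The 0.5 px matching accuracy is an illustrative assumption,
    not a result from the ZY-3 evaluation."""
    return (1.0 / base_height_ratio) * matching_acc_px * gsd_m

# ZY-3 nadir GSD of 2.1 m and B/H between 0.85 and 0.95, from the abstract
for bh in (0.85, 0.95):
    print(f"B/H = {bh}: expected sigma_h ~ {height_error(2.1, bh):.2f} m")
```

    The larger B/H ratio yields the smaller expected height error, which is why the 0.85-0.95 range quoted for ZY-3 is favourable for stereo mapping.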

  7. An automated, open-source (NASA Ames Stereo Pipeline) workflow for mass production of high-resolution DEMs from commercial stereo satellite imagery: Application to mountain glaciers in the contiguous US

    NASA Astrophysics Data System (ADS)

    Shean, D. E.; Arendt, A. A.; Whorton, E.; Riedel, J. L.; O'Neel, S.; Fountain, A. G.; Joughin, I. R.

    2016-12-01

    We adapted the open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline an automated processing workflow for 0.5 m GSD DigitalGlobe WorldView-1/2/3 and GeoEye-1 along-track and cross-track stereo image data. Output DEM products are posted at 2, 8, and 32 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of 0.1-0.5 m for overlapping, co-registered DEMs (n=14,17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We have leveraged these resources to produce dense time series and regional mosaics for the Earth's ice sheets. We are now processing and analyzing all available 2008-2016 commercial stereo DEMs over glaciers and perennial snowfields in the contiguous US. We are using these records to study long-term, interannual, and seasonal volume change and glacier mass balance. This analysis will provide a new assessment of regional climate change, and will offer basin-scale analyses of snowpack evolution and snow/ice melt runoff for water resource applications.
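    ASP's co-registration tool (pc_align) performs a full 3D iterative closest point alignment; as a minimal illustration of the underlying idea, the sketch below removes only the vertical bias between two gridded DEMs by subtracting the median elevation difference over (optionally masked) stable terrain, with synthetic arrays standing in for real DEMs:

```python
import numpy as np

def coregister_vertical(dem, ref, stable_mask=None):
    """Remove the vertical bias between a DEM and a reference DEM by
    subtracting the median elevation difference over stable terrain.
    A minimal stand-in for one component of DEM co-registration;
    ASP's pc_align performs a full 3D ICP, not just this."""
    diff = dem - ref
    if stable_mask is not None:
        diff = diff[stable_mask]
    bias = np.nanmedian(diff)           # median is robust to glacier change outliers
    return dem - bias, bias

# synthetic test: a reference surface, and a "DEM" offset by 3.2 m plus noise
ref = np.random.default_rng(0).normal(1000.0, 50.0, (100, 100))
dem = ref + 3.2 + np.random.default_rng(1).normal(0.0, 0.3, ref.shape)
aligned, bias = coregister_vertical(dem, ref)
```

    Using the median rather than the mean keeps the estimate robust when part of the scene (e.g. a glacier that actually changed elevation) violates the stable-terrain assumption.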

  8. Genome-wide Screening Identifies Phosphotransferase System Permease BepA to Be Involved in Enterococcus faecium Endocarditis and Biofilm Formation.

    PubMed

    Paganelli, Fernanda L; Huebner, Johannes; Singh, Kavindra V; Zhang, Xinglin; van Schaik, Willem; Wobser, Dominique; Braat, Johanna C; Murray, Barbara E; Bonten, Marc J M; Willems, Rob J L; Leavis, Helen L

    2016-07-15

    Enterococcus faecium is a common cause of nosocomial infections, of which infective endocarditis is associated with substantial mortality. In this study, we used a microarray-based transposon mapping (M-TraM) approach to evaluate a rat endocarditis model and identified a gene, originally annotated as "fruA" and renamed "bepA," putatively encoding a carbohydrate phosphotransferase system (PTS) permease (biofilm and endocarditis-associated permease A [BepA]), as important in infective endocarditis. This gene is highly enriched in E. faecium clinical isolates and absent in commensal isolates that are not associated with infection. Confirmation of the phenotype was established in a competition experiment of wild-type and a markerless bepA mutant in a rat endocarditis model. In addition, deletion of bepA impaired biofilm formation in vitro in the presence of 100% human serum and metabolism of β-methyl-D-glucoside. β-glucoside metabolism has been linked to the metabolism of glycosaminoglycans that are exposed on injured heart valves, where bacteria attach and form vegetations. Therefore, we propose that the PTS permease BepA is directly implicated in E. faecium pathogenesis. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  9. Automatic detection and recognition of traffic signs in stereo images based on features and probabilistic neural networks

    NASA Astrophysics Data System (ADS)

    Sheng, Yehua; Zhang, Ka; Ye, Chun; Liang, Cheng; Li, Jian

    2008-04-01

    Considering the problem of automatic traffic sign detection and recognition in stereo images captured under motion conditions, a new algorithm for traffic sign detection and recognition based on features and probabilistic neural networks (PNN) is proposed in this paper. Firstly, global statistical color features of the left image are computed based on statistics theory. Then, for red, yellow and blue traffic signs, the left image is segmented into three binary images by a self-adaptive color segmentation method. Secondly, gray-value projection and shape analysis are used to confirm traffic sign regions in the left image, and stereo image matching is used to locate the corresponding traffic signs in the right image. Thirdly, self-adaptive image segmentation is used to extract the binary inner core shapes of the detected traffic signs, and one-dimensional feature vectors of the inner core shapes are computed by the central projection transformation. Fourthly, these vectors are input to the trained probabilistic neural networks for traffic sign recognition. Lastly, recognition results in the left image are compared with those in the right image; if the results in the stereo images are identical, they are confirmed as final recognition results. The new algorithm was applied to 220 real images of natural scenes taken by a vehicle-borne mobile photogrammetry system in Nanjing at different times. Experimental results show a detection and recognition rate of over 92%. The algorithm is thus not only simple, but also reliable and fast for real traffic sign detection and recognition; furthermore, it obtains geometrical information of the traffic signs while recognizing their types.
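    The recognition stage uses a probabilistic neural network, i.e. a Parzen-window classifier that places one Gaussian kernel on each training sample and picks the class with the highest summed response. A generic sketch (the toy one-dimensional feature vectors stand in for the central-projection signatures, and the smoothing parameter sigma is an assumption; this is not the authors' trained network):

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Probabilistic neural network: one Gaussian kernel per training
    sample; the class with the highest mean kernel response wins."""
    scores = {}
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)           # squared distances to class samples
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# toy feature vectors standing in for central-projection shape signatures
X = np.array([[0.1], [0.2], [0.9], [1.0]])
y = np.array([0, 0, 1, 1])
```

    A PNN needs no iterative training beyond storing the samples, which fits the paper's emphasis on simple, fast recognition.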

  10. A customized vision system for tracking humans wearing reflective safety clothing from industrial vehicles and machinery.

    PubMed

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J

    2014-09-26

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions.

  11. A Customized Vision System for Tracking Humans Wearing Reflective Safety Clothing from Industrial Vehicles and Machinery

    PubMed Central

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  12. Mapping and localization for extraterrestrial robotic explorations

    NASA Astrophysics Data System (ADS)

    Xu, Fengliang

    In the exploration of an extraterrestrial environment such as Mars, orbital data, such as high-resolution Mars Orbital Camera-Narrow Angle (MOC-NA) imagery, Mars Orbital Laser Altimeter (MOLA) laser ranging data, and Thermal Emission Imaging System (THEMIS) multi-spectral imagery, play more and more important roles. However, these remote sensing techniques can never replace the role of landers and rovers, which can provide a close-up and in-situ view. Similarly, orbital mapping cannot compete with ground-level close-range mapping in resolution, precision, and speed. This dissertation addresses two tasks related to robotic extraterrestrial exploration: mapping and rover localization. Image registration is also discussed as an important aspect of both. Techniques from computer vision and photogrammetry are applied for automation and precision. Image registration is classified into three sub-categories: intra-stereo, inter-stereo, and cross-site, according to the relationship between stereo images. In intra-stereo registration, which is the most fundamental sub-category, interest point-based registration and verification by parallax continuity in the principal direction are proposed. Two other techniques, inter-scanline search with constrained dynamic programming for far-range matching and Markov Random Field (MRF) based registration for large terrain variation, are explored as possible improvements. Mapping using rover ground images mainly involves the generation of a Digital Terrain Model (DTM) and an ortho-rectified map (orthomap). The first task is to derive the spatial distribution statistics from the first panorama and model the DTM with a dual polynomial model. This model is used for interpolation of the DTM, using Kriging in the close range and a Triangular Irregular Network (TIN) in the far range. To generate a uniformly illuminated orthomap from the DTM, a least-squares-based automatic intensity balancing method is proposed.
Finally, a seamless orthomap is constructed by a split-and-merge technique: the mapped area is subdivided into small regions of image overlap, each small map piece is processed, and all of the pieces are merged together to form a seamless map. Rover localization has three stages, all of which use a least-squares adjustment procedure: (1) initial localization, accomplished by adjustment over features common to rover images and orbital images; (2) adjustment of image pointing angles at a single site through inter- and intra-stereo tie points; and (3) adjustment of the rover traverse through manual cross-site tie points. The first stage is based on adjustment of the observation angles of features; the second and third stages are based on bundle adjustment. In the third stage, an incremental adjustment method is proposed. Automation in rover localization includes automatic intra-/inter-stereo tie point selection, computer-assisted cross-site tie point selection, and automatic verification of accuracy. (Abstract shortened by UMI.)

  13. Determination of Geometric and Kinematical Parameters of Coronal Mass Ejections Using STEREO Data

    NASA Astrophysics Data System (ADS)

    Fainshtein, V. G.; Tsivileva, D. M.; Kashapova, L. K.

    2010-03-01

    We present a new, relatively simple and fast method to determine the true geometric and kinematical parameters of CMEs from simultaneous STEREO A and B observations. These parameters are the three-dimensional direction of CME propagation, the velocity and acceleration of the CME front, the CME angular sizes, and the front position as a function of time. The method is based on the assumption that the CME shape may be described by a modification of the so-called ice-cream cone model. The method has been tested on several CMEs.

  14. The Relationship Between the Expansion Speed and Radial Speed of CMEs Confirmed Using Quadrature Observations of the 2011 February 15 CME

    NASA Astrophysics Data System (ADS)

    Gopalswamy, N.; Makela, P.; Yashiro, S.; Davila, J. M.

    2012-08-01

    It is difficult to measure the true speed of Earth-directed CMEs from a coronagraph along the Sun-Earth line because of the occulting disk. However, the expansion speed (the speed with which the CME appears to spread in the sky plane) can be measured by such a coronagraph. In order to convert the expansion speed to the radial speed (which is important for space weather applications) one can use an empirical relationship between the two that assumes an average width for all CMEs. If we have the width information from quadrature observations, we can confirm the relationship between expansion and radial speeds derived by Gopalswamy et al. (2009a). The STEREO spacecraft were in quadrature with SOHO (STEREO-A ahead of Earth by 87° and STEREO-B 94° behind Earth) on 2011 February 15, when a fast Earth-directed CME occurred. The CME was observed as a halo by the Large-Angle and Spectrometric Coronagraph (LASCO) on board SOHO. The sky-plane speed was measured by SOHO/LASCO as the expansion speed, while the radial speed was measured by STEREO-A and STEREO-B. In addition, STEREO-A and STEREO-B images measured the width of the CME, which is unknown from the Earth view. From the SOHO and STEREO measurements, we confirm the relationship between the expansion speed (Vexp) and radial speed (Vrad) derived previously from geometrical considerations (Gopalswamy et al. 2009a): Vrad = 1/2 (1 + cot w) Vexp, where w is the half width of the CME. From STEREO-B images of the CME, we found that the CME had a full width of 76°, so w = 38°. This gives the relation as Vrad = 1.14 Vexp. From LASCO observations, we measured Vexp = 897 km/s, so we get the radial speed as 1023 km/s. Direct measurement of the radial speed yields 945 km/s (STEREO-A) and 1058 km/s (STEREO-B). These numbers differ by only 7.6% and 3.4% (for STEREO-A and STEREO-B, respectively) from the computed value.
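    The arithmetic quoted in the abstract can be checked directly from the stated relation Vrad = 1/2 (1 + cot w) Vexp:

```python
import math

def radial_speed(v_exp, half_width_deg):
    """Vrad = 0.5 * (1 + cot(w)) * Vexp, with w the CME half width
    (the Gopalswamy et al. 2009a relation quoted in the abstract)."""
    w = math.radians(half_width_deg)
    return 0.5 * (1.0 + 1.0 / math.tan(w)) * v_exp

factor = radial_speed(1.0, 38.0)     # conversion factor for w = 38 deg, ~1.14
v_rad = radial_speed(897.0, 38.0)    # radial speed for Vexp = 897 km/s, ~1023 km/s
```

    This reproduces the 1.14 conversion factor and the 1023 km/s computed radial speed, against which the STEREO-A (945 km/s) and STEREO-B (1058 km/s) direct measurements are compared.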

  15. Graph-based surface reconstruction from stereo pairs using image segmentation

    NASA Astrophysics Data System (ADS)

    Bleyer, Michael; Gelautz, Margrit

    2005-01-01

    This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.
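    The planar model at the heart of the method assigns each colour segment a disparity plane d = ax + by + c. A minimal least-squares fit of that model is sketched below (the paper's robust estimation, layer clustering, and graph-cut assignment are not reproduced here):

```python
import numpy as np

def fit_disparity_plane(xs, ys, ds):
    """Least-squares fit of d = a*x + b*y + c over the pixels of one
    colour segment, as in segment-based stereo. Only the planar model
    itself; the paper makes the fit robust and refines it globally."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, ds, rcond=None)
    return coeffs  # (a, b, c)

# synthetic segment lying exactly on the plane d = 0.1*x - 0.05*y + 12
xs = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.0])
ys = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
ds = 0.1 * xs - 0.05 * ys + 12.0
a, b, c = fit_disparity_plane(xs, ys, ds)
```

    Fitting one plane per segment is what lets the method propagate disparity into untextured and occluded parts of a segment, where per-pixel matching alone would fail.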

  16. Deep convolutional neural network processing of aerial stereo imagery to monitor vulnerable zones near power lines

    NASA Astrophysics Data System (ADS)

    Qayyum, Abdul; Saad, Naufal M.; Kamel, Nidal; Malik, Aamir Saeed

    2018-01-01

    The monitoring of vegetation near high-voltage transmission power lines and poles is tedious. Blackouts present a huge challenge to power distribution companies and often occur due to tree growth in hilly and rural areas. There are numerous methods of monitoring hazardous overgrowth that are expensive and time-consuming. Accurate estimation of tree and vegetation heights near power poles can prevent the disruption of power transmission in vulnerable zones. This paper presents a cost-effective approach based on a convolutional neural network (CNN) algorithm to compute the height (depth maps) of objects proximal to power poles and transmission lines. The proposed CNN extracts and classifies features by employing convolutional pooling inputs to fully connected data layers that capture prominent features from stereo image patches. Unmanned aerial vehicle or satellite stereo image datasets can thus provide a feasible and cost-effective approach that identifies threat levels based on height and distance estimations of hazardous vegetation and other objects. Results were compared with extant disparity map estimation techniques, such as graph cut, dynamic programming, belief propagation, and area-based methods. The proposed method achieved an accuracy rate of 90%.

  17. Stereoscopically Observing Manipulative Actions

    PubMed Central

    Ferri, S.; Pauwels, K.; Rizzolatti, G.; Orban, G. A.

    2016-01-01

    The purpose of this study was to investigate the contribution of stereopsis to the processing of observed manipulative actions. To this end, we first combined the factors “stimulus type” (action, static control, and dynamic control), “stereopsis” (present, absent) and “viewpoint” (frontal, lateral) into a single design. Four sites in premotor, retro-insular (2) and parietal cortex operated specifically when actions were viewed stereoscopically and frontally. A second experiment clarified that the stereo-action-specific regions were driven by actions moving out of the frontoparallel plane, an effect amplified by frontal viewing in premotor cortex. Analysis of single voxels and their discriminatory power showed that the representation of action in the stereo-action-specific areas was more accurate when stereopsis was active. Further analyses showed that the 4 stereo-action-specific sites form a closed network converging onto the premotor node, which connects to parietal and occipitotemporal regions outside the network. Several of the specific sites are known to process vestibular signals, suggesting that the network combines observed actions in peripersonal space with gravitational signals. These findings have wider implications for the function of premotor cortex and the role of stereopsis in human behavior. PMID:27252350

  18. KSC-06pd2391

    NASA Image and Video Library

    2006-10-25

    KENNEDY SPACE CENTER, FLA. - After the mobile service tower has rolled away, the Delta II rocket with the STEREO spacecraft at top stands alone next to the launch gantry. Liftoff is scheduled in a window between 8:38 and 8:53 p.m. on Oct. 25. STEREO (Solar Terrestrial Relations Observatory) is a two-year mission using two nearly identical observatories, one ahead of Earth in its orbit and the other trailing behind. The duo will provide 3-D measurements of the sun and its flow of energy, enabling scientists to study the nature of coronal mass ejections and why they happen. The ejections are a major source of the magnetic disruptions on Earth and are a key component of space weather. The disruptions can greatly affect satellite operations, communications, power systems, humans in space and global climate. Designed and built by the Johns Hopkins University Applied Physics Laboratory (APL), the STEREO mission is being managed by NASA Goddard Space Flight Center. APL will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Kim Shiflett

  19. Stereoscopically Observing Manipulative Actions.

    PubMed

    Ferri, S; Pauwels, K; Rizzolatti, G; Orban, G A

    2016-08-01

    The purpose of this study was to investigate the contribution of stereopsis to the processing of observed manipulative actions. To this end, we first combined the factors "stimulus type" (action, static control, and dynamic control), "stereopsis" (present, absent) and "viewpoint" (frontal, lateral) into a single design. Four sites in premotor, retro-insular (2) and parietal cortex operated specifically when actions were viewed stereoscopically and frontally. A second experiment clarified that the stereo-action-specific regions were driven by actions moving out of the frontoparallel plane, an effect amplified by frontal viewing in premotor cortex. Analysis of single voxels and their discriminatory power showed that the representation of action in the stereo-action-specific areas was more accurate when stereopsis was active. Further analyses showed that the 4 stereo-action-specific sites form a closed network converging onto the premotor node, which connects to parietal and occipitotemporal regions outside the network. Several of the specific sites are known to process vestibular signals, suggesting that the network combines observed actions in peripersonal space with gravitational signals. These findings have wider implications for the function of premotor cortex and the role of stereopsis in human behavior. © The Author 2016. Published by Oxford University Press.

  20. Quantitative analysis of digital outcrop data obtained from stereo-imagery using an emulator for the PanCam camera system for the ExoMars 2020 rover

    NASA Astrophysics Data System (ADS)

    Barnes, Robert; Gupta, Sanjeev; Gunn, Matt; Paar, Gerhard; Balme, Matt; Huber, Ben; Bauer, Arnold; Furya, Komyo; Caballo-Perucha, Maria del Pilar; Traxler, Chris; Hesina, Gerd; Ortner, Thomas; Banham, Steven; Harris, Jennifer; Muller, Jan-Peter; Tao, Yu

    2017-04-01

A key focus of planetary rover missions is to use panoramic camera systems to image outcrops along rover traverses, in order to characterise their geology in search of ancient life. These data can be processed to create 3D point clouds of rock outcrops to be quantitatively analysed. The Mars Utah Rover Field Investigation (MURFI 2016) is a Mars rover field analogue mission run by the UK Space Agency (UKSA) in collaboration with the Canadian Space Agency (CSA). It took place between 22nd October and 13th November 2016 and consisted of a science team based in Harwell, UK, and a field team including an instrumented rover platform at the field site near Hanksville (Utah, USA). The Aberystwyth University PanCam Emulator 3 (AUPE3) camera system was used to collect stereo panoramas of the terrain the rover encountered during the field trials. Stereo-imagery processed in PRoViP is rendered as Ordered Point Clouds (OPCs) in PRo3D, enabling the user to zoom, rotate and translate the 3D outcrop model. Interpretations can be digitised directly onto the 3D surface, and simple measurements can be taken of the dimensions of the outcrop and sedimentary features, including grain size. Dip and strike of bedding planes, stratigraphic and sedimentological boundaries and fractures are calculated within PRo3D from mapped bedding contacts and fracture traces. Rover-derived imagery can also be merged with UAV and orbital datasets to build semi-regional multi-resolution 3D models of the area of operations for immersive analysis and contextual understanding. In simulation, AUPE3 was mounted onto the rover mast, collecting 16 stereo panoramas over 9 'sols'. Five out-of-simulation datasets were collected in the Hanksville-Burpee Quarry. Stereo panoramas were processed using an automated pipeline and data transfer through an FTP server. PRo3D has been used for visualisation and analysis of this stereo data. 
Features of interest in the area could be annotated, and their distances to the rover position can be measured to aid prioritisation of science targeting. Where grains or rocks are present and visible, their dimensions can be measured. Interpretation of the sedimentological features of the outcrops has also been carried out. OPCs created from stereo imagery collected in the Hanksville-Burpee Quarry showed a general coarsening-up succession, with a red, well-layered mudstone overlain by stacked medium-coarse to pebbly sandstone layers of irregular thickness. Cross beds/laminations and lenses of finer sandstone were common. These features provide valuable information on the depositional environment. Development of PRo3D in preparation for application to the ExoMars 2020 and NASA Mars 2020 missions will be centred on validation of the data and measurements. Collection of in-situ field data by a human geologist allows for direct comparison of viewer-derived measurements with those taken in the field. The research leading to these results has received funding from the UK Space Agency Aurora programme and the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 PRoViDE, ESA PRODEX Contracts 4000105568 "ExoMars PanCam 3D Vision" and 4000116566 "Mars 2020 Mastcam-Z 3D Vision".
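The dip-and-strike computation performed from mapped bedding contacts can be illustrated with a minimal sketch: fit a least-squares plane to digitised 3-D contact points and convert its normal to dip and strike. This is an illustration under stated assumptions (east-north-up axes, right-hand-rule strike), not PRo3D's actual implementation:

```python
import numpy as np

def dip_and_strike(points):
    """Fit a plane to 3-D contact points (N x 3, x=east, y=north, z=up)
    and return (dip, strike) in degrees via a least-squares plane (SVD)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The plane normal is the singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    if n[2] < 0:                       # make the normal point upward
        n = -n
    # Dip: angle between the plane and the horizontal.
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    # Dip direction: azimuth of the normal's horizontal component (east, north).
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    # Right-hand-rule strike is 90 degrees anticlockwise of the dip direction.
    strike = (dip_dir - 90.0) % 360.0
    return dip, strike

# Synthetic bed dipping 30 degrees toward the east (dip direction 090, strike 000):
x, y = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
z = -np.tan(np.radians(30.0)) * x      # elevation drops toward +x (east)
pts = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
dip, strike = dip_and_strike(pts)
```

In PRo3D the contact points come from traces digitised on the textured OPC surface; here they are synthetic.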

  1. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated with the images of the stereo pairs and the second one between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016
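The mutual-consistency idea, keeping the largest set of matches whose pairwise constraints agree, which the paper casts as a maximum-weighted clique search, can be sketched as follows. The brute-force clique search and the distance-only compatibility test are simplifications for illustration; the paper uses a fast dedicated solver and richer constraints:

```python
import itertools
import numpy as np

def consistent_matches(pts_a, pts_b, candidate_pairs, tol=0.1):
    """Keep the largest set of candidate matches (i, j) whose pairwise 3-D
    distances agree between the two frames: build a compatibility graph and
    find its maximum clique by brute force (fine for tiny candidate sets)."""
    pairs = list(candidate_pairs)
    n = len(pairs)
    # Edge (p, q) iff the two matches preserve the inter-point distance.
    compatible = np.zeros((n, n), dtype=bool)
    for p in range(n):
        for q in range(p + 1, n):
            ia, ja = pairs[p]
            ib, jb = pairs[q]
            da = np.linalg.norm(pts_a[ia] - pts_a[ib])
            db = np.linalg.norm(pts_b[ja] - pts_b[jb])
            compatible[p, q] = compatible[q, p] = abs(da - db) < tol
    # Enumerate subsets, largest first; the first fully compatible one wins.
    for size in range(n, 0, -1):
        for subset in itertools.combinations(range(n), size):
            if all(compatible[p, q] for p, q in itertools.combinations(subset, 2)):
                return [pairs[k] for k in subset]
    return []

# Frame B is frame A translated by 5 m; one candidate match is spurious.
pts_a = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
pts_b = pts_a + np.array([5.0, 0.0, 0.0])
candidates = [(0, 0), (1, 1), (2, 2), (3, 3), (0, 3)]   # last one is wrong
good = consistent_matches(pts_a, pts_b, candidates)
```

The spurious match (0, 3) is rejected because it cannot be in a clique with all four correct matches.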

  2. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

Irregular shape objects with different 3-dimensional (3D) appearances are difficult to shape into a customized uniform pattern by current laser machining approaches. A laser galvanometric scanning system (LGS) could be a potential candidate since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of those irregular shape objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which takes advantage of both the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  3. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    NASA Astrophysics Data System (ADS)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
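The core idea of relative bias compensation can be sketched in a heavily simplified form: if matched tie points are systematically offset from where the sensor model predicts them, a robust constant image-space offset can be estimated and applied. The actual algorithm works hierarchically at sub-pixel level in object space; the median estimator and the synthetic data below are only illustrative:

```python
import numpy as np

def estimate_rpc_bias(predicted, matched):
    """Estimate a constant (line, sample) bias between tie-point positions a
    sensor model predicts and where image matching actually finds them.
    A median offset stands in for the paper's hierarchical adjustment."""
    d = np.asarray(matched, dtype=float) - np.asarray(predicted, dtype=float)
    return np.median(d, axis=0)

# Synthetic tie points: a true bias of (+2.0, -1.5) pixels, small matching
# noise, and one blunder that the median estimator should shrug off.
rng = np.random.default_rng(0)
pred = rng.uniform(0, 1000, size=(50, 2))
match = pred + np.array([2.0, -1.5]) + rng.normal(0, 0.05, size=(50, 2))
match[0] += 40.0                       # a spurious match
bias = estimate_rpc_bias(pred, match)
corrected = pred + bias                # bias-compensated predictions
```

With the bias removed, the matching search-space around each predicted position can be shrunk accordingly.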

  4. Real-time photorealistic stereoscopic rendering of fire

    NASA Astrophysics Data System (ADS)

    Rose, Benjamin M.; McAllister, David F.

    2007-02-01

We propose a method for real-time photorealistic stereo rendering of the natural phenomenon of fire. Applications include the use of virtual reality in fire fighting, military training, and entertainment. Rendering fire in real-time presents a challenge because of the transparency and non-static fluid-like behavior of fire. It is well known that, in general, methods that are effective for monoscopic rendering are not necessarily easily extended to stereo rendering because monoscopic methods often do not provide the depth information necessary to produce the parallax required for binocular disparity in stereoscopic rendering. We investigate the existing techniques used for monoscopic rendering of fire and discuss their suitability for extension to real-time stereo rendering. Methods include the use of precomputed textures, dynamic generation of textures, and rendering models resulting from the approximation of solutions of fluid dynamics equations through the use of ray-tracing algorithms. We have found that our method based on billboarding is effective for attaining real-time frame rates. Slicing is used to simulate depth. 2D texture images are mapped onto polygons, and alpha blending is used to treat transparency. We can use video recordings or prerendered high-quality images of fire as textures to attain photorealistic stereo.

  5. The Impact of 3D Stacking and Technology Scaling on the Power and Area of Stereo Matching Processors.

    PubMed

    Ok, Seung-Ho; Lee, Yong-Hwan; Shim, Jae Hoon; Lim, Sung Kyu; Moon, Byungin

    2017-02-22

    Recently, stereo matching processors have been adopted in real-time embedded systems such as intelligent robots and autonomous vehicles, which require minimal hardware resources and low power consumption. Meanwhile, thanks to the through-silicon via (TSV), three-dimensional (3D) stacking technology has emerged as a practical solution to achieving the desired requirements of a high-performance circuit. In this paper, we present the benefits of 3D stacking and process technology scaling on stereo matching processors. We implemented 2-tier 3D-stacked stereo matching processors with GlobalFoundries 130-nm and Nangate 45-nm process design kits and compare them with their two-dimensional (2D) counterparts to identify comprehensive design benefits. In addition, we examine the findings from various analyses to identify the power benefits of 3D-stacked integrated circuit (IC) and device technology advancements. From experiments, we observe that the proposed 3D-stacked ICs, compared to their 2D IC counterparts, obtain 43% area, 13% power, and 14% wire length reductions. In addition, we present a logic partitioning method suitable for a pipeline-based hardware architecture that minimizes the use of TSVs.

  7. Solar Eclipse, STEREO Style

    NASA Technical Reports Server (NTRS)

    2007-01-01

There was a transit of the Moon across the face of the Sun - but it could not be seen from Earth. This sight was visible only from the STEREO-B spacecraft in its orbit about the sun, trailing behind the Earth. NASA's STEREO mission consists of two spacecraft launched in October, 2006 to study solar storms. The transit started at 1:56 a.m. EST and continued for 12 hours until 1:57 p.m. EST. STEREO-B is currently about 1 million miles from the Earth, 4.4 times farther away from the Moon than we are on Earth. As a result, the Moon appears 4.4 times smaller than what we are used to. This is still, however, much larger than, say, the planet Venus appeared when it transited the Sun as seen from Earth in 2004. This alignment of STEREO-B and the Moon is not just due to luck. It was arranged with a small tweak to STEREO-B's orbit last December. The transit is quite useful to STEREO scientists for measuring the focus and the amount of scattered light in the STEREO imagers and for determining the pointing of the STEREO coronagraphs. The Sun as it appears in these images and in each frame of the movie is a composite of nearly simultaneous images in four different wavelengths of extreme ultraviolet light that were separated into color channels and then recombined with some level of transparency for each.

  8. Building Change Detection in Very High Resolution Satellite Stereo Image Time Series

    NASA Astrophysics Data System (ADS)

    Tian, J.; Qin, R.; Cerra, D.; Reinartz, P.

    2016-06-01

There is an increasing demand for robust methods for urban sprawl monitoring. The steadily increasing number of high resolution and multi-view sensors allows producing datasets with high temporal and spatial resolution; however, less effort has been dedicated to employing very high resolution (VHR) satellite image time series (SITS) to monitor changes in buildings with higher accuracy. In addition, these VHR data are often acquired from different sensors. The objective of this research is to propose a robust time-series data analysis method for VHR stereo imagery. Firstly, the spatial-temporal information of the stereo imagery and the Digital Surface Models (DSMs) generated from them are combined, and building probability maps (BPM) are calculated for all acquisition dates. In the second step, an object-based change analysis is performed based on the derivative features of the BPM sets. Change consistency between the object level and the pixel level is checked to remove any outlier pixels. Results are assessed on six pairs of VHR satellite images acquired within a time span of 7 years. The evaluation results prove the efficiency of the proposed method.

  9. Target-based calibration method for multifields of view measurement using multiple stereo digital image correlation systems

    NASA Astrophysics Data System (ADS)

    Dong, Shuai; Yu, Shanshan; Huang, Zheng; Song, Shoutan; Shao, Xinxing; Kang, Xin; He, Xiaoyuan

    2017-12-01

Multiple digital image correlation (DIC) systems can enlarge the measurement field without losing effective resolution in the area of interest (AOI). However, the results calculated by each sub-stereo DIC system are in most cases located in its own local coordinate system. To stitch the data obtained by each individual system, a data merging algorithm is presented in this paper for global measurement with multiple stereo DIC systems. A set of encoded targets is employed to assist the extrinsic calibration, of which the three-dimensional (3-D) coordinates are reconstructed via digital close-range photogrammetry. Combining the 3-D targets with precalibrated intrinsic parameters of all cameras, the extrinsic calibration is significantly simplified. After calculation in the sub-stereo DIC systems, all data can be merged into a universal coordinate system based on the extrinsic calibration. Four stereo DIC systems are applied to a four-point bending experiment on a steel-reinforced concrete beam structure. Results demonstrate high accuracy for the displacement data merging in the overlapping fields of view (FOVs) and show feasibility for the distributed-FOV measurement.
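The merging step, mapping each sub-system's local coordinates into a universal frame via shared encoded targets, amounts to estimating a rigid transform. A minimal sketch using the standard Kabsch/Procrustes solution (not necessarily the authors' exact formulation), with synthetic target coordinates:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t
    (Kabsch/Procrustes). Here it maps one stereo-DIC system's local frame
    onto the universal frame through shared encoded targets."""
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Encoded targets: known global coordinates and their view in one local frame.
rng = np.random.default_rng(1)
global_targets = rng.uniform(-1, 1, size=(6, 3))
angle = np.radians(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
local_targets = (global_targets - t_true) @ R_true   # local = R^T (global - t)

R_est, t_est = rigid_transform(local_targets, global_targets)
merged = local_targets @ R_est.T + t_est             # local data in the universal frame
```

Once R and t are known, every displacement field measured by that sub-system can be re-expressed in the universal coordinate system the same way.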

  10. Enhancing Positioning Accuracy in Urban Terrain by Fusing Data from a GPS Receiver, Inertial Sensors, Stereo-Camera and Digital Maps for Pedestrian Navigation

    PubMed Central

    Przemyslaw, Baranski; Pawel, Strumillo

    2012-01-01

    The paper presents an algorithm for estimating a pedestrian location in an urban environment. The algorithm is based on the particle filter and uses different data sources: a GPS receiver, inertial sensors, probability maps and a stereo camera. Inertial sensors are used to estimate a relative displacement of a pedestrian. A gyroscope estimates a change in the heading direction. An accelerometer is used to count a pedestrian's steps and their lengths. The so-called probability maps help to limit GPS inaccuracy by imposing constraints on pedestrian kinematics, e.g., it is assumed that a pedestrian cannot cross buildings, fences etc. This limits position inaccuracy to ca. 10 m. Incorporation of depth estimates derived from a stereo camera that are compared to the 3D model of an environment has enabled further reduction of positioning errors. As a result, for 90% of the time, the algorithm is able to estimate a pedestrian location with an error smaller than 2 m, compared to an error of 6.5 m for a navigation based solely on GPS. PMID:22969321
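The particle-filter fusion described above can be sketched in toy form: dead-reckoned steps propagate the particles, a Gaussian GPS likelihood weights them, and a map constraint zeroes the weight of particles inside a building. The building geometry, noise levels, and step model below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def inside_building(p):
    """Map constraint: a hypothetical building occupies x in [4, 6], y in [2, 10]."""
    return (4.0 <= p[:, 0]) & (p[:, 0] <= 6.0) & (2.0 <= p[:, 1]) & (p[:, 1] <= 10.0)

def particle_filter_step(particles, step_vec, gps_fix, gps_sigma=10.0):
    """One predict/update/resample cycle of the filter."""
    # Predict: apply the step estimated from the accelerometer/gyroscope, plus noise.
    particles = particles + step_vec + rng.normal(0, 0.2, particles.shape)
    # Update: Gaussian GPS likelihood; the map constraint kills infeasible particles.
    d2 = np.sum((particles - gps_fix) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / gps_sigma**2)
    w[inside_building(particles)] = 0.0
    w /= w.sum()
    # Resample (multinomial).
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# A pedestrian walks east along y = 0, past (but never through) the building.
particles = rng.normal([0.0, 0.0], 1.0, size=(2000, 2))
truth = np.array([0.0, 0.0])
for _ in range(10):
    truth = truth + np.array([0.8, 0.0])
    gps = truth + rng.normal(0, 8.0, 2)          # noisy GPS fix
    particles = particle_filter_step(particles, np.array([0.8, 0.0]), gps)
estimate = particles.mean(axis=0)
```

Even with ~8 m GPS noise, the step odometry and the map constraint keep the posterior concentrated near the true track, which is the mechanism behind the reported sub-2 m accuracy.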

  11. Study on portable optical 3D coordinate measuring system

    NASA Astrophysics Data System (ADS)

    Ren, Tongqun; Zhu, Jigui; Guo, Yinbiao

    2009-05-01

A portable optical 3D coordinate measuring system based on digital Close Range Photogrammetry (CRP) technology and binocular stereo vision theory is researched. Three highly stable infrared LEDs are set on a hand-held target to provide measuring features and establish the target coordinate system. Ray-intersection-based field directional calibration is done for the intersectant binocular measurement system, composed of two cameras, by means of a reference ruler. The hand-held target, controlled by Bluetooth wireless communication, is moved freely to implement contact measurement. The position of the ceramic contact ball is pre-calibrated accurately. The coordinates of the target feature points are obtained by the binocular stereo vision model from the stereo image pairs taken by the cameras. Combining radius compensation for the contact ball and residual error correction, the object point can be resolved by transfer of axes using the target coordinate system as intermediary. This system is suitable for on-field large-scale measurement because of its excellent portability, high precision, wide measuring volume, great adaptability and high degree of automation. Tests show that the measuring precision is close to ±0.1 mm/m.

  12. Acquisition of stereo panoramas for display in VR environments

    NASA Astrophysics Data System (ADS)

    Ainsworth, Richard A.; Sandin, Daniel J.; Schulze, Jurgen P.; Prudhomme, Andrew; DeFanti, Thomas A.; Srinivasan, Madhusudhanan

    2011-03-01

    Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.

  13. JAVA Stereo Display Toolkit

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2008-01-01

This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware using the same API (application-program interface), and has the ability to simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that simply accomplishes the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, 3D cursor, or overlays, all of which can be built using this toolkit.
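The color-anaglyph rule the toolkit describes, red band from the left image with green/blue bands from the right, is simple to sketch (in Python/NumPy rather than the toolkit's Java, purely for illustration):

```python
import numpy as np

def color_anaglyph(left, right):
    """Compose a color anaglyph: red channel from the left image, green and
    blue channels from the right image. Images are H x W x 3 uint8 arrays."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]       # red        <- left eye
    out[..., 1:] = right[..., 1:]    # green/blue <- right eye
    return out

# Two small synthetic views of a white vertical bar with 1 px of disparity:
left = np.zeros((4, 8, 3), dtype=np.uint8)
right = np.zeros((4, 8, 3), dtype=np.uint8)
left[:, 2, :] = 255    # bar at column 2 in the left view
right[:, 3, :] = 255   # bar shifted to column 3 in the right view
ana = color_anaglyph(left, right)
```

Viewed through red/blue glasses, each eye then sees only its own band(s), so the horizontal offset between the channels carries the binocular disparity.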

  14. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    PubMed Central

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search for the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm. PMID:22319408
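After rectification, the SAD search along the conjugate epipolar line reduces to a 1-D scan along a single image row. A minimal sketch of such a scan (block size, search range, and disparity sign convention are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def sad_match(left, right, row, col, block=5, max_disp=20):
    """Find the column in the rectified right image whose block best matches
    the block around (row, col) in the left image, searching only along the
    same row and minimising the sum of absolute differences (SAD)."""
    h = block // 2
    tpl = left[row - h:row + h + 1, col - h:col + h + 1].astype(int)
    best_col, best_sad = col, np.inf
    for d in range(max_disp + 1):
        c = col - d                  # positive disparity shifts the match left
        if c - h < 0:
            break
        cand = right[row - h:row + h + 1, c - h:c + h + 1].astype(int)
        sad = np.abs(tpl - cand).sum()
        if sad < best_sad:
            best_sad, best_col = sad, c
    return best_col

# Synthetic rectified pair: the right view is the left view shifted 7 px.
rng = np.random.default_rng(3)
left = rng.integers(0, 256, size=(40, 60)).astype(np.uint8)
d_true = 7
right = np.zeros_like(left)
right[:, :-d_true] = left[:, d_true:]    # right[:, c] == left[:, c + d_true]
```

For a feature at left-image column 30, the matcher should land on right-image column 23.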

  15. Stereo and IMU-Assisted Visual Odometry for Small Robots

    NASA Technical Reports Server (NTRS)

    2012-01-01

This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320×240), or 8 fps at VGA (Video Graphics Array, 640×480) resolutions, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.

  16. Improved stereo matching applied to digitization of greenhouse plants

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Xu, Lihong; Li, Dawei; Gu, Xiaomeng

    2015-03-01

The digitization of greenhouse plants is an important aspect of digital agriculture. Its ultimate aim is to reconstruct a visible and interoperable virtual plant model on the computer by using state-of-the-art image processing and computer graphics technologies. The most prominent difficulties of the digitization of greenhouse plants include how to acquire the three-dimensional shape data of greenhouse plants and how to carry out its realistic stereo reconstruction. Concerning these issues, an effective method for the digitization of greenhouse plants is proposed by using a binocular stereo vision system in this paper. Stereo vision is a technique aiming at inferring depth information from two or more cameras; it consists of four parts: calibration of the cameras, stereo rectification, search of stereo correspondence and triangulation. Through the final triangulation procedure, the 3D point cloud of the plant can be achieved. The proposed stereo vision system can facilitate further segmentation of plant organs such as stems and leaves; moreover, it can provide reliable digital samples for the visualization of greenhouse tomato plants.
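The final triangulation step for a rectified pair follows the standard pinhole relations: disparity d = xl - xr gives depth Z = fB/d, from which X and Y are back-projected. A minimal sketch with illustrative numbers (focal length in pixels, baseline in metres, coordinates relative to the principal point):

```python
def triangulate(xl, xr, y, f, baseline):
    """Depth from a rectified stereo pair: disparity d = xl - xr (pixels),
    Z = f * B / d, with X and Y back-projected through the left camera."""
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = f * baseline / d
    X = xl * Z / f
    Y = y * Z / f
    return X, Y, Z

# f = 800 px, baseline = 0.1 m, disparity = 20 px  ->  Z = 4 m
X, Y, Z = triangulate(xl=40.0, xr=20.0, y=10.0, f=800.0, baseline=0.1)
```

Applying this to every matched pixel pair yields the 3D point cloud of the plant described above.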

  17. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romps, David; Oktem, Rusen

    2017-10-31

The three pairs of stereo camera setups aim to provide synchronized and stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.

  18. Development of a stereo 3-D pictorial primary flight display

    NASA Technical Reports Server (NTRS)

    Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille

    1989-01-01

Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth perception which otherwise might be lacking. In addition, the third dimension could also be used as an additional dimension along which information can be encoded. Historically, stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste. In the last example, the source of the stereo images generally has been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described. The applicability of stereo 3-D displays for aerospace crew stations to meet the anticipated needs of the 2000 to 2020 time frame is investigated. Although the actual equipment that could be used in an aerospace vehicle is not currently available, the lab research is necessary to determine where stereo 3-D enhances the display of information and how the displays should be formatted.

  19. Two-terminal video coding.

    PubMed

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  20. Extraction of Airport Features from High Resolution Satellite Imagery for Design and Risk Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, Chris; Qiu, You-Liang; Jensen, John R.; Schill, Steven R.; Floyd, Mike

    2001-01-01

The LPA Group, consisting of 17 offices located throughout the eastern and central United States, is an architectural, engineering and planning firm specializing in the development of Airports, Roads and Bridges. The primary focus of this ARC project is assisting their aviation specialists who work in the areas of Airport Planning, Airfield Design, Landside Design, Terminal Building Planning and Design, and various other construction services. The LPA Group wanted to test the utility of high-resolution commercial satellite imagery for the purpose of extracting airport elevation features in the glide path areas surrounding the Columbia Metropolitan Airport. By incorporating remote sensing techniques into their airport planning process, LPA wanted to investigate whether or not it is possible to save time and money while achieving accuracy equivalent to that of traditional planning methods. The Affiliate Research Center (ARC) at the University of South Carolina investigated the use of remotely sensed imagery for the extraction of feature elevations in the glide path zone. A stereo pair of IKONOS panchromatic satellite images, which have a spatial resolution of 1 × 1 m, was used to determine elevations of aviation obstructions such as buildings, trees, towers and fence-lines. A validation dataset was provided by The LPA Group to assess the accuracy of the measurements derived from the IKONOS imagery. The initial goal of this project was to test the utility of IKONOS imagery in feature extraction using ERDAS Stereo Analyst. This goal was never achieved due to problems with ERDAS software support of the IKONOS sensor model and the unavailability of essential sensor model information from Space Imaging. The obstacles encountered in this project pertaining to ERDAS Stereo Analyst and IKONOS imagery will be reviewed in more detail later in this report. 
As a result of the technical difficulties with Stereo Analyst, ERDAS OrthoBASE was used to derive aviation obstruction measurements for this project. After collecting ancillary data such as GPS locations and South Carolina Geodetic Survey and Aero Dynamics ground survey points to set up the OrthoBASE block file, measurements were taken of the various glide path obstructions and compared to the validation dataset. This process yielded the following conclusions: the IKONOS stereo model in conjunction with Imagine OrthoBASE can provide The LPA Group with a fast and cost-efficient method for assessing aviation obstructions, and by creating our own stereo model we achieved accuracy better than that of currently available commercial products.

  1. Clinical study of quantitative diagnosis of early cervical cancer based on the classification of acetowhitening kinetics

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Cheung, Tak-Hong; Yim, So-Fan; Qu, Jianan Y.

    2010-03-01

    A quantitative colposcopic imaging system for the diagnosis of early cervical cancer is evaluated in a clinical study. This imaging technology based on 3-D active stereo vision and motion tracking extracts diagnostic information from the kinetics of acetowhitening process measured from the cervix of human subjects in vivo. Acetowhitening kinetics measured from 137 cervical sites of 57 subjects are analyzed and classified using multivariate statistical algorithms. Cross-validation methods are used to evaluate the performance of the diagnostic algorithms. The results show that an algorithm for screening precancer produced 95% sensitivity (SE) and 96% specificity (SP) for discriminating normal and human papillomavirus (HPV)-infected tissues from cervical intraepithelial neoplasia (CIN) lesions. For a diagnostic algorithm, 91% SE and 90% SP are achieved for discriminating normal tissue, HPV infected tissue, and low-grade CIN lesions from high-grade CIN lesions. The results demonstrate that the quantitative colposcopic imaging system could provide objective screening and diagnostic information for early detection of cervical cancer.
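The sensitivity (SE) and specificity (SP) figures quoted above follow from the standard confusion-matrix definitions. As a quick illustration (with toy labels, not data from the study), a minimal sketch in Python:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """SE = TP/(TP+FN), SP = TN/(TN+FP) for binary labels (1 = lesion)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 10 sites, one missed lesion and one false alarm.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
se, sp = sensitivity_specificity(y_true, y_pred)
print(se, sp)  # 0.8 0.8
```

In the study itself these rates were estimated under cross-validation, i.e. computed on held-out sites rather than the training data.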

  2. Towards Autonomous Agriculture: Automatic Ground Detection Using Trinocular Stereovision

    PubMed Central

    Reina, Giulio; Milella, Annalisa

    2012-01-01

Autonomous driving is a challenging problem, particularly when the domain is unstructured, as in an outdoor agricultural setting. Advanced perception systems are therefore required to sense and understand the surrounding environment, recognizing artificial and natural structures, topology, vegetation, and paths. In this paper, a self-learning framework is proposed to automatically train a ground classifier for scene interpretation and autonomous navigation based on multi-baseline stereovision. The use of rich 3D data is emphasized: the sensor output includes both range and color information of the surrounding environment. Two distinct classifiers are presented, one based on geometric data that can detect the broad class of ground, and one based on color data that can further segment ground into subclasses. The geometry-based classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate the geometric appearance of 3D stereo-generated data with class labels. It then makes predictions based on past observations, and also provides training labels to the color-based classifier. Once trained, the color-based classifier is able to recognize similar terrain classes in stereo imagery. The system is continuously updated online using the latest stereo readings, making it feasible for long-range and long-duration navigation over changing environments. Experimental results, obtained with a tractor test platform operating in a rural environment, are presented to validate this approach, showing an average classification precision and recall of 91.0% and 77.3%, respectively.

  3. Latent stereopsis for motion in depth in strabismic amblyopia.

    PubMed

    Hess, Robert F; Mansouri, Behzad; Thompson, Benjamin; Gheorghiu, Elena

    2009-10-01

To investigate the residual stereo function of a group of 15 patients with strabismic amblyopia, using motion-in-depth stimuli that allow contributions from local disparity mechanisms to be discriminated from those of local velocity mechanisms as a function of the rate of depth change. Stereo performance (percentage correct) was measured as a function of the rate of depth change for dynamic random dot stimuli that were either temporally correlated or uncorrelated. Residual stereoscopic function was demonstrated for motion in depth based on local disparity information in 2 of the 15 observers with strabismic amblyopia. The use of a neutral-density (ND) filter in front of the fixing eye enhanced motion-in-depth performance in four subjects randomly selected from the group that originally displayed only chance performance. This finding held across temporal rate and for both correlated and uncorrelated stimuli, suggesting that the improvement was disparity based. The opposite occurred in a group of normal subjects. A separate experiment supported the hypothesis that the beneficial effect of the ND filter is due to its contrast- and/or mean-luminance-reducing effects rather than any interocular time delay it may introduce, and that the effect is specific to motion-in-depth performance, as similar improvements were not found for static stereopsis. A small proportion of observers with strabismic amblyopia exhibit residual, disparity-based performance for motion in depth. Furthermore, some observers with strabismic amblyopia who do not display any significant stereo performance for motion in depth under normal binocular viewing may display above-chance stereo performance if the degree of interocular suppression is reduced. The authors term this phenomenon latent stereopsis.

  4. LWIR passive perception system for stealthy unmanned ground vehicle night operations

    NASA Astrophysics Data System (ADS)

    Lee, Daren; Rankin, Arturo; Huertas, Andres; Nash, Jeremy; Ahuja, Gaurav; Matthies, Larry

    2016-05-01

Resupplying forward-deployed units in rugged terrain in the presence of hostile forces creates a high threat to manned air and ground vehicles. An autonomous unmanned ground vehicle (UGV) capable of navigating stealthily at night in off-road and on-road terrain could significantly increase the safety and success rate of such resupply missions for warfighters. Passive night-time perception of terrain and obstacle features is a vital requirement for such missions. As part of the ONR 30 Autonomy Team, the Jet Propulsion Laboratory developed a passive, low-cost night-time perception system under the ONR Expeditionary Maneuver Warfare and Combating Terrorism Applied Research program. Using a stereo pair of forward-looking LWIR uncooled microbolometer cameras, the perception system generates disparity maps using a local window-based stereo correlator to achieve real-time performance while maintaining low power consumption. To overcome the lower signal-to-noise ratio and spatial resolution of LWIR thermal imaging technologies, a series of pre-filters was applied to the input images to increase the image contrast, and stereo correlator enhancements were applied to increase the disparity density. To overcome false positives generated by mixed pixels, noisy disparities from repeated textures, and uncertainty in far-range measurements, a series of consistency, multi-resolution, and temporal post-filters was employed to improve the fidelity of the output range measurements. The stereo processing leverages multi-core processors and runs under the Robot Operating System (ROS). The night-time passive perception system was tested and evaluated on fully autonomous testbed ground vehicles at SPAWAR Systems Center Pacific (SSC Pacific) and Marine Corps Base Camp Pendleton, California. This paper describes the challenges, techniques, and experimental results of developing a passive, low-cost perception system for night-time autonomous navigation.

  5. Statistical analysis of data and modeling of nanodust measured by STEREO/WAVES at 1 AU

    NASA Astrophysics Data System (ADS)

    Belheouane, S.; Zaslavsky, A.; Meyer-Vernet, N.; Issautier, K.; Czechowski, A.; Mann, I.; Le Chat, G.; Zouganelis, I.; Maksimovic, M.

    2012-12-01

We study the flux of dust particles of nanometer size measured at 1 AU by the S/WAVES instrument aboard the twin STEREO spacecraft. When they impact the spacecraft at very high speed, these nanodust particles, first detected by Meyer-Vernet et al. (2009), generate plasma clouds and produce voltage pulses measured by the electric antennas. The Time Domain Sampler (TDS) of the radio and plasma instrument produces temporal windows containing several pulses. We perform a statistical study of the distribution of pulse amplitudes and arrival times in the measuring window during the 2007-2012 period. We interpret the results using simulations of the dynamics of nanodust in the solar wind based on the model of Czechowski and Mann (2010). We also investigate the variations of nanodust fluxes while STEREO rotates about the sunward axis (roll); this reveals that some directions are privileged.

  6. Stereo imaging velocimetry for microgravity applications

    NASA Technical Reports Server (NTRS)

    Miller, Brian B.; Meyer, Maryjo B.; Bethea, Mark D.

    1994-01-01

    Stereo imaging velocimetry is the quantitative measurement of three-dimensional flow fields using two sensors recording data from different vantage points. The system described in this paper, under development at NASA Lewis Research Center in Cleveland, Ohio, uses two CCD cameras placed perpendicular to one another, laser disk recorders, an image processing substation, and a 586-based computer to record data at standard NTSC video rates (30 Hertz) and reduce it offline. The flow itself is marked with seed particles, hence the fluid must be transparent. The velocimeter tracks the motion of the particles, and from these we deduce a multipoint (500 or more), quantitative map of the flow. Conceptually, the software portion of the velocimeter can be divided into distinct modules. These modules are: camera calibration, particle finding (image segmentation) and centroid location, particle overlap decomposition, particle tracking, and stereo matching. We discuss our approach to each module, and give our currently achieved speed and accuracy for each where available.

  7. Temporal integration property of stereopsis after higher-order aberration correction

    PubMed Central

    Kang, Jian; Dai, Yun; Zhang, Yudong

    2015-01-01

Based on a binocular adaptive optics visual simulator, we investigated the effect of higher-order aberration correction on the temporal integration property of stereopsis. Stereo threshold for line stimuli, viewed in 550 nm monochromatic light, was measured as a function of exposure duration, with higher-order aberrations uncorrected, binocularly corrected, or monocularly corrected. Under all optical conditions, stereo threshold decreased with increasing exposure duration until a steady-state threshold was reached. The critical duration was determined by a quadratic summation model, and the high goodness of fit suggested that this model was reasonable. For normal subjects, the slope of stereo threshold versus exposure duration was about −0.5 on logarithmic coordinates, and the critical duration was about 200 ms. Both the slope and the critical duration were independent of the optical condition of the eye, showing no significant effect of higher-order aberration correction on the temporal integration property of stereopsis. PMID:26601010

  8. Stereo particle image velocimetry set up for measurements in the wake of scaled wind turbines

    NASA Astrophysics Data System (ADS)

    Campanardi, Gabriele; Grassi, Donato; Zanotti, Alex; Nanos, Emmanouil M.; Campagnolo, Filippo; Croce, Alessandro; Bottasso, Carlo L.

    2017-08-01

Stereo particle image velocimetry measurements were carried out in the boundary layer test section of the Politecnico di Milano large wind tunnel to survey the wake of a scaled wind turbine model designed and developed by Technische Universität München. The stereo PIV instrumentation was set up to survey the three velocity components on cross-flow planes at different longitudinal locations. The area of investigation covered the entire extent of the wind turbine's wake, which was scanned using two separate traversing systems for the laser and the cameras. This instrumentation set-up enabled high-quality results, suitable for characterising the behaviour of the flow field in the wake of the scaled wind turbine, to be obtained rapidly. Such results would be very useful for evaluating the performance of wind farm control methodologies based on wake redirection and for the validation of CFD tools.

  9. GPU-based real-time trinocular stereo vision

    NASA Astrophysics Data System (ADS)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which can benefit applications such as distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image-processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
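The winner-take-all fusion of disparities from different camera pairs can be illustrated on the CPU with a toy SAD (sum of absolute differences) cost volume. The sketch below assumes equal horizontal and vertical baselines, so both pairs share the same disparity index; it is a simplified stand-in for the paper's GPU implementation, not a reproduction of it:

```python
import numpy as np

def box3(a):
    """3x3 box filter via shifted sums (circular padding at the edges)."""
    return sum(np.roll(np.roll(a, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

def sad_cost_volume(ref, target, max_d, axis):
    """Window-aggregated SAD cost for integer disparities 0..max_d,
    shifting `target` back along `axis` (1 = horizontal, 0 = vertical)."""
    cost = np.empty((max_d + 1,) + ref.shape)
    for d in range(max_d + 1):
        cost[d] = box3(np.abs(ref - np.roll(target, d, axis=axis)))
    return cost

def trinocular_wta(ref, right, below, max_d):
    """Sum the horizontal and vertical cost volumes (equal baselines
    assumed, so both index the same disparity) and take the winner."""
    total = (sad_cost_volume(ref, right, max_d, axis=1)
             + sad_cost_volume(ref, below, max_d, axis=0))
    return np.argmin(total, axis=0)  # winner-take-all per pixel

rng = np.random.default_rng(0)
ref = rng.random((48, 48))
true_d = 3
right = np.roll(ref, -true_d, axis=1)   # horizontal pair, disparity 3
below = np.roll(ref, -true_d, axis=0)   # vertical pair, same disparity
disp = trinocular_wta(ref, right, below, max_d=8)
print(int(disp.mean()))  # 3 everywhere for this synthetic scene
```

The second cost volume from the vertical pair is what disambiguates matches along horizontal texture, which is the usual argument for trinocular over binocular accuracy.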

  10. Classification of road sign type using mobile stereo vision

    NASA Astrophysics Data System (ADS)

    McLoughlin, Simon D.; Deegan, Catherine; Fitzgerald, Conor; Markham, Charles

    2005-06-01

This paper presents a portable mobile stereo vision system designed for the assessment of road signage and delineation (lines and reflective pavement markers, or "cat's eyes"). This novel system allows both geometric and photometric measurements to be made on objects in a scene. Global Positioning System technology provides important location data for any measurements made. Using the system, it has been shown that road signs can be classified by the nature of their reflectivity. This is achieved by examining the changes in reflected light intensity with changes in range (facilitated by stereo vision). Signs assessed include those made from retro-reflective materials, those made from diffuse reflective materials, and those made from diffuse reflective materials with local illumination. Field-testing results demonstrate the system's ability to classify objects in the scene based on their reflective properties. The paper includes a discussion of a physical model that supports the experimental data.

  11. Study of Small-Scale Anisotropy of Ultra-High-Energy Cosmic Rays Observed in Stereo by the High Resolution Fly's Eye Detector

    NASA Astrophysics Data System (ADS)

    Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; BenZvi, S.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.; HIRES Collaboration

    2004-08-01

The High Resolution Fly's Eye (HiRes) experiment is an air fluorescence detector which, operating in stereo mode, has a typical angular resolution of 0.6° and is sensitive to cosmic rays with energies above 10^18 eV. The HiRes cosmic-ray detector is thus an excellent instrument for the study of the arrival directions of ultra-high-energy cosmic rays. We present the results of a search for anisotropies in the distribution of arrival directions on small scales (<5°) and at the highest energies (>10^19 eV). The search is based on data recorded between 1999 December and 2004 January, with a total of 271 events above 10^19 eV. No small-scale anisotropy is found, and the strongest clustering found in the HiRes stereo data is consistent at the 52% level with the null hypothesis of isotropically distributed arrival directions.

  12. Developments in analytical instrumentation

    NASA Astrophysics Data System (ADS)

    Petrie, G.

    The situation regarding photogrammetric instrumentation has changed quite dramatically over the last 2 or 3 years with the withdrawal of most analogue stereo-plotting machines from the market place and their replacement by analytically based instrumentation. While there have been few new developments in the field of comparators, there has been an explosive development in the area of small, relatively inexpensive analytical stereo-plotters based on the use of microcomputers. In particular, a number of new instruments have been introduced by manufacturers who mostly have not been associated previously with photogrammetry. Several innovative concepts have been introduced in these small but capable instruments, many of which are aimed at specialised applications, e.g. in close-range photogrammetry (using small-format cameras); for thematic mapping (by organisations engaged in environmental monitoring or resources exploitation); for map revision, etc. Another innovative and possibly significant development has been the production of conversion kits to convert suitable analogue stereo-plotting machines such as the Topocart, PG-2 and B-8 into fully fledged analytical plotters. The larger and more sophisticated analytical stereo-plotters are mostly being produced by the traditional mainstream photogrammetric systems suppliers with several new instruments and developments being introduced at the top end of the market. These include the use of enlarged photo stages to handle images up to 25 × 50 cm format; the complete integration of graphics workstations into the analytical plotter design; the introduction of graphics superimposition and stereo-superimposition; the addition of correlators for the automatic measurement of height, etc. The software associated with this new analytical instrumentation is now undergoing extensive re-development with the need to supply photogrammetric data as input to the more sophisticated G.I.S. 
systems now being installed by clients, instead of the data being used mostly in the digital mapping systems operated in-house by mapping organisations. These various new hardware and software developments are reported upon and analysed in this Invited Paper presented to ISPRS Commission II at the 1988 Kyoto Congress.

  13. Space-time measurements of oceanic sea states

    NASA Astrophysics Data System (ADS)

    Fedele, Francesco; Benetazzo, Alvise; Gallego, Guillermo; Shih, Ping-Chang; Yezzi, Anthony; Barbariol, Francesco; Ardhuin, Fabrice

    2013-10-01

Stereo video techniques are effective for estimating the space-time wave dynamics over an area of the ocean. Indeed, a stereo camera view allows retrieval of both spatial and temporal data whose statistical content is richer than that of time series data retrieved from point wave probes. We present an application of the Wave Acquisition Stereo System (WASS) for the analysis of offshore video measurements of gravity waves in the Northern Adriatic Sea and near the southern seashore of the Crimean peninsula, in the Black Sea. We use classical epipolar techniques to reconstruct the sea surface from the stereo pairs sequentially in time, viz. a sequence of spatial snapshots. We also present a variational approach that exploits the entire image data set, providing a global space-time imaging of the sea surface, viz. simultaneous reconstruction of several spatial snapshots of the surface that guarantees continuity of the sea surface in both space and time. Analysis of the WASS measurements shows that the sea surface can be accurately estimated in space and time together, yielding associated directional spectra and wave statistics at a point that agree well with probabilistic models. In particular, WASS stereo imaging is able to capture typical features of the wave surface, especially the crest-to-trough asymmetry due to second-order nonlinearities, and the observed shapes of large waves are well described by theoretical models based on the theory of quasi-determinism (Boccotti, 2000). Further, we investigate space-time extremes of the observed stationary sea states, viz. the largest surface wave heights expected over a given area during the sea state duration. The WASS analysis provides the first experimental proof that a space-time extreme is generally larger than that observed in time via point measurements, in agreement with predictions based on stochastic theories for global maxima of Gaussian fields.

  14. LROC Stereo Observations

    NASA Astrophysics Data System (ADS)

    Beyer, Ross A.; Archinal, B.; Li, R.; Mattson, S.; Moratto, Z.; McEwen, A.; Oberst, J.; Robinson, M.

    2009-09-01

    The Lunar Reconnaissance Orbiter Camera (LROC) will obtain two types of multiple overlapping coverage to derive terrain models of the lunar surface. LROC has two Narrow Angle Cameras (NACs), working jointly to provide a wider (in the cross-track direction) field of view, as well as a Wide Angle Camera (WAC). LRO's orbit precesses, and the same target can be viewed at different solar azimuth and incidence angles providing the opportunity to acquire `photometric stereo' in addition to traditional `geometric stereo' data. Geometric stereo refers to images acquired by LROC with two observations at different times. They must have different emission angles to provide a stereo convergence angle such that the resultant images have enough parallax for a reasonable stereo solution. The lighting at the target must not be radically different. If shadows move substantially between observations, it is very difficult to correlate the images. The majority of NAC geometric stereo will be acquired with one nadir and one off-pointed image (20 degree roll). Alternatively, pairs can be obtained with two spacecraft rolls (one to the left and one to the right) providing a stereo convergence angle up to 40 degrees. Overlapping WAC images from adjacent orbits can be used to generate topography of near-global coverage at kilometer-scale effective spatial resolution. Photometric stereo refers to multiple-look observations of the same target under different lighting conditions. LROC will acquire at least three (ideally five) observations of a target. These observations should have near identical emission angles, but with varying solar azimuth and incidence angles. These types of images can be processed via various methods to derive single pixel resolution topography and surface albedo. The LROC team will produce some topographic models, but stereo data collection is focused on acquiring the highest quality data so that such models can be generated later.

  15. Mastcam Stereo Analysis and Mosaics (MSAM)

    NASA Astrophysics Data System (ADS)

    Deen, R. G.; Maki, J. N.; Algermissen, S. S.; Abarca, H. E.; Ruoff, N. A.

    2017-06-01

    Describes a new PDART task that will generate stereo analysis products (XYZ, slope, etc.), terrain meshes, and mosaics (stereo, ortho, and Mast/Nav combos) for all MSL Mastcam images and deliver the results to PDS.

  16. BRDF invariant stereo using light transport constancy.

    PubMed

    Wang, Liang; Yang, Ruigang; Davis, James E

    2007-09-01

    Nearly all existing methods for stereo reconstruction assume that scene reflectance is Lambertian and make use of brightness constancy as a matching invariant. We introduce a new invariant for stereo reconstruction called light transport constancy (LTC), which allows completely arbitrary scene reflectance (bidirectional reflectance distribution functions (BRDFs)). This invariant can be used to formulate a rank constraint on multiview stereo matching when the scene is observed by several lighting configurations in which only the lighting intensity varies. In addition, we show that this multiview constraint can be used with as few as two cameras and two lighting configurations. Unlike previous methods for BRDF invariant stereo, LTC does not require precisely configured or calibrated light sources or calibration objects in the scene. Importantly, the new constraint can be used to provide BRDF invariance to any existing stereo method whenever appropriate lighting variation is available.

  17. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
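A bandpass image pyramid of the kind used for matching can be sketched as repeated blur-subtract-downsample. The block below is a simplified construction (the difference is taken at the same resolution, without the upsampling step of the full Burt-Adelson Laplacian pyramid) and is illustrative, not the patented implementation:

```python
import numpy as np

def blur(a, k=np.array([1, 4, 6, 4, 1]) / 16.0):
    """Separable 5-tap binomial low-pass filter (circular edge handling)."""
    for axis in (0, 1):
        a = sum(w * np.roll(a, s, axis=axis)
                for w, s in zip(k, range(-2, 3)))
    return a

def laplacian_pyramid(img, levels):
    """Bandpass pyramid: each level is the image minus its blurred copy;
    the blurred copy is downsampled by 2 to seed the next level."""
    pyr = []
    for _ in range(levels):
        low = blur(img)
        pyr.append(img - low)  # bandpass level used for matching
        img = low[::2, ::2]    # decimate by 2 for the next octave
    pyr.append(img)            # final low-pass residue
    return pyr

img = np.add.outer(np.arange(64.0), np.arange(64.0))
pyr = laplacian_pyramid(img, levels=3)
print([p.shape for p in pyr])  # [(64, 64), (32, 32), (16, 16), (8, 8)]
```

Matching on the bandpassed levels, as the patent describes, makes the least-squares correlation insensitive to local brightness offsets between the two cameras.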

  18. Precision Relative Positioning for Automated Aerial Refueling from a Stereo Imaging System

    DTIC Science & Technology

    2015-03-01

Precision Relative Positioning for Automated Aerial Refueling from a Stereo Imaging System. Thesis, Kyle P. Werner, 2Lt, USAF, AFIT-ENG-MS-15-M-048. Presented to the Faculty, Department of Electrical and Computer Engineering, Graduate School of... Approved for public release; distribution unlimited.

  19. A comparison of static near stereo acuity in youth baseball/softball players and non-ball players.

    PubMed

    Boden, Lauren M; Rosengren, Kenneth J; Martin, Daniel F; Boden, Scott D

    2009-03-01

    Although many aspects of vision have been investigated in professional baseball players, few studies have been performed in developing athletes. The issue of whether youth baseball players have superior stereopsis to nonplayers has not been addressed specifically. The purpose of this study was to determine if youth baseball/softball players have better stereo acuity than non-ball players. Informed consent was obtained from 51 baseball/softball players and 52 non-ball players (ages 10 to 18 years). Subjects completed a questionnaire, and their static near stereo acuity was measured using the Randot Stereotest (Stereo Optical Company, Chicago, Illinois). Stereo acuity was measured as the seconds of arc between the last pair of images correctly distinguished by the subject. The mean stereo acuity score was 25.5 +/- 1.7 seconds of arc in the baseball/softball players and 56.2 +/- 8.4 seconds of arc in the non-ball players. This difference was statistically significant (P < 0.00001). In addition, a perfect stereo acuity score of 20 seconds of arc was seen in 61% of the ball players and only 23% of the non-ball players (P = 0.0001). Youth baseball/softball players had significantly better static stereo acuity than non-ball players, comparable to professional ball players.

  20. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue- and red-channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red- and blue-channel sub-images using a simple but effective color crosstalk correction method. These separated blue- and red-channel sub-images are processed by the regular stereo-DIC method to retrieve the full-field 3D shape and deformation of the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
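A color crosstalk correction of this kind can be modeled as inverting a per-pixel 2x2 linear mix of the two optical paths. The sketch below is an assumption about the approach, and the crosstalk matrix values are illustrative, not calibrated figures from the paper:

```python
import numpy as np

# Hypothetical 2x2 crosstalk matrix: each recorded channel is a linear
# mix of the two optical paths (values are illustrative only).
M = np.array([[0.95, 0.08],
              [0.05, 0.92]])

def separate_channels(red_rec, blue_rec, crosstalk=M):
    """Undo linear red/blue crosstalk per pixel by inverting the 2x2 mix."""
    inv = np.linalg.inv(crosstalk)
    stacked = np.stack([red_rec, blue_rec])     # shape (2, H, W)
    clean = np.tensordot(inv, stacked, axes=1)  # unmix every pixel at once
    return clean[0], clean[1]

# Synthetic check: mix two known "views", then unmix them.
rng = np.random.default_rng(1)
view_a, view_b = rng.random((2, 16, 16))
red_rec, blue_rec = np.tensordot(M, np.stack([view_a, view_b]), axes=1)
ra, rb = separate_channels(red_rec, blue_rec)
print(np.allclose(ra, view_a), np.allclose(rb, view_b))  # True True
```

Once unmixed, each channel image plays the role of one camera view in an ordinary stereo-DIC pipeline.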

  1. Structure Sensor for mobile markerless augmented reality

    NASA Astrophysics Data System (ADS)

    Kilgus, T.; Bux, R.; Franz, A. M.; Johnen, W.; Heim, E.; Fangerau, M.; Müller, M.; Yen, K.; Maier-Hein, L.

    2016-03-01

3D visualization of anatomical data is an integral part of diagnostics and treatment in many medical disciplines, such as radiology, surgery, and forensic medicine. To enable intuitive interaction with the data, we recently proposed a new concept for on-patient visualization of medical data which involves rendering subsurface structures on a mobile display that can be moved along the human body. The data fusion is achieved with a range imaging device attached to the display. The range data are used to register static 3D medical imaging data with the patient's body based on a surface matching algorithm. However, our previous prototype was based on the Microsoft Kinect camera and thus required a cable connection to acquire color and depth data. The contribution of this paper is two-fold. Firstly, we replace the Kinect with the Structure Sensor, a novel cable-free range imaging device, to improve handling and user experience, and show that the resulting accuracy (target registration error: 4.8 ± 1.5 mm) is comparable to that achieved with the Kinect. Secondly, a new approach to visualizing complex 3D anatomy based on this device, as well as 3D printed models of anatomical surfaces, is presented. We demonstrate that our concept can be applied to in vivo data and to a 3D printed skull from a forensic case. Our new device is the next step towards clinical integration and shows that the concept can be applied not only during autopsy but also for presentation of forensic data to laypeople in court or in medical education.

  2. 3D Modeling of CMEs observed with STEREO

    NASA Astrophysics Data System (ADS)

    Bosman, E.; Bothmer, V.

    2012-04-01

    From January 2007 until the end of 2010, 565 typical large-scale coronal mass ejections (CMEs) were identified in the SECCHI/COR2 synoptic movies of the STEREO mission. A subset of 114 CME events, selected based on the CMEs' brightness appearance in the SECCHI/COR2 images, has been modeled with the Graduated Cylindrical Shell (GCS) model developed by Thernisien et al. (2006). This study presents an overview of the GCS forward-modeling results and an interpretation of the CME characteristics in relation to their solar source region properties and their appearance over the solar cycle.

  3. Stereo depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Vonsydow, Marika

    1988-01-01

    In teleoperation, a typical application of stereo vision is to view a work space located a short distance (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors on the order of 2 cm were measured. A geometric analysis was made of the distortion of the fronto-parallel plane for stereo TV viewing, and the results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration giving high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolution but cause greater depth distortion. Thus, with larger intercamera distances, operators will make greater depth errors (because of the greater distortion), but will be more certain that they are not errors (because of the higher resolution).
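
The resolution/baseline trade-off described above can be illustrated with the ideal parallel-camera stereo model (a simplification; the paper analyzes converged cameras): depth is Z = f·b/d, so a one-pixel disparity error maps to a depth error of roughly Z²/(f·b).

```python
import numpy as np

def depth_and_error(f_px, baseline_m, disparity_px, disp_err_px=1.0):
    """Ideal parallel-camera stereo model: Z = f*b/d.
    First-order depth uncertainty: dZ ~= Z**2 * dd / (f*b)."""
    Z = f_px * baseline_m / disparity_px
    dZ = Z ** 2 * disp_err_px / (f_px * baseline_m)
    return Z, dZ

# A 1.4 m working distance, as in the experiments above (f is assumed).
f, Z_work = 1000.0, 1.4
errors = []
for b in (0.1, 0.2):  # doubling the intercamera distance...
    d = f * b / Z_work
    _, dZ = depth_and_error(f, b, d)
    errors.append(dZ)
# ...halves the depth error at the same working distance.
```

This matches the qualitative finding above: a larger baseline gives finer depth resolution, even though the converged-camera geometry additionally introduces distortion that this simple model does not capture.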

  4. Digital stereo photogrammetry for grain-scale monitoring of fluvial surfaces: Error evaluation and workflow optimisation

    NASA Astrophysics Data System (ADS)

    Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy

    2015-03-01

    Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition; however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This holds for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open-access multi-scale Retinex algorithm to facilitate the stereo matching and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy. We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.

  5. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key to visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes unusable. A technology providing both real-time checking and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimate the fundamental matrix from the feature point correspondences; (ii) compute the essential matrix from the fundamental matrix; (iii) obtain the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in actual scene images and introduce a regionally weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data show that the method improves the robustness and accuracy of fundamental matrix estimation. Finally, an experiment computing the relationship of a pair of stereo cameras demonstrates the accurate performance of the algorithm.
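
Step (iii) above can be sketched with the standard SVD-based factorization of the essential matrix (following Hartley and Zisserman); the paper's regional weighted normalization for step (i) is not reproduced here:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Factor E into the four candidate (R, t) pairs via SVD.
    The correct pair is normally selected afterwards with a
    cheirality (points-in-front-of-both-cameras) test."""
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (det = +1).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]  # defined only up to sign and scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Build a synthetic essential matrix E = [t]_x R and decompose it.
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle), np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.6, 0.0, 0.8])  # unit length
E = skew(t_true) @ R_true
candidates = decompose_essential(E)
```

Each candidate reproduces the essential matrix up to sign, which is all the epipolar constraint determines; the physically valid pose is the one that triangulates the matched points in front of both cameras.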

  6. KSC-06pd2272

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers check the clearance of the STEREO spacecraft as it is moved away from the opening. In the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  7. KSC-06pd2266

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - On Launch Pad 17-B at Cape Canaveral Air Force Station, the STEREO spacecraft is lifted off its transporter alongside the mobile service tower. In the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  8. KSC-06pd2268

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Against a pre-dawn sky on Launch Pad 17-B at Cape Canaveral Air Force Station, the STEREO spacecraft is lifted up toward the platform on the mobile service tower. In the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  9. KSC-06pd2269

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Viewed from inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers watch the progress of the STEREO spacecraft being lifted. Once in the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  10. KSC-06pd2270

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - On Launch Pad 17-B at Cape Canaveral Air Force Station, workers begin maneuvering the STEREO spacecraft into the mobile service tower. Once in the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  11. KSC-06pd2271

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - On Launch Pad 17-B at Cape Canaveral Air Force Station, workers observe the progress of the STEREO spacecraft as it glides inside the mobile service tower. After it is in the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  12. KSC-06pd2267

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Against a pre-dawn sky on Launch Pad 17-B at Cape Canaveral Air Force Station, the STEREO spacecraft is lifted alongside the mobile service tower. In the tower, STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  13. KSC-06pd2264

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - After arriving at Launch Pad 17-B on Cape Canaveral Air Force Station, the STEREO spacecraft waits for a crane to be fitted over it so it can be lifted into the mobile service tower. STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  14. KSC-06pd2265

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - After arriving at Launch Pad 17-B on Cape Canaveral Air Force Station, the STEREO spacecraft is fitted with a crane to lift it into the mobile service tower. STEREO will be mated with its launch vehicle, a Boeing Delta II rocket. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  15. Orbit Determination and Navigation of the Solar Terrestrial Relations Observatory (STEREO)

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael A.; Robertson, Mika; Ottenstein, Neil; Nicholson, Ann; Nicholson, Mark; Ward, Douglas T.; Cosgrove, Jennifer; German, Darla; Hendry, Stephen; Shaw, James

    2007-01-01

    This paper provides an overview of the required upgrades necessary for navigation of NASA's twin heliocentric science missions, Solar TErrestrial RElations Observatory (STEREO) Ahead and Behind. The orbit determination of the STEREO spacecraft was provided by the NASA Goddard Space Flight Center's (GSFC) Flight Dynamics Facility (FDF) in support of the mission operations activities performed by the Johns Hopkins University Applied Physics Laboratory (APL). The changes to FDF's orbit determination software included modeling upgrades as well as modifications required to process the Deep Space Network X-band tracking data used for STEREO. Orbit results as well as comparisons to independently computed solutions are also included. The successful orbit determination support aided in maneuvering the STEREO spacecraft, launched on October 26, 2006 (00:52 Z), to target the lunar gravity assists required to place the spacecraft into their final heliocentric drift-away orbits where they are providing stereo imaging of the Sun.

  16. Orbit Determination and Navigation of the Solar Terrestrial Relations Observatory (STEREO)

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael; Robertson, Mika; Ottenstein, Neil; Nicholson, Ann; Nicholson, Mark; Ward, Douglas T.; Cosgrove, Jennifer; German, Darla; Hendry, Stephen; Shaw, James

    2007-01-01

    This paper provides an overview of the required upgrades necessary for navigation of NASA's twin heliocentric science missions, Solar TErrestrial RElations Observatory (STEREO) Ahead and Behind. The orbit determination of the STEREO spacecraft was provided by the NASA Goddard Space Flight Center's (GSFC) Flight Dynamics Facility (FDF) in support of the mission operations activities performed by the Johns Hopkins University Applied Physics Laboratory (APL). The changes to FDF's orbit determination software included modeling upgrades as well as modifications required to process the Deep Space Network X-band tracking data used for STEREO. Orbit results as well as comparisons to independently computed solutions are also included. The successful orbit determination support aided in maneuvering the STEREO spacecraft, launched on October 26, 2006 (00:52 Z), to target the lunar gravity assists required to place the spacecraft into their final heliocentric drift-away orbits where they are providing stereo imaging of the Sun.

  17. Sensor fusion of phase measuring profilometry and stereo vision for three-dimensional inspection of electronic components assembled on printed circuit boards.

    PubMed

    Hong, Deokhwa; Lee, Hyunki; Kim, Min Young; Cho, Hyungsuck; Moon, Jeon Il

    2009-07-20

    Automatic optical inspection (AOI) for printed circuit board (PCB) assembly plays a very important role in modern electronics manufacturing. Well-developed inspection machines in each assembly process are required to ensure the manufacturing quality of electronics products. However, almost all AOI machines are based on 2D image-analysis technology. In this paper, a 3D-measurement-based AOI system is proposed, consisting of a phase-shifting profilometer and a stereo vision system, for electronic components assembled on a PCB after component mounting and the reflow process. In this system, information from the two visual systems is fused to extend the shape measurement range limited by the 2π phase ambiguity of the phase-shifting profilometer, and thereby to maintain the fine measurement resolution and high accuracy of the phase-shifting profilometer over the measurement range extended by the stereo vision. The main purpose is to overcome the low inspection reliability of 2D-based inspection machines by using 3D information about the components. The 3D shape measurement results on PCB-mounted electronic components are shown and compared with results from contact and noncontact 3D measuring machines. Based on a series of experiments, the usefulness of the proposed sensor system and its fusion technique is discussed and analyzed in detail.
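
The 2π ambiguity and its resolution by a coarse absolute measurement can be sketched as follows; the four-step phase-shifting formula is standard, and the coarse phase standing in for the stereo estimate is synthetic:

```python
import numpy as np

# Four-step phase shifting: I_k = A + B*cos(phi + k*pi/2), k = 0..3.
def wrapped_phase(I0, I1, I2, I3):
    """Recover the wrapped phase in (-pi, pi] from four shifted images:
    I3 - I1 = 2B*sin(phi), I0 - I2 = 2B*cos(phi)."""
    return np.arctan2(I3 - I1, I0 - I2)

def unwrap_with_coarse(phase_wrapped, phase_coarse):
    """Resolve the 2*pi ambiguity using a coarse absolute phase estimate,
    here standing in for the absolute range given by stereo vision."""
    k = np.round((phase_coarse - phase_wrapped) / (2.0 * np.pi))
    return phase_wrapped + 2.0 * np.pi * k

# Synthetic pixel with true phase 7.0 rad (> 2*pi, so it wraps).
A, B, phi = 0.5, 0.4, 7.0
I = [A + B * np.cos(phi + k * np.pi / 2.0) for k in range(4)]
wrapped = wrapped_phase(*I)          # ~ 7.0 - 2*pi ~ 0.717 rad
absolute = unwrap_with_coarse(wrapped, phase_coarse=6.8)
```

The coarse stereo estimate only needs to be accurate to within π to select the correct fringe order; the final precision then comes entirely from the profilometer phase.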

  18. Dense real-time stereo matching using memory efficient semi-global-matching variant based on FPGAs

    NASA Astrophysics Data System (ADS)

    Buder, Maximilian

    2012-06-01

    This paper presents a stereo image matching system that takes advantage of a global image matching method. The system is designed to provide depth information for mobile robotic applications, with typical tasks being obstacle avoidance, SLAM and path planning. Mobile robots pose strong requirements on size, energy consumption, reliability and output quality of the image matching subsystem. Currently available systems rely either on active sensors or on local stereo image matching algorithms; the former are only suitable in controlled environments, while the latter suffer from low-quality depth maps. Top-ranking quality results are only achieved by iterative approaches using global image matching and color segmentation techniques, which are computationally demanding and therefore difficult to execute in real time. Attempts have been made to reach real-time performance with global methods by simplifying the routines, but the resulting depth maps are then merely comparable to those of local methods. The Semi-Global Matching algorithm proposed earlier shows both very good image matching results and relatively simple operations. A memory-efficient variant of the Semi-Global Matching algorithm is reviewed and adapted for an implementation based on reconfigurable hardware, suitable for real-time execution in the field of robotics. It is shown that the modified version of the efficient Semi-Global Matching method delivers results equivalent to the original algorithm on the Middlebury dataset. The system has proven capable of processing VGA-sized images with a disparity resolution of 64 pixels at 33 frames per second on low-cost to mid-range hardware. If the focus is shifted to a higher image resolution, 1024×1024 stereo frames can be processed with the same hardware at 10 fps, with the disparity resolution settings unchanged. A mobile system that covers preprocessing, matching and interfacing operations is also presented.
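
The core Semi-Global Matching recursion can be sketched for a single path direction (a simplified software model, not the FPGA implementation discussed above):

```python
import numpy as np

def aggregate_one_direction(cost, P1=10.0, P2=120.0):
    """SGM cost aggregation along a single (left-to-right) path direction.

    cost: (H, W, D) matching-cost volume. A full SGM sums 4-16 such
    directional aggregations before the winner-take-all step. P1 penalizes
    one-level disparity changes, P2 larger jumps.
    """
    H, W, D = cost.shape
    L = np.empty((H, W, D))
    L[:, 0] = cost[:, 0]
    for x in range(1, W):
        prev = L[:, x - 1]                          # (H, D)
        best_prev = prev.min(axis=1, keepdims=True)
        inf = np.full((H, 1), np.inf)
        up = np.hstack([inf, prev[:, :-1] + P1])    # from disparity d-1
        down = np.hstack([prev[:, 1:] + P1, inf])   # from disparity d+1
        step = np.minimum(np.minimum(prev, best_prev + P2),
                          np.minimum(up, down))
        L[:, x] = cost[:, x] + step - best_prev     # keep values bounded
    return L

# Toy cost volume: true disparity is 2 everywhere.
H, W, D = 2, 5, 4
cost = np.ones((H, W, D))
cost[:, :, 2] = 0.0
disparity = aggregate_one_direction(cost).argmin(axis=2)
```

Subtracting `best_prev` each step is the standard trick that bounds the aggregated costs, which is what makes fixed-width FPGA arithmetic feasible.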

  19. Forest aboveground biomass mapping using spaceborne stereo imagery acquired by Chinese ZY-3

    NASA Astrophysics Data System (ADS)

    Sun, G.; Ni, W.; Zhang, Z.; Xiong, C.

    2015-12-01

    Besides LiDAR data, another valuable type of data that is directly sensitive to forest vertical structure and more suitable for regional mapping of forest biomass is stereo imagery, i.e., photogrammetry. Photogrammetry is the traditional technique for deriving terrain elevation. The elevation of the top of a tree canopy can be measured directly from stereo imagery, but winter images are required to obtain the elevation of the ground surface, because stereo images are acquired by optical sensors that cannot penetrate dense forest canopies under leaf-on conditions. Several spaceborne stereoscopic systems with high spatial resolution have been launched in the past several years. For example, the Chinese satellite Zi Yuan 3 (ZY-3), specifically designed for the collection of stereo imagery with a resolution of 3.6 m for the forward and backward views and 2.1 m for the nadir view, was launched on January 9, 2012. Our previous studies have demonstrated that spaceborne stereo imagery acquired in summer performs well in describing forest structure, while the ground surface elevation can be extracted from spaceborne stereo imagery acquired in winter. This study focused on assessing the mapping of forest biomass through the combination of spaceborne stereo imagery acquired in summer and in winter. The test site of this study is located in the Daxing Anling Mountains, as shown in Fig. 1. The Daxing Anling site is on the southern border of the boreal forest, belonging to the frigid-temperate-zone coniferous forest vegetation. The dominant tree species is Dahurian larch (Larix gmelinii). Ten scenes of ZY-3 stereo images are used in this study: 5 scenes were acquired on March 14, 2012, and the other 5 on September 7, 2012. Their spatial coverage is shown in Fig. 2-a. Fig. 2-b is the mosaic of nadir images acquired on 09/07/2012, while Fig. 2-c is the corresponding digital surface model (DSM) derived from the stereo images acquired on 09/07/2012. Fig. 2-d is the difference between the DSM derived from the stereo imagery acquired on 09/07/2012 and the digital elevation model (DEM) from the stereo imagery acquired on 03/14/2012. The detailed analysis will be given in the final report.
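
The summer-DSM-minus-winter-DEM idea can be sketched as a canopy height model feeding a power-law allometric equation; the allometric coefficients below are hypothetical placeholders, not values from this study:

```python
import numpy as np

def canopy_height_model(dsm_summer, dem_winter):
    """Canopy height = leaf-on DSM minus leaf-off ground DEM.
    Negative differences are treated as matching noise and clipped."""
    return np.clip(dsm_summer - dem_winter, 0.0, None)

def biomass_from_height(chm, a=5.0, b=1.3):
    """Illustrative power-law allometry B = a * H**b. The coefficients
    here are hypothetical; in practice they are fitted to field plots."""
    return a * np.power(chm, b)

dsm = np.array([[310.0, 315.0, 299.5]])   # summer surface elevations (m)
dem = np.array([[300.0, 300.0, 300.0]])   # winter ground elevations (m)
chm = canopy_height_model(dsm, dem)       # canopy heights: 10, 15, 0 m
agb = biomass_from_height(chm)
```

Clipping the height difference at zero matters in practice, since photogrammetric matching noise routinely pushes the DSM below the DEM over bare ground.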

  20. Real-time handling of existing content sources on a multi-layer display

    NASA Astrophysics Data System (ADS)

    Singh, Darryl S. K.; Shin, Jung

    2013-03-01

    A Multi-Layer Display (MLD) consists of two or more imaging planes separated by physical depth, where the depth is a key component in creating a glasses-free 3D effect. Its core benefits include being viewable from multiple angles and having full panel resolution for 3D effects, with no side effects of nausea or eye strain. However, content must typically be designed for its optical configuration as foreground and background image pairs. A process was designed to give a consistent 3D effect on a 2-layer MLD from existing stereo video content in real time. Optimizations to stereo matching algorithms that generate depth maps in real time were specifically tailored to the optical characteristics and image processing algorithms of an MLD. The end-to-end process included improvements to the Hierarchical Belief Propagation (HBP) stereo matching algorithm, optical flow and temporal consistency. Imaging algorithms designed for the optical characteristics of an MLD provided some visual compensation for depth map inaccuracies. The result can be demonstrated in a PC environment, displayed on a 22" MLD used in the casino slot market with 8 mm of panel separation. Prior to this development, stereo content had not been used to achieve a depth-based 3D effect on an MLD in real time.
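
One simple way to derive foreground/background image pairs from a depth map is a depth-weighted split; this is an illustrative sketch, not the commercial MLD pipeline described above:

```python
import numpy as np

def split_layers(image, depth):
    """Depth-weighted split of one image across the two panels of a
    2-layer display: near pixels (depth -> 0) go to the front panel,
    far pixels (depth -> 1) to the rear panel.
    image: (H, W, 3) in [0, 1]; depth: (H, W) normalized to [0, 1]."""
    w = np.clip(depth, 0.0, 1.0)[..., None]
    return image * (1.0 - w), image * w   # (front, back)

img = np.ones((2, 2, 3))
depth = np.array([[0.0, 1.0],
                  [0.5, 0.5]])
front, back = split_layers(img, depth)
```

Because the two panels combine optically, intermediate depths can be suggested by distributing a pixel's intensity between the planes, which is why per-pixel depth-map quality directly drives the perceived effect.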

  1. A STEREO Survey of Magnetic Cloud Coronal Mass Ejections Observed at Earth in 2008–2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Brian E.; Wu, Chin-Chun; Howard, Russell A.

    We identify coronal mass ejections (CMEs) associated with magnetic clouds (MCs) observed near Earth by the Wind spacecraft from 2008 to mid-2012, a time period when the two STEREO spacecraft were well positioned to study Earth-directed CMEs. We find 31 out of 48 Wind MCs during this period can be clearly connected with a CME that is trackable in STEREO imagery all the way from the Sun to near 1 au. For these events, we perform full 3D reconstructions of the CME structure and kinematics, assuming a flux rope (FR) morphology for the CME shape, considering the full complement of STEREO and SOHO imaging constraints. We find that the FR orientations and sizes inferred from imaging are not well correlated with MC orientations and sizes inferred from the Wind data. However, velocities within the MC region are reproduced reasonably well by the image-based reconstruction. Our kinematic measurements are used to provide simple prescriptions for predicting CME arrival times at Earth, provided for a range of distances from the Sun where CME velocity measurements might be made. Finally, we discuss the differences in the morphology and kinematics of CME FRs associated with different surface phenomena (flares, filament eruptions, or no surface activity).

  2. An efficient photogrammetric stereo matching method for high-resolution images

    NASA Astrophysics Data System (ADS)

    Li, Yingsong; Zheng, Shunyi; Wang, Xiaonan; Ma, Hao

    2016-12-01

    Stereo matching of high-resolution images is a great challenge in photogrammetry. The main difficulty is the enormous processing workload, which involves substantial computing time and memory consumption. In recent years, the semi-global matching (SGM) method has been a promising approach for solving stereo problems on different data sets. However, the time complexity and memory demand of SGM are proportional to the scale of the images involved, which leads to very high consumption when dealing with large images. To solve this, this paper presents an efficient hierarchical matching strategy based on the SGM algorithm using single-instruction-multiple-data instructions and structured parallelism in the central processing unit. The proposed method can significantly reduce the computational time and memory required for large-scale stereo matching. The three-dimensional (3D) surface is reconstructed by triangulating and fusing redundant reconstruction information from multi-view matching results. Finally, three high-resolution aerial data sets are used to evaluate our improvement, and precise airborne laser scanner data for one of these data sets are used to measure the accuracy of our reconstruction. Experimental results demonstrate that our method achieves remarkable time and memory savings while maintaining the density and precision of the derived 3D point cloud.

  3. STEREO Mission Design

    NASA Technical Reports Server (NTRS)

    Dunham, David W.; Guzman, Jose J.; Sharer, Peter J.; Friessen, Henry D.

    2007-01-01

    STEREO (Solar-TErrestrial RElations Observatory) is the third mission in the Solar Terrestrial Probes (STP) program of the National Aeronautics and Space Administration (NASA). STEREO is the first mission to utilize phasing loops and multiple lunar flybys to alter the trajectories of more than one satellite. This paper describes the launch computation methodology, the launch constraints, and the resulting nine launch windows that were prepared for STEREO. More details are provided for the window in late October 2006 that was actually used.

  4. Stereo Science Update

    NASA Image and Video Library

    2009-04-13

    Michael Kaiser, project scientist, Solar Terrestrial Relations Observatory (STEREO) at Goddard Space Flight Center, left, makes a point during a Science Update on the STEREO mission at NASA Headquarters in Washington, Tuesday, April 14, 2009, as Angelo Vourlidas, project scientist, Sun Earth Connection Coronal and Heliospheric Investigation, at the Naval Research Laboratory, Toni Galvin, principal investigator, Plasma and Superthermal Ion Composition instrument at the University of New Hampshire and Madhulika Guhathkurta, STEREO program scientist, right, look on. Photo Credit: (NASA/Paul E. Alers)

  5. KSC-06pd1153

    NASA Image and Video Library

    2006-06-16

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., technicians check that the STEREO spacecraft "B" is secure on the stand. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket on July 22. Photo credit: NASA/George Shelton

  6. Terminator Disparity Contributes to Stereo Matching for Eye Movements and Perception

    PubMed Central

    Optican, Lance M.; Cumming, Bruce G.

    2013-01-01

    In the context of motion detection, the endings (or terminators) of 1-D features can be detected as 2-D features, affecting the perceived direction of motion of the 1-D features (the barber-pole illusion) and the direction of tracking eye movements. In the realm of binocular disparity processing, an equivalent role for the disparity of terminators has not been established. Here we explore the stereo analogy of the barber-pole stimulus, applying disparity to a 1-D noise stimulus seen through an elongated, zero-disparity aperture. We found that, in human subjects, these stimuli induce robust short-latency reflexive vergence eye movements, initially in the direction orthogonal to the 1-D features, but shortly thereafter in the direction predicted by the disparity of the terminators. In addition, these same stimuli induce vivid depth percepts, which can only be attributed to the disparity of line terminators. When the 1-D noise patterns are given opposite contrast in the two eyes (anticorrelation), both components of the vergence response reverse sign. Finally, terminators drive vergence even when the aperture is defined by a texture (as opposed to a contrast) boundary. These findings prove that terminators contribute to stereo matching, and constrain the type of neuronal mechanisms that might be responsible for the detection of terminator disparity. PMID:24285893

  7. Terminator disparity contributes to stereo matching for eye movements and perception.

    PubMed

    Quaia, Christian; Optican, Lance M; Cumming, Bruce G

    2013-11-27

    In the context of motion detection, the endings (or terminators) of 1-D features can be detected as 2-D features, affecting the perceived direction of motion of the 1-D features (the barber-pole illusion) and the direction of tracking eye movements. In the realm of binocular disparity processing, an equivalent role for the disparity of terminators has not been established. Here we explore the stereo analogy of the barber-pole stimulus, applying disparity to a 1-D noise stimulus seen through an elongated, zero-disparity aperture. We found that, in human subjects, these stimuli induce robust short-latency reflexive vergence eye movements, initially in the direction orthogonal to the 1-D features, but shortly thereafter in the direction predicted by the disparity of the terminators. In addition, these same stimuli induce vivid depth percepts, which can only be attributed to the disparity of line terminators. When the 1-D noise patterns are given opposite contrast in the two eyes (anticorrelation), both components of the vergence response reverse sign. Finally, terminators drive vergence even when the aperture is defined by a texture (as opposed to a contrast) boundary. These findings prove that terminators contribute to stereo matching, and constrain the type of neuronal mechanisms that might be responsible for the detection of terminator disparity.

  8. Characterization of crosstalk in stereoscopic display devices.

    PubMed

    Zafar, Fahad; Badano, Aldo

    2014-12-01

    Many different types of stereoscopic display devices are used for commercial and research applications. Stereoscopic displays offer the potential to improve performance in detection tasks for medical imaging diagnostic systems. Because of the variety of stereoscopic display technologies, it remains unclear how they compare with each other for detection and estimation tasks, and different stereo devices have different performance trade-offs due to their display characteristics. Among these characteristics, crosstalk is known to affect observer perception of 3D content and might affect detection performance. We measured and report detailed luminance output and crosstalk characteristics for three different types of stereoscopic display devices, recording the effect on luminance profiles of factors such as viewing angle, use of different eyewear, and screen location. Our results show that the crosstalk signature for viewing 3D content can vary considerably when different types of 3D glasses are used with active stereo displays. We also show significant differences in crosstalk signatures when the viewing angle varies from 0 to 20 degrees for a stereo-mirror 3D display device. Our detailed characterization can help emulate the effect of crosstalk in computational observer image-quality assessment evaluations that minimize costly and time-consuming human reader studies.
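A common way to quantify the crosstalk these measurements describe is the leakage luminance above black level, normalized by the intended signal above black level. The sketch below is a generic formulation of that metric, not the paper's instrumentation code; the function name and the example luminance values are illustrative.

```python
def crosstalk_percent(l_leak, l_black, l_signal):
    """Percent crosstalk for one eye of a stereoscopic display:
    luminance leaking from the unintended channel (l_leak), minus the
    display's black level (l_black), relative to the intended full
    signal (l_signal) above black level."""
    return 100.0 * (l_leak - l_black) / (l_signal - l_black)

# Example: through the left-eye glass, measure 4.0 cd/m^2 while only
# the right eye is driven white (121.0 cd/m^2), with a 1.0 cd/m^2
# black level.
print(crosstalk_percent(l_leak=4.0, l_black=1.0, l_signal=121.0))  # 2.5
```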

  9. Image-size differences worsen stereopsis independent of eye position

    PubMed Central

    Vlaskamp, Björn N. S.; Filippini, Heather R.; Banks, Martin S.

    2010-01-01

    With the eyes in forward gaze, stereo performance worsens when one eye’s image is larger than the other’s. Near, eccentric objects naturally create retinal images of different sizes. Does this mean that stereopsis exhibits deficits for such stimuli? Or does the visual system compensate for the predictable image-size differences? To answer this, we measured discrimination of a disparity-defined shape for different relative image sizes. We did so for different gaze directions, some compatible with the image-size difference and some not. Magnifications of 10–15% caused a clear worsening of stereo performance. The worsening was determined only by relative image size and not by eye position. This shows that no neural compensation for image-size differences accompanies eye-position changes, at least prior to disparity estimation. We also found that a local cross-correlation model for disparity estimation performs like humans in the same task, suggesting that the decrease in stereo performance due to image-size differences is a byproduct of the disparity-estimation method. Finally, we looked for compensation in an observer who has constantly different image sizes due to differing eye lengths. She performed best when the presented images were roughly the same size, indicating that she has compensated for the persistent image-size difference. PMID:19271927
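The local cross-correlation model mentioned in this abstract can be illustrated with a minimal windowed matcher on a single scanline: for each pixel, candidate disparities are tried and the shift that maximizes normalized cross-correlation is kept. This is a generic sketch of that model class, not the authors' implementation; the window size and disparity range are arbitrary choices.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def disparity_scanline(left, right, window=5, max_disp=8):
    """For each pixel on one scanline, pick the disparity whose
    right-image window best matches the left-image window under NCC."""
    half = window // 2
    disp = np.zeros(len(left), dtype=int)
    for x in range(half, len(left) - half):
        patch = left[x - half:x + half + 1]
        scores = [ncc(patch, right[x - d - half:x - d + half + 1])
                  if x - d - half >= 0 else -np.inf
                  for d in range(max_disp + 1)]
        disp[x] = int(np.argmax(scores))
    return disp

# Synthetic scanline: `right` is `left` shifted by a true disparity of 3.
rng = np.random.default_rng(0)
left = rng.random(40)
right = np.empty_like(left)
right[:-3] = left[3:]
right[-3:] = 0.0
print(disparity_scanline(left, right)[10:30])  # interior estimates are all 3
```

Magnifying one "image" relative to the other, as in the experiments above, breaks the equal-size window assumption and degrades the correlation peak, which is the byproduct the authors point to.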

  10. Using digital photogrammetry to constrain the segmentation of Paleocene volcanic marker horizons within the Nuussuaq basin

    NASA Astrophysics Data System (ADS)

    Vest Sørensen, Erik; Pedersen, Asger Ken

    2017-04-01

    Digital photogrammetry is used to map important volcanic marker horizons within the Nuussuaq Basin, West Greenland. We use a combination of oblique stereo images acquired from a helicopter using handheld cameras and traditional aerial photographs. The oblique imagery consists of scanned stereo photographs acquired with analogue cameras in the 1990s and newer digital images acquired with high-resolution digital consumer cameras. The photogrammetric software packages SOCET SET and 3D Stereo Blend are used to control seamless movement between stereo models at different scales and viewing angles, and the mapping is done stereoscopically using 3D monitors and human stereopsis. The approach allows us to map in three dimensions three characteristic marker horizons (the Tunoqqu, Kûgánguaq and Qordlortorssuaq Members) within the picritic Vaigat Formation. These horizons formed toward the end of the same volcanic episode and are believed to be closely related in time. They formed an approximately coherent sub-horizontal surface, the Tunoqqu Surface, which at the time of formation covered more than 3100 km2 on Disko and Nuussuaq. Our mapping shows that the Tunoqqu Surface is now segmented into areas of different elevation and structural trend as a result of later tectonic deformation. This is most notable on Nuussuaq, where the western part is elevated and in places highly faulted. In western Nuussuaq the surface has been uplifted and faulted so that it now forms an asymmetric anticline, whose flanks coincide with two N-S-oriented pre-Tunoqqu extensional faults. The deformation of the Tunoqqu Surface could be explained by inversion of older extensional faults under an overall E-W-directed compressive regime in the late Paleocene.

  11. Using the Auditory Hazard Assessment Algorithm for Humans (AHAAH) Software, Beta Release W93e

    DTIC Science & Technology

    2009-09-01

    Hazard Assessment Algorithm for Humans (AHAAH) ... The AHAAH is an electro-acoustic model of the ear used to evaluate the hazard of impulse sounds ... format is commonly used for recording music; thus, these are typically stereo files and contain a "right" and a "left" channel as well as a header ... acoustic data (sometimes deliberately induced in recording to maximize the digitizer's dynamic range), it must be removed. When Set Baseline is

  12. Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation

    NASA Astrophysics Data System (ADS)

    Zuo, C.; Xiao, X.; Hou, Q.; Li, B.

    2018-05-01

    WorldView-3, a high-resolution commercial Earth observation satellite launched by DigitalGlobe, provides panchromatic imagery at 0.31 m resolution. Its positioning accuracy without ground control is better than 3.5 m CE90, which is suitable for large-scale topographic mapping. This paper presents block adjustment for WorldView-3 based on the RPC model and achieves the accuracy required for 1:2000-scale topographic mapping with few control points. On the basis of the stereo orientation result, two image matching algorithms are applied for DSM extraction: LQM and SGM. Finally, the accuracy of the point clouds generated by the two image matching methods is compared with reference data acquired by an airborne laser scanner. The results show that the RPC adjustment model for WorldView-3 imagery with a small number of GCPs can satisfy the requirements of the Chinese surveying and mapping regulations for 1:2000-scale topographic maps, and that the point cloud obtained through WorldView-3 stereo image matching has high elevation accuracy: the RMS elevation error is 0.45 m for bare ground, while for buildings the accuracy approaches 1 m.

  13. Stereoscopic observations from meteorological satellites

    NASA Astrophysics Data System (ADS)

    Hasler, A. F.; Mack, R.; Negri, A.

    The capability of making stereoscopic observations of clouds from meteorological satellites is a new basic analysis tool with a broad spectrum of applications. Stereoscopic observations from satellites were first made using the early vidicon tube weather satellites (e.g., Ondrejka and Conover [1]). However, the only high quality meteorological stereoscopy from low orbit has been done from Apollo and Skylab (e.g., Shenk et al. [2] and Black [3], [4]). Stereoscopy from geosynchronous satellites was proposed by Shenk [5] and Bristor and Pichel [6] in 1974 which allowed Minzner et al. [7] to demonstrate the first quantitative cloud height analysis. In 1978 Bryson [8] and desJardins [9] independently developed digital processing techniques to remap stereo images which made possible precision height measurement and spectacular display of stereograms (Hasler et al. [10], and Hasler [11]). In 1980 the Japanese Geosynchronous Satellite (GMS) and the U.S. GOES-West satellite were synchronized to obtain stereo over the central Pacific as described by Fujita and Dodge [12] and in this paper. Recently the authors have remapped images from a Low Earth Orbiter (LEO) to the coordinate system of a Geosynchronous Earth Orbiter (GEO) and obtained stereoscopic cloud height measurements which promise to have quality comparable to previous all-GEO stereo. It has also been determined that the north-south imaging scan rate of some GEOs can be slowed or reversed. Therefore the feasibility of obtaining stereoscopic observations worldwide from combinations of operational GEO and LEO satellites has been demonstrated. Stereoscopy from satellites has many advantages over infrared techniques for the observation of cloud structure because it depends only on basic geometric relationships.
Digital remapping of GEO and LEO satellite images is imperative for precision stereo height measurement and high quality displays because of the curvature of the earth and the large angular separation of the two satellites. A general solution for accurate height computation depends on precise navigation of the two satellites. Validation of the geosynchronous satellite stereo using high altitude mountain lakes and vertically pointing aircraft lidar leads to a height accuracy estimate of ±500 m for typical clouds which have been studied. Applications of the satellite stereo include: 1) cloud top and base height measurements, 2) cloud-wind height assignment, 3) vertical motion estimates for convective clouds (Mack et al. [13], [14]), 4) temperature vs. height measurements when stereo is used together with infrared observations and 5) cloud emissivity measurements when stereo, infrared and temperature sounding are used together (see Szejwach et al. [15]). When true satellite stereo image pairs are not available, synthetic stereo may be generated. The combination of multispectral satellite data using computer produced stereo image pairs is a dramatic example of synthetic stereoscopic display. The classic case uses the combination of infrared and visible data as first demonstrated by Pichel et al. [16]. Hasler et al. [17], Mosher and Young [18] and Lorenz [19], have expanded this concept to display many channels of data from various radiometers as well as real and simulated data fields. A future system of stereoscopic satellites would be comprised of both low orbiters (as suggested by Lorenz and Schmidt [20], [19]) and a global system of geosynchronous satellites. The low earth orbiters would provide stereo coverage day and night and include the poles.
An optimum global system of stereoscopic geosynchronous satellites would require international standardization of scan rate, scan direction, and scan times (synchronization), and resolution of at least 1 km in all imaging channels. A stereoscopic satellite system as suggested here would make an extremely important contribution to the understanding and prediction of the atmosphere.
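The geometric principle behind these height measurements can be reduced, in an idealized flat-earth setting, to solving for the altitude that explains the apparent ground-position offset seen from two elevation angles. The sketch below uses that simplification only for intuition; as the text stresses, real processing requires digital remapping for earth curvature and precise navigation of both satellites, and the function name and angles here are illustrative.

```python
import math

def cloud_height(parallax_m, elev_a_deg, elev_b_deg):
    """Idealized flat-earth stereo height: two satellites on opposite
    sides of a cloud view it at elevation angles elev_a and elev_b.
    Each viewing ray displaces the cloud's apparent ground position by
    h / tan(elevation), and the two displacements sum to the measured
    parallax, so h = parallax / (cot(a) + cot(b))."""
    cot_a = 1.0 / math.tan(math.radians(elev_a_deg))
    cot_b = 1.0 / math.tan(math.radians(elev_b_deg))
    return parallax_m / (cot_a + cot_b)

# Symmetric 45-degree views with 20 km of measured parallax.
print(cloud_height(20000.0, 45.0, 45.0))  # ~10000 m
```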

  14. SU-G-BRA-05: Application of a Feature-Based Tracking Algorithm to KV X-Ray Fluoroscopic Images Toward Marker-Less Real-Time Tumor Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, M; Matsuo, Y; Mukumoto, N

    Purpose: To detect target position on kV X-ray fluoroscopic images using a feature-based tracking algorithm, Accelerated-KAZE (AKAZE), for markerless real-time tumor tracking (RTTT). Methods: Twelve lung cancer patients treated with RTTT on the Vero4DRT (Mitsubishi Heavy Industries, Japan, and Brainlab AG, Feldkirchen, Germany) were enrolled in this study. Respiratory tumor movement was greater than 10 mm. Three to five fiducial markers were implanted around the lung tumor transbronchially for each patient. Before beam delivery, external infrared (IR) markers and the fiducial markers were monitored for 20 to 40 s with the IR camera every 16.7 ms and with an orthogonal kV x-ray imaging subsystem every 80 or 160 ms, respectively. Target positions derived from the fiducial markers were determined on the orthogonal kV x-ray images and used as the ground truth in this study. Meanwhile, tracking positions were identified by AKAZE. Among many candidate feature points, AKAZE retained high-quality feature points through a sequential cross-check and distance check between two consecutive images. These 2D positional data were then converted to 3D positional data by a transformation matrix with a predefined calibration parameter. Root mean square error (RMSE) was calculated to evaluate the difference between 3D tracking and target positions. A total of 393 frames was analyzed. The experiment was conducted on a personal computer with 16 GB RAM and an Intel Core i7-2600 3.4 GHz processor. Results: Reproducibility of the target position during the same respiratory phase was 0.6 ± 0.6 mm (range, 0.1–3.3 mm). Mean ± SD of the RMSEs was 0.3 ± 0.2 mm (range, 0.0–1.0 mm). Median computation time per frame was 179 ms (range, 154–247 ms). Conclusion: AKAZE successfully and quickly detected the target position on kV X-ray fluoroscopic images. Initial results indicate that the differences between 3D tracking and target positions would be clinically acceptable.
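The cross-check step described above, keeping only feature matches that agree in both directions between two consecutive frames, can be sketched generically as a mutual nearest-neighbor test followed by a distance check. This is not the AKAZE implementation used in the study; the descriptor arrays and the threshold are illustrative.

```python
import numpy as np

def cross_check_matches(desc_a, desc_b, max_dist=0.5):
    """Keep only mutual nearest-neighbor matches between two descriptor
    sets (one row per feature), then apply a distance check."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)          # best match in b for each a
    b_to_a = d.argmin(axis=0)          # best match in a for each b
    matches = []
    for i, j in enumerate(a_to_b):
        # mutual agreement plus distance check
        if b_to_a[j] == i and d[i, j] <= max_dist:
            matches.append((i, int(j)))
    return matches

# Toy descriptors: features 0 and 1 of frame A survive the cross-check;
# feature 2 has no mutual partner in frame B.
a = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
b = np.array([[1.1, 0.0], [0.1, 0.0]])
print(cross_check_matches(a, b))  # [(0, 1), (1, 0)]
```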

  15. Stereo Science Update

    NASA Image and Video Library

    2009-04-13

    Michael Kaiser, project scientist, Solar Terrestrial Relations Observatory (STEREO) at Goddard Space Flight Center, left, makes a comment during a Science Update on the STEREO mission at NASA Headquarters in Washington, Tuesday, April 14, 2009, as Angelo Vourlidas, project scientist, Sun Earth Connection Coronal and Heliospheric Investigation, at the Naval Research Laboratory, second from left, Toni Galvin, principal investigator, Plasma and Superthermal Ion Composition instrument at the University of New Hampshire and Madhulika Guhathakurta, STEREO program scientist, right, look on. Photo Credit: (NASA/Paul E. Alers)

  16. Stereo Science Update

    NASA Image and Video Library

    2009-04-13

    Angelo Vourlidas, project scientist, Sun Earth Connection Coronal and Heliospheric Investigation, at the Naval Research Laboratory, second from left, makes a comment during a Science Update on the STEREO mission at NASA Headquarters in Washington, Tuesday, April 14, 2009, as Michael Kaiser, project scientist, Solar Terrestrial Relations Observatory (STEREO) at Goddard Space Flight Center, left, Toni Galvin, principal investigator, Plasma and Superthermal Ion Composition instrument at the University of New Hampshire and Madhulika Guhathakurta, STEREO program scientist, right, look on. Photo Credit: (NASA/Paul E. Alers)

  17. KSC-06pd1150

    NASA Image and Video Library

    2006-06-16

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., technicians check the STEREO spacecraft "B" as it is lifted off a tilt table. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket on July 22. Photo credit: NASA/George Shelton

  18. KSC-06pd1148

    NASA Image and Video Library

    2006-06-16

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., the STEREO spacecraft "B" is being moved to another stand nearby for testing. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket on July 22. Photo credit: NASA/George Shelton

  19. KSC-06pd1152

    NASA Image and Video Library

    2006-06-16

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., technicians check the STEREO spacecraft "B" as it is lowered toward a stand on the floor. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket on July 22. Photo credit: NASA/George Shelton

  20. On-screen-display (OSD) menu detection for proper stereo content reproduction for 3D TV

    NASA Astrophysics Data System (ADS)

    Tolstaya, Ekaterina V.; Bucha, Victor V.; Rychagov, Michael N.

    2011-03-01

    Modern consumer 3D TV sets can show video content in two modes: 2D and 3D. In 3D mode, the stereo pair comes from an external device such as a Blu-ray player or satellite receiver. The stereo pair is split into left and right images that are shown one after another; the viewer sees a different image with each eye through shutter glasses synchronized with the 3D TV. In addition, some devices that supply the TV with stereo content can display additional information by imposing an overlay picture on the video content, an On-Screen-Display (OSD) menu. Some OSDs are not 3D compatible and lead to incorrect 3D reproduction. In this case, the TV set must recognize whether the OSD is 3D compatible and visualize it correctly, either by switching off stereo mode or by continuing to display stereo content. We propose a new, stable method for detecting 3D-incompatible OSD menus on stereo content. A conventional OSD is a rectangular area with letters and pictograms, and OSD menus can have different transparency levels and colors. To be 3D compatible, an OSD must be overlaid separately on both images of a stereo pair. The main problem in detecting an OSD is distinguishing whether a color difference is due to the presence of the OSD or to stereo parallax. We apply special techniques to find a reliable image difference and additionally use the cue that an OSD usually has distinctive geometric features: straight parallel lines. The developed algorithm was tested on our video sequence database, with several types of OSD of different colors and transparency levels overlaid on video content. Detection quality exceeded 99% correct answers.
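The core cue here, that a 3D-incompatible OSD is rendered identically into both views while genuine scene content differs between views because of parallax, can be illustrated with a toy per-pixel test. The paper's actual method combines a more robust image difference with the geometric cue of straight parallel lines; the function below is only the naive first step, and its name and tolerance are illustrative.

```python
import numpy as np

def osd_candidate_mask(left, right, tol=2):
    """Flag pixels that are (near-)identical in the left and right views.
    Scene content generally differs between views due to stereo
    parallax, so large connected identical regions are OSD candidates."""
    return np.abs(left.astype(int) - right.astype(int)) <= tol

# Toy stereo pair: views differ everywhere except a 3x3 overlay region.
left = np.zeros((10, 10), dtype=int)
right = left + 10
right[2:5, 2:5] = left[2:5, 2:5]
mask = osd_candidate_mask(left, right)
print(mask[3, 3], mask[0, 0])  # True False
```

A real detector would additionally reject low-texture regions (where parallax produces no difference either) and keep only rectangular, line-bounded candidate regions, as the abstract describes.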

  1. Preparing WIND for the STEREO Mission

    NASA Astrophysics Data System (ADS)

    Schroeder, P.; Ogilvie, K.; Szabo, A.; Lin, R.; Luhmann, J.

    2006-05-01

    The upcoming STEREO mission's IMPACT and PLASTIC investigations will provide the first opportunity for long duration, detailed observations of 1 AU magnetic field structures, plasma ions and electrons, suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment will make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICME) and solar wind structures to CMEs and coronal holes observed at the Sun. To fully exploit these unique data sets, tight integration with similarly equipped missions at L1 will be essential, particularly WIND and ACE. The STEREO mission is building novel data analysis tools to take advantage of the mission's scientific potential. These tools will require reliable access and a well-documented interface to the L1 data sets. Such an interface already exists for ACE through the ACE Science Center. We plan to provide a similar service for the WIND mission that will supplement existing CDAWeb services. Building on tools also being developed for STEREO, we will create a SOAP application program interface (API) which will allow both our STEREO/WIND/ACE interactive browser and third-party software to access WIND data as a seamless and integral part of the STEREO mission. The API will also allow for more advanced forms of data mining than currently available through other data web services. Access will be provided to WIND-specific data analysis software as well. The development of cross-spacecraft data analysis tools will allow a larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.

  2. Terrain Perception for DEMO III

    NASA Technical Reports Server (NTRS)

    Manduchi, R.; Bellutta, P.; Matthies, L.; Owens, K.; Rankin, A.

    2000-01-01

    The Demo III program has as its primary focus the development of autonomous mobility for a small, rugged cross-country vehicle. In this paper we report recent progress on both stereo-based obstacle detection and color-based terrain cover classification.

  3. An Optimised System for Generating Multi-Resolution Dtms Using NASA Mro Datasets

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.

    2016-06-01

    Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.

  4. HOPIS: hybrid omnidirectional and perspective imaging system for mobile robots.

    PubMed

    Lin, Huei-Yung; Wang, Min-Liang

    2014-09-04

    In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach.

  5. HOPIS: Hybrid Omnidirectional and Perspective Imaging System for Mobile Robots

    PubMed Central

    Lin, Huei-Yung; Wang, Min-Liang

    2014-01-01

    In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach. PMID:25192317

  6. Venus surface roughness and Magellan stereo data

    NASA Technical Reports Server (NTRS)

    Maurice, Kelly E.; Leberl, Franz W.; Norikane, L.; Hensley, Scott

    1994-01-01

    Presented are results of studies to develop tools for analyzing the shape and roughness of the Venus surface; the actual work focused on Maxwell Montes. The analyses employ data acquired by NASA's Magellan satellite and are primarily concerned with deriving measurements of the Venusian surface from Magellan stereo SAR. Roughness was considered by means of a theoretical analysis based on digital elevation models (DEMs), on single Magellan radar images combined with radiometer data, and on the use of multiple overlapping Magellan radar images from cycles 1, 2 and 3, again combined with collateral radiometer data.

  7. Predicting Long-Range Traversability from Short-Range Stereo-Derived Geometry

    NASA Technical Reports Server (NTRS)

    Turmon, Michael; Tang, Benyang; Howard, Andrew; Bajracharya, Max

    2010-01-01

    This software uses close-range 3D terrain analysis to produce training data sufficient to estimate, from appearance alone, the traversability of terrain beyond 3D sensing range. This approach is called learning from stereo (LFS). In effect, the software transfers knowledge from middle distances, where 3D geometry provides training cues, into the far field, where only appearance is available. This is a viable approach because the same obstacle classes, and sometimes the same obstacles, are typically present in the mid field and the far field. Learning thus extends the effective look-ahead distance of the sensors.
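The near-to-far transfer can be sketched as follows: patches within stereo range get traversability labels from 3D geometry, an appearance model is fit to those labeled patches, and the model then scores far-field patches from appearance alone. The nearest-centroid color model below is a deliberately simple stand-in for whatever appearance classifier the actual system uses; all names, labels and color values are illustrative.

```python
import numpy as np

class NearToFarClassifier:
    """Learning-from-stereo sketch: mid-field patches carry labels
    derived from 3D geometry; a nearest-centroid model in color space
    then predicts labels for far-field patches that have no 3D data."""

    def fit(self, colors, labels):
        labels = np.asarray(labels)
        colors = np.asarray(colors, dtype=float)
        self.classes_ = np.unique(labels)
        self.centroids_ = np.array(
            [colors[labels == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, colors):
        d = np.linalg.norm(
            np.asarray(colors, dtype=float)[:, None, :]
            - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Mid-field training: dark patches were geometrically flat (label 0),
# bright patches were obstacles (label 1). Far-field patches are then
# classified from color alone.
clf = NearToFarClassifier().fit(
    [[0, 0, 0], [10, 10, 10], [200, 200, 200], [210, 210, 210]],
    [0, 0, 1, 1])
print(clf.predict([[5, 5, 5], [205, 205, 205]]))  # [0 1]
```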

  8. STEREO in-situ data analysis

    NASA Astrophysics Data System (ADS)

    Schroeder, P.; Luhmann, J.; Davis, A.; Russell, C.

    STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long-duration, detailed observations of 1 AU magnetic field structures, plasma and suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The PLASTIC instrument will make plasma ion composition measurements, completing STEREO's comprehensive in-situ perspective. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment will make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICMEs) and solar wind structures to CMEs and coronal holes observed at the Sun. The uniqueness of the STEREO mission requires novel data analysis tools and techniques to take advantage of the mission's full scientific potential. An interactive browser with the ability to create publication-quality plots is being developed, which will integrate STEREO's in-situ data with data from a variety of other missions, including WIND and ACE. Also, an application program interface (API) will be provided, allowing users to create custom software that ties directly into STEREO's data set. The API will allow for more advanced forms of data mining than currently available through most data web services. A variety of data access techniques and the development of cross-spacecraft data analysis tools will allow the larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and therefore to maximize STEREO's scientific potential.

  9. Electron density measurements from the shot noise collected on the STEREO/WAVES antennas

    NASA Astrophysics Data System (ADS)

    Zouganelis, Ioannis; Bale, Stuart; Bougeret, J.-L.; Maksimovic, Milan

    One of the most reliable techniques for in situ measurement of electron density and temperature in space plasmas is quasi-thermal noise spectroscopy. When a passive electric antenna is immersed in a stable plasma, the thermal motion of the ambient particles produces electrostatic fluctuations, which can be measured with a sensitive wave receiver connected to a wire dipole antenna. Unfortunately, on STEREO the S/WAVES design does not permit this high-accuracy technique, because the antennas have a large surface area and the resulting shot noise spectrum in the solar wind dominates the power at lower frequencies. We can instead use the electron shot noise to infer the plasma density. For this, we use well-calibrated Wind particle data to deduce the base capacitance of the S/WAVES instrument in a special configuration when the STEREO-B spacecraft was just downstream of Wind. The deduced electron plasma density is then compared to the S/PLASTIC ion density, and its accuracy is estimated at up to 10

  10. Wide baseline stereo matching based on double topological relationship consistency

    NASA Astrophysics Data System (ADS)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

    Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo matching based on a novel scheme called double topological relationship consistency (DCTR), which combines the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only establishes a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches; its strong invariance to changes in scale, rotation and illumination across large view changes, and even occlusions, overcomes many problems of traditional methods. Experimental examples are shown in which the two cameras are located in very different orientations. Epipolar geometry can also be recovered using RANSAC, probably the most widely adopted method for this purpose. With this method, correspondences with high precision can be obtained on wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on the image pairs.

  11. A novel method of robot location using RFID and stereo vision

    NASA Astrophysics Data System (ADS)

    Chen, Diansheng; Zhang, Guanxin; Li, Zhen

    2012-04-01

    This paper proposes a new global localization method for mobile robots based on RFID (Radio Frequency Identification Devices) and stereo vision, which lets the robot obtain global coordinates with good accuracy while quickly adapting to an unfamiliar, new environment. The method uses RFID tags as artificial landmarks; the 3D coordinates of each tag in the global coordinate system are written in its IC memory, and the robot reads them with an RFID reader. Meanwhile, using stereo vision, the 3D coordinates of the tag in the robot coordinate system are measured. Combined with the robot-attitude coordinate transformation matrix from the pose measuring system, the translation from the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location in the global coordinate system. The average error of the method was 0.11 m in experiments conducted in a 7 m × 7 m lobby, a result much more accurate than other localization methods.
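The coordinate bookkeeping described above amounts to one vector equation: the tag's global position (read from its IC memory) equals the robot's global position plus the rotated robot-frame measurement of the tag, so the robot's position is the difference. A minimal sketch, with illustrative names and an identity attitude matrix in the example:

```python
import numpy as np

def robot_global_position(tag_global, tag_in_robot, R_robot_to_global):
    """The tag's global coordinates (from its IC memory) and its
    robot-frame coordinates (from stereo vision) describe the same
    point, so the robot's global position is the residual after
    rotating the robot-frame measurement into the global frame with
    the attitude matrix from the pose measuring system."""
    return np.asarray(tag_global) - R_robot_to_global @ np.asarray(tag_in_robot)

# Identity attitude: the robot sees the tag 2 m ahead (x) and 1 m up (z).
print(robot_global_position([5.0, 3.0, 1.0], [2.0, 0.0, 1.0], np.eye(3)))
# [3. 3. 0.]
```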

  12. Stereo camera based virtual cane system with identifiable distance tactile feedback for the blind.

    PubMed

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-06-13

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and the 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected, and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is perfectly identifiable to the blind.

  13. Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

    PubMed Central

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-01-01

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and the 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected, and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is perfectly identifiable to the blind. PMID:24932864

  14. Global Patch Matching

    NASA Astrophysics Data System (ADS)

    Huang, X.; Hu, K.; Ling, X.; Zhang, Y.; Lu, Z.; Zhou, G.

    2017-09-01

    This paper introduces a novel global patch matching method that focuses on removing fronto-parallel bias and obtaining continuous smooth surfaces, under the assumption that the scenes covered by the stereo pairs are piecewise continuous. First, the simple linear iterative clustering (SLIC) method is used to segment the base image into a series of patches. Then, a global energy function consisting of a data term and a smoothness term is built on the patches. The data term is the second-order Taylor expansion of the correlation coefficients, and the smoothness term is built by combining connectivity constraints and coplanarity constraints. We rewrite the global energy function as a quadratic matrix function and use least-squares methods to obtain the optimal solution. Experiments on the Adirondack and Motorcycle stereo pairs of the Middlebury benchmark show that the proposed method removes fronto-parallel bias effectively and produces continuous smooth surfaces.
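    An energy that is quadratic in the unknowns, E(x) = ½xᵀAx − bᵀx, is minimized by solving the linear system Ax = b. The sketch below assembles a toy 1-D version with a data term pulling each unknown toward an observation and a smoothness term penalizing neighbor differences; these are illustrative stand-ins for the paper's patch-wise correlation and connectivity/coplanarity terms, not its actual formulation:

```python
import numpy as np

# Toy quadratic energy: sum_i (x_i - obs_i)^2 + lam * sum_i (x_i - x_{i+1})^2.
obs = np.array([0.0, 1.0, 4.0, 9.0])      # per-patch observations
lam = 2.0                                 # smoothness weight
n = len(obs)
A = np.eye(n)                             # data term contributes the identity
b = obs.copy()
for i in range(n - 1):                    # smoothness term: a graph Laplacian
    A[i, i] += lam
    A[i + 1, i + 1] += lam
    A[i, i + 1] -= lam
    A[i + 1, i] -= lam
x = np.linalg.solve(A, b)                 # stationary point of the energy
print(np.round(x, 3))                     # smoothed version of obs
```

    Because the Laplacian rows sum to zero, the solution preserves the mean of the observations while smoothing them, which is the qualitative behavior the smoothness term is meant to enforce.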

  15. A hybrid multiview stereo algorithm for modeling urban scenes.

    PubMed

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep

    2013-01-01

    We present an original multiview stereo reconstruction algorithm which allows the 3D modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: Irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first in segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model and second in sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.

  16. Ubiquitous Stereo Vision for Controlling Safety on Platforms in Railroad Station

    NASA Astrophysics Data System (ADS)

    Yoda, Ikushi; Hosotani, Daisuke; Sakaue, Katushiko

    Dozens of people are killed every year when they fall off train platforms, making this an urgent issue for the railroads, especially in major cities. This concern prompted the present work, now in progress, to develop a Ubiquitous Stereo Vision based system for safety management at the edge of rail station platforms. In this approach, a row of stereo cameras is installed on the ceiling, pointed downward at the edge of the platform to monitor the disposition of people waiting for the train. The purpose of the system is to determine automatically and in real time whether anyone or anything is in the danger zone at the very edge of the platform, whether anyone has actually fallen off the platform, or whether there is any sign of these things happening. The system could be configured to automatically switch over to a surveillance monitor or connect to an emergency brake system in the event of trouble.
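    The danger-zone decision described above amounts to testing reconstructed 3-D points against a strip along the platform edge. A minimal sketch; the coordinate convention, zone width, and noise threshold are all assumptions for illustration, not the system's actual parameters:

```python
def in_danger(points, zone_width=0.5, min_height=0.05):
    """Flag whether any 3-D point (x, y, z) lies in the platform-edge danger zone.

    Hypothetical frame: y is the signed distance from the platform edge
    (negative = over the tracks), z is height above the platform floor;
    min_height suppresses floor noise in the stereo reconstruction.
    """
    over_edge = any(y < 0 and z > min_height for _, y, z in points)
    in_zone = any(0 <= y < zone_width and z > min_height for _, y, z in points)
    return over_edge or in_zone

safe_cloud = [(1.2, 2.0, 1.7), (0.8, 1.5, 0.9)]       # passengers well back
risky_cloud = safe_cloud + [(1.0, 0.2, 1.6)]          # someone at the edge
print(in_danger(safe_cloud), in_danger(risky_cloud))  # → False True
```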

  17. Stereomotion is processed by the third-order motion system: reply to comment on "Three-systems theory of human visual motion perception: review and update"

    NASA Astrophysics Data System (ADS)

    Lu, Zhong-Lin; Sperling, George

    2002-10-01

    Two theories are considered to account for the perception of motion of depth-defined objects in random-dot stereograms (stereomotion). In the Lu-Sperling three-motion-systems theory [J. Opt. Soc. Am. A 18, 2331 (2001)], stereomotion is perceived by the third-order motion system, which detects the motion of areas defined as figure (versus ground) in a salience map. Alternatively, in his comment [J. Opt. Soc. Am. A 19, 2142 (2002)], Patterson proposes a low-level motion-energy system dedicated to stereo depth. The critical difference between these theories is the preprocessing (figure-ground assignment based on depth and other cues versus simply stereo depth) rather than the motion-detection algorithm itself (because the motion-extraction algorithm for third-order motion is undetermined). Furthermore, the ability of observers to perceive motion in alternating feature displays, in which stereo depth alternates with other features such as texture orientation, indicates that the third-order motion system can perceive stereomotion. This reduces the stereomotion question to: Is it third-order alone, or third-order plus dedicated depth-motion processing? Two new experiments intended to support the dedicated depth-motion processing theory are shown here to be perfectly accounted for by third-order motion, as are many older experiments that have previously been shown to be consistent with third-order motion. Cyclopean and rivalry images are shown to be a likely confound in stereomotion studies, rivalry motion being as strong as stereomotion. The phase dependence of superimposed same-direction stereomotion stimuli, rivalry stimuli, and isoluminant color stimuli indicates that these stimuli are processed in the same (third-order) motion system. The phase-dependence paradigm [Lu and Sperling, Vision Res. 35, 2697 (1995)] ultimately can resolve the question of which types of signals share a single motion detector. All the evidence accumulated so far is consistent with the three-motion-systems theory. © 2002 Optical Society of America

  18. SU-D-207-01: Markerless Respiratory Motion Tracking with Contrast Enhanced Thoracic Cone Beam CT Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, M; Yuan, Y; Rosenzweig, K

    2015-06-15

    Purpose: To develop a novel technique to enhance the image contrast of clinical cone beam CT projections and to extract respiratory signals from anatomical motion using the modified Amsterdam Shroud (AS) method, to benefit image guided radiation therapy. Methods: Thoracic cone beam CT projections acquired prior to treatment were preprocessed to increase their contrast for better respiratory signal extraction. Air intensity on the raw images was first estimated and then applied to correct the projections, generating new attenuation images that were subsequently improved with deeper anatomy-feature enhancement by taking the logarithm and the derivative along the superior-inferior direction. All pixels on each post-processed two-dimensional image were summed horizontally into one column, and all projections were combined side by side to create an AS image from which the patient's respiratory signal was extracted. The impact of gantry rotation on the breathing signal rendering was also investigated. Ten projection image sets from five lung cancer patients acquired with the Varian On-Board Imager on a 21iX Clinac (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Results: Application of the air correction to the raw projections showed that more than an order of magnitude of contrast enhancement was achievable: the typical contrast on the raw projections is around 0.02, while that on the attenuation images could be greater than 0.5. A clear and stable breathing signal could be reliably extracted from the new images, whereas the uncorrected projection sets failed to yield clear signals most of the time. Conclusion: Anatomy features play a key role in yielding a breathing signal from the projection images using the AS technique. The air correction process facilitated the contrast enhancement significantly, and the attenuation images thus obtained provide a practical solution for markerless breathing motion tracking.
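    The Amsterdam Shroud construction in the Methods (derivative along the superior-inferior direction, horizontal summation, side-by-side stacking of columns) can be sketched on synthetic data. The band phantom and array sizes below are assumptions for illustration; only the processing order follows the abstract:

```python
import numpy as np

def amsterdam_shroud(projections):
    """Build an Amsterdam Shroud image from a stack of log-corrected projections.

    projections: array of shape (n_proj, rows, cols). Each projection is
    differentiated along the superior-inferior (row) axis to enhance edges,
    then summed across columns into a single column; columns are stacked
    side by side so anatomy moving up/down traces a breathing waveform.
    """
    deriv = np.diff(projections, axis=1)     # SI-direction derivative
    return deriv.sum(axis=2).T               # shape (rows-1, n_proj)

# Synthetic projections: a bright horizontal band (a stand-in for the
# diaphragm edge) oscillating vertically over 40 "gantry angles".
n, rows, cols = 40, 64, 48
projs = np.zeros((n, rows, cols))
centers = (32 + 8 * np.sin(np.linspace(0, 4 * np.pi, n))).astype(int)
for k, c in enumerate(centers):
    projs[k, c - 4:c + 4, :] = 1.0

shroud = amsterdam_shroud(projs)
signal = np.abs(shroud).argmax(axis=0)       # crude edge trace per angle
print(signal.min(), signal.max())            # trace oscillates with the band
```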

  19. WE-AB-303-08: Direct Lung Tumor Tracking Using Short Imaging Arcs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shieh, C; Huang, C; Keall, P

    2015-06-15

    Purpose: Most current tumor tracking technologies rely on implanted markers, which suffer from the potential toxicity of marker placement and mis-targeting due to marker migration. Several markerless tracking methods have been proposed; these are either indirect methods or have difficulty tracking lung tumors in most clinical cases due to overlapping anatomies in 2D projection images. We propose a direct lung tumor tracking algorithm robust to overlapping anatomies using short imaging arcs. Methods: The proposed algorithm tracks the tumor based on kV projections acquired within the latest six-degree imaging arc. To account for respiratory motion, an external motion surrogate is used to select projections of the same phase within the latest arc. For each arc, the pre-treatment 4D cone-beam CT (CBCT) with tumor contours is used to estimate and remove the contribution to the integral attenuation from surrounding anatomies. The position of the tumor model extracted from the 4D CBCT of the same phase is then optimized to match the processed projections using the conjugate gradient method. The algorithm was retrospectively validated on two kV scans of a lung cancer patient with implanted fiducial markers. This patient was selected because the tumor is attached to the mediastinum, representing a challenging case for markerless tracking methods. The tracking results were converted to expected marker positions and compared with marker trajectories obtained via direct marker segmentation (ground truth). Results: The root-mean-squared errors of tracking were 0.8 mm and 0.9 mm in the superior-inferior direction for the two scans. Tracking error was below 2 mm and 3 mm for 90% and 98% of the time, respectively. Conclusions: A direct lung tumor tracking algorithm robust to overlapping anatomies was proposed and validated on two scans of a lung cancer patient. Sub-millimeter tracking accuracy was observed, indicating the potential of this algorithm for real-time guidance applications.
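    The paper optimizes the tumor-model position against processed projections with conjugate gradients; as a generic illustration of matching a model to a 2-D projection, the sketch below uses exhaustive normalized cross-correlation instead. This is a deliberate simplification with synthetic data, not the authors' algorithm:

```python
import numpy as np

def ncc_track(frame, template):
    """Locate a template in a 2-D frame by normalized cross-correlation.

    Exhaustively scores every placement and returns the (row, col) of the
    best match; an illustrative stand-in for model-to-projection matching.
    """
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum()) + 1e-12
    best, best_pos = -2.0, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            wz = frame[r:r + th, c:c + tw] - frame[r:r + th, c:c + tw].mean()
            score = (wz * t).sum() / (np.sqrt((wz ** 2).sum()) * tn + 1e-12)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

rng = np.random.default_rng(0)
frame = rng.normal(0, 0.1, (40, 40))
frame[22:28, 10:16] += 1.0                   # synthetic "tumor" blob
template = frame[22:28, 10:16].copy()
print(ncc_track(frame, template))  # → (22, 10)
```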

  20. Carbon-Ion Pencil Beam Scanning Treatment With Gated Markerless Tumor Tracking: An Analysis of Positional Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mori, Shinichiro, E-mail: shinshin@nirs.go.jp; Karube, Masataka; Shirai, Toshiyuki

    Purpose: Having implemented amplitude-based respiratory gating for scanned carbon-ion beam therapy, we sought to evaluate its effect on positional accuracy and throughput. Methods and Materials: A total of 10 patients with tumors of the lung and liver participated in the first clinical trials at our center. Treatment planning was conducted with 4-dimensional computed tomography (4DCT) under free-breathing conditions. The planning target volume (PTV) was calculated by adding a 2- to 3-mm setup margin outside the clinical target volume (CTV) within the gating window. The treatment beam was on when the CTV was within the PTV. Tumor position was detected in real time with a markerless tumor tracking system using paired x-ray fluoroscopic imaging units. Results: The patient setup error (mean ± SD) was 1.1 ± 1.2 mm/0.6 ± 0.4°. The mean internal gating accuracy (95% confidence interval [CI]) was 0.5 mm. If external gating had been applied to this treatment, the mean gating accuracy (95% CI) would have been 4.1 mm. The fluoroscopic radiation doses (mean ± SD) were 23.7 ± 21.8 mGy per beam and less than 487.5 mGy in total throughout the treatment course. The setup, preparation, and irradiation times (mean ± SD) were 8.9 ± 8.2 min, 9.5 ± 4.6 min, and 4.0 ± 2.4 min, respectively. The treatment room occupation time was 36.7 ± 67.5 min. Conclusions: Internal gating had much higher accuracy than external gating. With the addition of a setup margin of 2 to 3 mm, the internal gating positional error was less than 2.2 mm at 95% CI.

  1. Color-encoded distance for interactive focus positioning in laser microsurgery

    NASA Astrophysics Data System (ADS)

    Schoob, Andreas; Kundrat, Dennis; Lekon, Stefan; Kahrs, Lüder A.; Ortmaier, Tobias

    2016-08-01

    This paper presents a real-time method for interactive focus positioning in laser microsurgery. Registration of stereo vision and a surgical laser is performed in order to combine surgical scene and laser workspace information. In particular, stereo image data is processed to three-dimensionally reconstruct observed tissue surface as well as to compute and to highlight its intersection with the laser focal range. Regarding the surgical live view, three augmented reality concepts are presented providing visual feedback during manual focus positioning. A user study is performed and results are discussed with respect to accuracy and task completion time. Especially when using color-encoded distance superimposed to the live view, target positioning with sub-millimeter accuracy can be achieved in a few seconds. Finally, transfer to an intraoperative scenario with endoscopic human in vivo and cadaver images is discussed demonstrating the applicability of the image overlay in laser microsurgery.

  2. Photometric stereo endoscopy.

    PubMed

    Parot, Vicente; Lim, Daryl; González, Germán; Traverso, Giovanni; Nishioka, Norman S; Vakoc, Benjamin J; Durr, Nicholas J

    2013-07-01

    While color video endoscopy has enabled wide-field examination of the gastrointestinal tract, it often misses or incorrectly classifies lesions. Many of these missed lesions exhibit characteristic three-dimensional surface topographies. An endoscopic system that adds topographical measurements to conventional color imagery could therefore increase lesion detection and improve classification accuracy. We introduce photometric stereo endoscopy (PSE), a technique which allows high spatial frequency components of surface topography to be acquired simultaneously with conventional two-dimensional color imagery. We implement this technique in an endoscopic form factor and demonstrate that it can acquire the topography of small features with complex geometries and heterogeneous optical properties. PSE imaging of ex vivo human gastrointestinal tissue shows that surface topography measurements enable differentiation of abnormal shapes from surrounding normal tissue. Together, these results confirm that the topographical measurements can be obtained with relatively simple hardware in an endoscopic form factor, and suggest the potential of PSE to improve lesion detection and classification in gastrointestinal imaging.
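    Photometric stereo in its classic Lambertian formulation (Woodham) recovers per-pixel surface normals from three or more images under known light directions by least squares. The sketch below shows that core computation on a synthetic flat patch; it is a textbook illustration, not the PSE hardware pipeline, and the light directions and scene are assumptions:

```python
import numpy as np

def photometric_stereo(images, lights):
    """Recover surface normals from images under known directional lights.

    images: (k, h, w) intensities under k lighting directions (Lambertian).
    lights: (k, 3) light-direction vectors. Solves I = L @ (albedo * n) per
    pixel in the least-squares sense (classic Woodham formulation).
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                        # (k, h*w)
    g, *_ = np.linalg.lstsq(lights, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    n = g / np.maximum(albedo, 1e-12)                # unit normals
    return n.reshape(3, h, w), albedo.reshape(h, w)

# Synthetic test: a flat patch tilted so its true normal is (0, 0.6, 0.8).
true_n = np.array([0.0, 0.6, 0.8])
L = np.array([[0.0, 0.0, 1.0], [0.8, 0.0, 0.6], [0.0, 0.8, 0.6]])
imgs = np.maximum(L @ true_n, 0).reshape(3, 1, 1) * np.ones((3, 4, 4))
normals, albedo = photometric_stereo(imgs, L)
print(np.round(normals[:, 0, 0], 3))  # ≈ [0, 0.6, 0.8]
```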

  3. Accuracy and reliability of coronal and sagittal spinal curvature data based on patient-specific three-dimensional models created by the EOS 2D/3D imaging system.

    PubMed

    Somoskeöy, Szabolcs; Tunyogi-Csapó, Miklós; Bogyó, Csaba; Illés, Tamás

    2012-11-01

    Three-dimensional (3D) deformations of the spine are predominantly characterized by two-dimensional (2D) angulation measurements in the coronal and sagittal planes, using anteroposterior and lateral X-ray images. For coronal curves, a method originally described by Cobb is most widely used in practice, and for sagittal curves a modified Cobb method; carried out either manually or with computer-based tools, these methods exhibit good-to-excellent reliability and reproducibility. Recently, an ultralow-radiation-dose integrated radioimaging solution was introduced with special software for realistic 3D visualization and parametric characterization of the spinal column. We compared the accuracy, correlation of measurement values, and intraobserver and interrater reliability of conventional manual 2D methods and sterEOS 3D measurements in a routine clinical setting. This was a retrospective nonrandomized study of diagnostic X-ray images created as part of a routine clinical protocol for eligible patients examined at our clinic during a 30-month period between July 2007 and December 2009. In total, 201 individuals (170 females, 31 males; mean age, 19.88 years) were included: 10 healthy athletes with normal spines and patients with adolescent idiopathic scoliosis (175 cases), adult degenerative scoliosis (11 cases), and Scheuermann hyperkyphosis (5 cases). The overall range of coronal curves was between 2.4° and 117.5°. Analysis of the accuracy and reliability of the measurements was carried out on the group of all patients and in subgroups based on coronal plane deviation: 0° to 10° (Group 1, n=36), 10° to 25° (Group 2, n=25), 25° to 50° (Group 3, n=69), 50° to 75° (Group 4, n=49), and more than 75° (Group 5, n=22). Coronal and sagittal curvature measurements were determined by three experienced examiners, using either traditional 2D methods or automatic measurements based on sterEOS 3D reconstructions.
Manual measurements were performed three times, and sterEOS 3D reconstructions and automatic measurements were performed two times by each examiner. Means comparison t test, Pearson bivariate correlation analysis, reliability analysis by intraclass correlation coefficients for intraobserver reproducibility and interrater reliability were performed using SPSS v16.0 software (IBM Corp., Armonk, NY, USA). No funds were received in support of this work. No benefits in any form have been or will be received from a commercial party related directly or indirectly to the subject of this article. In comparison with manual 2D methods, only small and nonsignificant differences were detectable in sterEOS 3D-based curvature data. Intraobserver reliability was excellent for both methods, and interrater reproducibility was consistently higher for sterEOS 3D methods that was found to be unaffected by the magnitude of coronal curves or sagittal plane deviations. This is the first clinical report on EOS 2D/3D system (EOS Imaging, Paris, France) and its sterEOS 3D software, documenting an excellent capability for accurate, reliable, and reproducible spinal curvature measurements. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Using terrestrial stereo photography to interpret changes in tree quality characteristics

    Treesearch

    David L. Sonderman

    1980-01-01

    A technique is described for using stereo photography to evaluate tree quality changes over time. Stereo pairs were taken four times over an 18-year period. All four faces of the selected trees were photographed. Individual defect changes are shown for young upland white oak trees.

  5. KSC-06pd1149

    NASA Image and Video Library

    2006-06-16

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., a technician works a guideline to the overhead crane as the STEREO spacecraft "B" is being moved to a stand nearby for testing. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket on July 22. Photo credit: NASA/George Shelton

  6. KSC-06pd1553

    NASA Image and Video Library

    2006-07-13

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers help guide the Boeing Delta II second stage for the STEREO launch onto the first stage for mating. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off in August 2006. Photo credit: NASA/George Shelton

  7. KSC-06pd1151

    NASA Image and Video Library

    2006-06-16

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., technicians check the STEREO spacecraft "B" as it moves away from a tilt table (at right). The spacecraft will be placed on another stand nearby. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket on July 22. Photo credit: NASA/George Shelton

  8. Type II Radio Bursts Observed by STEREO/Waves and Wind/Waves instruments

    NASA Astrophysics Data System (ADS)

    Krupar, V.; Magdalenic, J.; Zhukov, A.; Rodriguez, L.; Mierla, M.; Maksimovic, M.; Cecconi, B.; Santolik, O.

    2013-12-01

    Type II radio bursts are slow-drift emissions triggered by suprathermal electrons accelerated on shock fronts of propagating CMEs. We present several events at kilometric wavelengths observed by radio instruments onboard the STEREO and Wind spacecraft. STEREO/Waves and Wind/Waves have goniopolarimetric (GP, also referred to as direction finding) capabilities that allow us to triangulate radio sources when an emission is observed by two or more spacecraft. As the GP inversion places high requirements on the signal-to-noise ratio, only a few type II radio bursts have sufficient intensity for this analysis. We have compared the obtained radio sources with white-light observations of the STEREO/COR and STEREO/HI instruments. Our preliminary results indicate that radio sources are located at the flanks of propagating CMEs.

  9. Infrared stereo calibration for unmanned ground vehicle navigation

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as the Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new, and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically, with computed reprojection errors.
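    Reprojection error, the metric reported above, is the pixel distance between detected calibration-pattern corners and the corresponding model points projected through the calibrated camera. A minimal pinhole sketch; the intrinsics and corner coordinates below are hypothetical, not values from the paper:

```python
import math

def reproject(point3d, fx, fy, cx, cy):
    """Project a 3-D point in camera coordinates through a pinhole model."""
    X, Y, Z = point3d
    return (fx * X / Z + cx, fy * Y / Z + cy)

def rms_reprojection_error(points3d, observed, fx, fy, cx, cy):
    """RMS distance between observed corners and reprojected model points."""
    sq = 0.0
    for p, (u, v) in zip(points3d, observed):
        pu, pv = reproject(p, fx, fy, cx, cy)
        sq += (pu - u) ** 2 + (pv - v) ** 2
    return math.sqrt(sq / len(points3d))

# Hypothetical intrinsics and three checkerboard corners with sub-pixel noise.
fx = fy = 800.0
cx, cy = 320.0, 240.0
pts = [(0.1, 0.0, 2.0), (0.0, 0.1, 2.0), (-0.1, -0.1, 2.0)]
obs = [(360.5, 240.0), (320.0, 279.6), (279.9, 200.4)]
print(round(rms_reprojection_error(pts, obs, fx, fy, cx, cy), 2))  # → 0.44
```

    A sub-pixel RMS value like this is the usual sign of a usable calibration; in practice a full model would also include rotation, translation, and lens distortion terms.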

  10. GLD100 - Lunar topography from LROC WAC stereo

    NASA Astrophysics Data System (ADS)

    Scholten, F.; Oberst, J.; Robinson, M. S.

    2011-10-01

    The LROC WAC instrument of the LRO mission provides substantial stereo image data from adjacent orbits. Multiple coverage of the entire surface of the Moon at a mean ground scale of 75 m/pxl has already been achieved within the first two years of the mission. We applied photogrammetric stereo processing methods for the derivation of a 100 m raster DTM (digital terrain model), called GLD100, from several tens of thousands of stereo models. The GLD100 covers the lunar surface between 80° northern and southern latitude. Polar regions are excluded because of poor illumination and stereo conditions. Vertical differences of the GLD100 from altimetry data of the LRO LOLA instrument are small; the mean deviation is typically about 20 m, without systematic lateral or vertical offsets.

  11. Comparison of the depth of an optic nerve head obtained using stereo retinal images and HRT

    NASA Astrophysics Data System (ADS)

    Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Aoyama, Akira; Hara, Takeshi; Kakogawa, Masakatsu; Fujita, Hiroshi; Yamamoto, Tetsuya

    2007-03-01

    The analysis of the optic nerve head (ONH) in the retinal fundus is important for the early detection of glaucoma. In this study, we investigate an automatic reconstruction method for producing the 3-D structure of the ONH from a stereo retinal image pair; the depth value of the ONH measured with this method was compared with measurements from the Heidelberg Retina Tomograph (HRT). We propose a technique to obtain the depth value from the stereo image pair that consists of four main steps: (1) cutout of the ONH region from the retinal images, (2) registration of the stereo pair, (3) disparity detection, and (4) depth calculation. To evaluate the accuracy of this technique, the shape of the depression of an eyeball phantom with a circular dent, used to model the ONH, was reconstructed from a stereo image pair and compared with physical measurements; the results were approximately consistent. The depth of the ONH obtained from the stereo retinal images was in accordance with the results obtained from the HRT. These results indicate that stereo retinal images could be useful for assessing the depth of the ONH for the diagnosis of glaucoma.
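    Step (4), depth calculation, follows the standard rectified-stereo triangulation Z = f·B/d: depth is inversely proportional to disparity. The focal length and baseline below are illustrative round numbers, not the actual fundus-camera parameters:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Depth of a rectified stereo point: Z = f * B / d.

    disparity_px: horizontal pixel offset between left and right views.
    focal_px: focal length in pixels; baseline_mm: camera separation in mm.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# Hypothetical parameters: f = 1200 px, baseline = 20 mm.
near = depth_from_disparity(12.0, 1200.0, 20.0)   # larger disparity
far = depth_from_disparity(8.0, 1200.0, 20.0)     # smaller disparity
print(near, far, far - near)  # → 2000.0 3000.0 1000.0 (mm)
```

    The depth of the ONH cup is then the difference between such depths computed at the cup floor and at the surrounding rim.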

  12. Relation Between the 3D-Geometry of the Coronal Wave and Associated CME During the 26 April 2008 Event

    NASA Technical Reports Server (NTRS)

    Temmer, M.; Veronig, A. M.; Gopalswamy, N.; Yashiro, S.

    2011-01-01

    We study the kinematical characteristics and 3D geometry of a large-scale coronal wave that occurred in association with the 26 April 2008 flare-CME event. The wave was observed with the EUVI instruments aboard both STEREO spacecraft (STEREO-A and STEREO-B) with a mean speed of approx 240 km/s. The wave is more pronounced in the eastern propagation direction, and is thus, better observable in STEREO-B images. From STEREO-B observations we derive two separate initiation centers for the wave, and their locations fit with the coronal dimming regions. Assuming a simple geometry of the wave we reconstruct its 3D nature from combined STEREO-A and STEREO-B observations. We find that the wave structure is asymmetric with an inclination toward East. The associated CME has a deprojected speed of approx 750 +/- 50 km/s, and it shows a non-radial outward motion toward the East with respect to the underlying source region location. Applying the forward fitting model developed by Thernisien, Howard, and Vourlidas we derive the CME flux rope position on the solar surface to be close to the dimming regions. We conclude that the expanding flanks of the CME most likely drive and shape the coronal wave.

  13. WASS: an open-source stereo processing pipeline for sea waves 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Bergamasco, Filippo; Benetazzo, Alvise; Torsello, Andrea; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro

    2017-04-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community. In fact, recent advances in both computer vision algorithms and CPU processing power now allow the study of spatio-temporal wave fields with unprecedented accuracy, especially at small scales. Even if simple in theory, many details are difficult to master in practice, so that the implementation of a 3D reconstruction pipeline is generally considered a complex task. For instance, camera calibration, reliable stereo feature matching, and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the steps from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS, a completely open-source stereo processing pipeline for sea wave 3D reconstruction, available at http://www.dais.unive.it/wass/. Our tool completely automates the recovery of dense point clouds from stereo images by providing three main functionalities. First, WASS can automatically recover the extrinsic parameters of the stereo rig (up to scale), so that no delicate calibration has to be performed in the field. Second, WASS implements a fast 3D dense stereo reconstruction procedure so that an accurate 3D point cloud can be computed from each stereo pair; we rely on the well-consolidated OpenCV library both for image stereo rectification and disparity map recovery. Lastly, a set of 2D and 3D filtering techniques, applied both to the disparity map and the produced point cloud, removes the vast majority of erroneous points that naturally arise while analyzing the optically complex nature of the water surface (examples are sun glare, large white-capped areas, fog, and water aerosol). Developed to be as fast as possible, WASS can process roughly four 5-megapixel stereo frames per minute (on a consumer i7 CPU) to produce a sequence of outlier-free point clouds with more than 3 million points each. Finally, it comes with an easy-to-use interface and is designed to be scalable on multiple parallel CPUs.

  14. Fusion of Building Information and Range Imaging for Autonomous Location Estimation in Indoor Environments

    PubMed Central

    Kohoutek, Tobias K.; Mautz, Rainer; Wegner, Jan D.

    2013-01-01

    We present a novel approach for autonomous location estimation and navigation in indoor environments using range images and prior scene knowledge from a GIS database (CityGML). What makes this task challenging is the arbitrary relative spatial relation between GIS and Time-of-Flight (ToF) range camera further complicated by a markerless configuration. We propose to estimate the camera's pose solely based on matching of GIS objects and their detected location in image sequences. We develop a coarse-to-fine matching strategy that is able to match point clouds without any initial parameters. Experiments with a state-of-the-art ToF point cloud show that our proposed method delivers an absolute camera position with decimeter accuracy, which is sufficient for many real-world applications (e.g., collision avoidance). PMID:23435055
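The fine-registration stage described above is essentially iterative closest point (ICP) between the reconstructed head surface and the GIS/image-space model. As a hedged illustration only (the authors' method also estimates rotation and uses the drawn color mark for the coarse stage), here is a translation-only ICP loop in NumPy:

```python
import numpy as np

def icp_translation(src, dst, iters=20):
    """Minimal ICP restricted to translation: repeatedly match each source
    point to its nearest destination point (brute force), then shift the
    source by the mean residual. A stand-in for the full rigid
    registration used in practice."""
    t = np.zeros(src.shape[1])
    for _ in range(iters):
        moved = src + t
        # nearest-neighbour correspondences
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        t += (nn - moved).mean(axis=0)
    return t
```

A full rigid ICP would additionally solve for a rotation at each step, typically via the SVD of the cross-covariance of matched points.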

  15. Mapping snow depth from stereo satellite imagery

    NASA Astrophysics Data System (ADS)

    Gascoin, S.; Marti, R.; Berthier, E.; Houet, T.; de Pinel, M.; Laffly, D.

    2016-12-01

    To date, there is no definitive approach to map snow depth in mountainous areas from spaceborne sensors. Here, we examine the potential of very-high-resolution (VHR) optical stereo satellites for this purpose. Two triplets of 0.70 m resolution images were acquired by the Pléiades satellite over an open alpine catchment (14.5 km²) under snow-free and snow-covered conditions. The open-source software Ames Stereo Pipeline (ASP) was used to match the stereo pairs without ground control points to generate raw photogrammetric clouds and to convert them into high-resolution digital elevation models (DEMs) at 1, 2, and 4 m resolutions. The DEM differences (dDEMs) were computed after 3-D coregistration, including a correction of a -0.48 m vertical bias. The bias-corrected dDEM maps were compared to 451 snow-probe measurements. The results show a decimetric accuracy and precision in the Pléiades-derived snow depths. The median of the residuals is -0.16 m, with a standard deviation (SD) of 0.58 m at a pixel size of 2 m. We compared the 2 m Pléiades dDEM to a 2 m dDEM that was based on a winged unmanned aircraft vehicle (UAV) photogrammetric survey that was performed on the same winter date over a portion of the catchment (3.1 km²). The UAV-derived snow depth map exhibits the same patterns as the Pléiades-derived snow map, with a median of -0.11 m and a SD of 0.62 m when compared to the snow-probe measurements. The Pléiades images benefit from a very broad radiometric range (12 bits), allowing a high correlation success rate over the snow-covered areas. This study demonstrates the value of VHR stereo satellite imagery to map snow depth in remote mountainous areas even when no field data are available. Based on this method we have initiated a multi-year survey of the peak snow depth in the Bassiès catchment.
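The retrieval step reduces to differencing the co-registered DEMs and removing a constant vertical bias (the study reports correcting -0.48 m), then validating against probe measurements. A minimal sketch with hypothetical helper names:

```python
import numpy as np

def snow_depth(dem_snow, dem_free, vertical_bias=0.0):
    """dDEM snow depth: snow-on minus snow-free surface elevation, after
    removing a constant vertical co-registration bias. Negative values
    are treated as noise and clipped to zero."""
    return np.clip(dem_snow - dem_free - vertical_bias, 0.0, None)

def residual_stats(depth_map, rows, cols, probe_depths):
    """Median and standard deviation of map-minus-probe residuals at
    snow-probe pixel locations."""
    r = depth_map[rows, cols] - probe_depths
    return float(np.median(r)), float(r.std())
```

The same two functions apply unchanged to the UAV-derived dDEM comparison described in the abstract.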

  16. 3D interactive augmented reality-enhanced digital learning systems for mobile devices

    NASA Astrophysics Data System (ADS)

    Feng, Kai-Ten; Tseng, Po-Hsuan; Chiu, Pei-Shuan; Yang, Jia-Lin; Chiu, Chun-Jie

    2013-03-01

    With the enhanced processing capability of mobile platforms, augmented reality (AR) has been considered a promising technology for achieving enhanced user experiences (UX). Augmented reality imposes virtual information, e.g., videos and images, onto a live-view digital display. UX of the real-world environment via the display can be effectively enhanced with the adoption of interactive AR technology. Enhancement of UX can be beneficial for digital learning systems. There are existing research works based on AR targeting the design of e-learning systems. However, none of these works focuses on providing three-dimensional (3-D) object modeling for enhanced UX based on interactive AR techniques. In this paper, 3-D interactive augmented reality-enhanced learning (IARL) systems are proposed to provide enhanced UX for digital learning. The proposed IARL systems consist of two major components: markerless pattern recognition (MPR) for 3-D models and velocity-based object tracking (VOT) algorithms. A realistic implementation of the proposed IARL system is conducted on Android-based mobile platforms. UX of digital learning can be greatly improved with the adoption of the proposed IARL systems.

  17. Implementation of Markerless Augmented Reality Technology Based on Android to Introduction Lontara in Marine Society

    NASA Astrophysics Data System (ADS)

    Jumarlis, Mila; Mirfan, Mirfan

    2018-05-01

    Learning of local languages has been abandoned by many people, especially the young, as advances in technology have reduced interest in learning about culture and local languages in particular. An interactive and engaging learning medium is therefore required for introducing Lontara. This research aims to design and implement an augmented reality application for introducing Lontara on mobile devices, particularly Android. The Android-based application was built with Vuforia and Unity. Data were collected through observation, interviews, and literature review, and then analyzed. The system was designed with the Unified Modeling Language (UML). The method used is markerless. Testing found that the Android-based augmented reality application for introducing Lontara can improve public interest in the local language, particularly among young people, because it uses technology they enjoy. The application presents the sound and the way of writing Lontara characters with animation. It runs without an internet connection, which makes its use more efficient and convenient for the user.

  18. An Integrated Photogrammetric and Photoclinometric Approach for Pixel-Resolution 3d Modelling of Lunar Surface

    NASA Astrophysics Data System (ADS)

    Liu, W. C.; Wu, B.

    2018-04-01

    High-resolution 3D modelling of the lunar surface is important for lunar scientific research and exploration missions. Photogrammetry is known for 3D mapping and modelling from a pair of stereo images based on dense image matching. However, dense matching may fail in poorly textured areas and in situations when the image pair has large illumination differences. As a result, the actual achievable spatial resolution of the 3D model from photogrammetry is limited by the performance of dense image matching. On the other hand, photoclinometry (i.e., shape from shading) is characterised by its ability to recover pixel-wise surface shapes based on image intensity and imaging conditions such as illumination and viewing directions. More robust shape reconstruction through photoclinometry can be achieved by incorporating images acquired under different illumination conditions (i.e., photometric stereo). Introducing photoclinometry into photogrammetric processing can therefore effectively increase the achievable resolution of the mapping result while maintaining its overall accuracy. This research presents an integrated photogrammetric and photoclinometric approach for pixel-resolution 3D modelling of the lunar surface. First, photoclinometry interacts with stereo image matching to create robust and spatially well-distributed dense conjugate points. Then, based on the 3D point cloud derived from photogrammetric processing of the dense conjugate points, photoclinometry is further introduced to derive the 3D positions of the unmatched points and to refine the final point cloud. The approach is able to produce one 3D point for each image pixel within the overlapping area of the stereo pair, so as to obtain pixel-resolution 3D models. Experiments using Lunar Reconstruction Orbiter Camera - Narrow Angle Camera (LROC NAC) images show the superior performance of the approach compared with traditional photogrammetric techniques.
The results and findings from this research contribute to optimal exploitation of image information for high-resolution 3D modelling of the lunar surface, which is of significance for the advancement of lunar and planetary mapping.
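Photoclinometry inverts a shading model, and the forward direction is easy to write down. The following is a minimal Lambertian sketch, which is an assumption on our part: planetary photoclinometry typically uses more elaborate reflectance models (e.g. lunar-Lambert or Hapke), but the structure is the same.

```python
import numpy as np

def normals_from_slopes(p, q):
    """Unit surface normals from slopes p = dz/dx, q = dz/dy."""
    n = np.stack([-p, -q, np.ones_like(p)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def lambertian_intensity(normals, sun_dir, albedo=1.0):
    """Forward shading model: under a Lambertian assumption, image
    intensity is proportional to the cosine of the angle between the
    surface normal and the sun direction (clipped at the terminator)."""
    s = np.asarray(sun_dir, dtype=float)
    s = s / np.linalg.norm(s)
    return albedo * np.clip(normals @ s, 0.0, None)
```

Shape from shading then searches for the slope field whose predicted intensities best match the observed image, optionally constrained by several illumination conditions (photometric stereo).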

  19. Stereoscopic Height and Wind Retrievals for Aerosol Plumes with the MISR INteractive eXplorer (MINX)

    NASA Technical Reports Server (NTRS)

    Nelson, D.L.; Garay, M.J.; Kahn, Ralph A.; Dunst, Ben A.

    2013-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard the Terra satellite acquires imagery at 275-m resolution at nine angles ranging from 0deg (nadir) to 70deg off-nadir. This multi-angle capability facilitates the stereoscopic retrieval of heights and motion vectors for clouds and aerosol plumes. MISR's operational stereo product uses this capability to retrieve cloud heights and winds for every satellite orbit, yielding global coverage every nine days. The MISR INteractive eXplorer (MINX) visualization and analysis tool complements the operational stereo product by providing users the ability to retrieve heights and winds locally for detailed studies of smoke, dust and volcanic ash plumes, as well as clouds, at higher spatial resolution and with greater precision than is possible with the operational product or with other space-based, passive, remote sensing instruments. This ability to investigate plume geometry and dynamics is becoming increasingly important as climate and air quality studies require greater knowledge about the injection of aerosols and the location of clouds within the atmosphere. MINX incorporates features that allow users to customize their stereo retrievals for optimum results under varying aerosol and underlying surface conditions. This paper discusses the stereo retrieval algorithms and retrieval options in MINX, and provides appropriate examples to explain how the program can be used to achieve the best results.

  20. A novel craniotomy simulation system for evaluation of stereo-pair reconstruction fidelity and tracking

    NASA Astrophysics Data System (ADS)

    Yang, Xiaochen; Clements, Logan W.; Conley, Rebekah H.; Thompson, Reid C.; Dawant, Benoit M.; Miga, Michael I.

    2016-03-01

    Brain shift compensation using computer modeling strategies is an important research area in the field of image-guided neurosurgery (IGNS). One important source of available sparse data during surgery to drive these frameworks is deformation tracking of the visible cortical surface. Possible methods to measure intra-operative cortical displacement include laser range scanners (LRS), which typically complicate the clinical workflow, and reconstruction of cortical surfaces from stereo pairs acquired with the operating microscopes. In this work, we propose and demonstrate a craniotomy simulation device that permits simulating realistic cortical displacements designed to measure and validate the proposed intra-operative cortical shift measurement systems. The device permits 3D deformations of a mock cortical surface which consists of a membrane made of a Dragon Skin® high performance silicone rubber on which vascular patterns are drawn. We then use this device to validate our stereo pair-based surface reconstruction system by comparing landmark positions and displacements measured with our systems to those positions and displacements as measured by a stylus tracked by a commercial optical system. Our results show a 1 mm average difference in localization error and a 1.2 mm average difference in displacement measurement. These results suggest that our stereo-pair technique is accurate enough for estimating intra-operative displacements in near real-time without affecting the surgical workflow.

  1. An automated, open-source pipeline for mass production of digital elevation models (DEMs) from very-high-resolution commercial stereo satellite imagery

    NASA Astrophysics Data System (ADS)

    Shean, David E.; Alexandrov, Oleg; Moratto, Zachary M.; Smith, Benjamin E.; Joughin, Ian R.; Porter, Claire; Morin, Paul

    2016-06-01

    We adapted the automated, open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline a processing workflow for ˜0.5 m ground sample distance (GSD) DigitalGlobe WorldView-1 and WorldView-2 along-track stereo image data, with an overview of ASP capabilities, an evaluation of ASP correlator options, benchmark test results, and two case studies of DEM accuracy. Output DEM products are posted at ˜2 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of ˜0.1-0.5 m for overlapping, co-registered DEMs (n = 14, 17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We are leveraging these resources to produce dense time series and regional mosaics for the Earth's polar regions.
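The co-registration step can be sketched in miniature: a brute-force search over integer horizontal shifts that minimizes the elevation difference in the overlap, followed by removal of the median vertical offset. This is a deliberate simplification of the iterative closest-point (ICP) tool the paper describes, which aligns full point clouds with sub-pixel shifts:

```python
import numpy as np

def best_horizontal_shift(dem, ref, max_shift=3):
    """Brute-force search over integer pixel shifts (dy, dx) minimizing
    the RMS elevation difference between a DEM and a reference DEM."""
    best, best_rms = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(dem, dy, axis=0), dx, axis=1)
            diff = (ref - shifted)[max_shift:-max_shift, max_shift:-max_shift]
            rms = np.sqrt((diff ** 2).mean())
            if rms < best_rms:
                best_rms, best = rms, (dy, dx)
    return best

def coregister_vertical(dem, ref):
    """Remove the remaining median vertical offset after horizontal
    alignment; returns the corrected DEM and the applied offset."""
    dz = np.nanmedian(ref - dem)
    return dem + dz, dz
```

Real DEM co-registration (e.g. ASP's `pc_align`) operates on irregular point clouds and solves for a full rigid transform, but the residual-minimization idea is the same.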

  2. The reconstruction of atomic co-ordinates from a protein stereo ribbon diagram when additional information for sufficient sidechain positions is available

    NASA Astrophysics Data System (ADS)

    Lopes de Oliveira, Paulo Sérgio; Garratt, Richard Charles

    1998-11-01

    We describe the application of a method for the reconstruction of three-dimensional atomic co-ordinates from a stereo ribbon diagram of a protein when additional information for some of the sidechain positions is available. The method has applications in cases where the 3D co-ordinates have not been made available by any means other than the original publication and are of interest as models for molecular replacement, homology modelling etc. The approach is, on the one hand, more general than other methods which are based on stereo figures which present specific atomic positions, but on the other hand relies on input from a specialist. Its exact implementation will depend on the figure of interest. We have applied the method to the case of the α-d-galactose-binding lectin jacalin with a resultant RMS deviation, compared to the crystal structure, of 1.5 Å for the 133 Cα positions of the α-chain and 2.6 Å for the less regular β-chain. The success of the method depends on the secondary structure of the protein under consideration and the orientation of the stereo diagram itself but can be expected to reproduce the mainchain co-ordinates more accurately than the sidechains. Some ways in which the method may be generalised to other cases are discussed.

  3. The application of heliospheric imaging to space weather operations: Lessons learned from published studies

    NASA Astrophysics Data System (ADS)

    Harrison, Richard A.; Davies, Jackie A.; Biesecker, Doug; Gibbs, Mark

    2017-08-01

    The field of heliospheric imaging has matured significantly over the last 10 years, corresponding, in particular, to the launch of NASA's STEREO mission and the successful operation of the heliospheric imager (HI) instruments thereon. In parallel, this decade has borne witness to a marked increase in concern over the potentially damaging effects of space weather on space and ground-based technological assets, and the corresponding potential threat to human health, such that it is now under serious consideration at governmental level in many countries worldwide. Hence, in a political climate that recognizes the pressing need for enhanced operational space weather monitoring capabilities, stationed (it is widely accepted) most appropriately at the Lagrangian L1 and L5 points, it is timely to assess the value of heliospheric imaging observations in the context of space weather operations. To this end, we review a cross section of the scientific analyses that have exploited heliospheric imagery, particularly from STEREO/HI, and discuss their relevance to operational predictions of, in particular, coronal mass ejection (CME) arrival at Earth and elsewhere. We believe that the potential benefit of heliospheric images to the provision of accurate CME arrival predictions on an operational basis, although as yet not fully realized, is significant, and we assert that heliospheric imagery is central to any credible space weather mission, particularly one located at a vantage point off the Sun-Earth line.

  4. Robust feature estimation by non-rigid hierarchical image registration and its application in disparity measurement

    NASA Astrophysics Data System (ADS)

    Badshah, Amir; Choudhry, Aadil Jaleel; Ullah, Shan

    2017-03-01

    Industries are moving towards automation in order to increase productivity and ensure quality. A variety of electronic and electromagnetic systems is being employed to assist the human operator in fast and accurate quality inspection of products. The majority of these systems are equipped with cameras and rely on diverse image processing algorithms. Information is lost in a 2D image; therefore, acquiring accurate 3D data from 2D images is an open issue. FAST, SURF and SIFT are well-known spatial-domain techniques for feature extraction and subsequent image registration to find correspondence between images. The efficiency of these methods is measured in terms of the number of perfect matches found. A novel fast and robust technique for stereo-image processing is proposed. It is based on non-rigid registration using modified normalized phase correlation. The proposed method registers two images in hierarchical fashion using a quad-tree structure. The registration process works from global to local level, resulting in robust matches even in the presence of blur and noise. The computed matches can further be utilized to determine disparity and depth for industrial product inspection. The same can be used in driver assistance systems. Preliminary tests on the Middlebury dataset produced satisfactory results. The execution time for a 413 × 370 stereo pair is approximately 500 ms on a low-cost DSP.
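The modified normalized phase correlation at the heart of the method builds on the classic cross-power-spectrum estimator, which recovers an integer translation between two images regardless of their contrast. A plain (unmodified) reference version:

```python
import numpy as np

def phase_correlation(a, b):
    """Normalized (phase-only) correlation: estimate the integer circular
    translation (dy, dx) such that b ~= np.roll(a, (dy, dx), axis=(0, 1)).
    The peak of the inverse FFT of the whitened cross-power spectrum
    marks the shift."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.maximum(np.abs(R), 1e-12)  # keep phase, discard magnitude
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # fold peaks in the upper half of each axis back to negative shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```

Applied hierarchically on quad-tree patches, from the whole image down to small blocks, such an estimator yields the global-to-local matches the abstract describes.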

  5. Velocity Measurements in Nasal Cavities by Means of Stereoscopic Piv - Preliminary Tests

    NASA Astrophysics Data System (ADS)

    Cozzi, Fabio; Felisati, Giovanni; Quadrio, Maurizio

    2017-08-01

    The prediction of detailed flow patterns in human nasal cavities using computational fluid dynamics (CFD) can provide essential information on the potential relationship between patient-specific geometrical characteristics of the nasal anatomy and health problems, and ultimately lead to improved surgery. The complex flow structure and the intricate geometry of the nasal cavities make achieving such goals a challenge for CFD specialists. The need for experimental data to validate and improve the numerical simulations is particularly crucial. To this aim, an experimental set-up based on Stereo PIV and a silicone phantom of the nasal cavities has been designed and realized at Politecnico di Milano. This work describes the main features and challenges of the set-up along with some preliminary results.

  6. The planetary hydraulics analysis based on a multi-resolution stereo DTMs and LISFLOOD-FP model: Case study in Mars

    NASA Astrophysics Data System (ADS)

    Kim, J.; Schumann, G.; Neal, J. C.; Lin, S.

    2013-12-01

    Earth is the only planet possessing an active hydrological system based on H2O circulation. However, after Mariner 9 discovered fluvial channels on Mars with features similar to Earth's, it became clear that some solid planets and satellites once had water flows or pseudo-hydrological systems of other liquids. After liquid water was identified as the agent of ancient martian fluvial activities, the valleys and channels on the martian surface were investigated by a number of remote sensing and in situ measurements. Among all available data sets, the stereo DTMs and ortho-images from various successful orbital sensors, such as the High Resolution Stereo Camera (HRSC), Context Camera (CTX), and High Resolution Imaging Science Experiment (HiRISE), are the most widely used to trace the origin and consequences of martian hydrological channels. However, geomorphological analysis with stereo DTMs and ortho images over fluvial areas has some limitations, so a quantitative modeling method utilizing DTMs of various spatial resolutions is required. Thus, in this study we tested the application of hydraulics analysis with multi-resolution martian DTMs, constructed in line with Kim and Muller's (2009) approach. An advanced LISFLOOD-FP model (Bates et al., 2010), which simulates in-channel dynamic wave behavior by solving 2D shallow water equations without advection, was introduced to conduct a high accuracy simulation together with 150-1.2 m DTMs over test sites including Athabasca and Bahram Valles. For application to the martian surface, the acceleration of gravity in LISFLOOD-FP was reduced to the martian value of 3.71 m s-2, and Manning's n value (friction), the only free parameter in the model, was adjusted for martian gravity by scaling it. The approach employing multi-resolution stereo DTMs and LISFLOOD-FP proved superior to other studies that used a single DTM source for hydraulics analysis.
HRSC DTMs, covering 50-150 m resolutions, were used to trace the rough routes of water flows over extensive target areas. Then, refinements through hydraulics simulations with CTX DTMs (~12-18 m resolution) and HiRISE DTMs (~1-4 m resolution) were conducted, employing the output of the HRSC simulations as initial conditions. Thus, even limited coverage with high and very high resolution stereo DTMs enabled a high-precision hydraulics analysis reconstructing a whole fluvial event. In this manner, useful information identifying the characteristics of martian fluvial activities, such as water depth over time, flow direction, and travel time, was successfully retrieved for each target tributary. Beyond these outputs of the hydraulics analysis, the local roughness and photogrammetric control of the stereo DTMs appeared to be crucial elements for accurate fluvial simulation. The potential of this study should be further explored for application to other extraterrestrial bodies where fluvial activity once existed, as well as to the major martian channels and valleys.
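The inertial formulation of Bates et al. (2010) that LISFLOOD-FP uses updates unit-width discharge explicitly, with a semi-implicit Manning friction term; substituting g = 3.71 m s-2 is the only change the abstract notes for Mars (Manning's n is then rescaled as a calibration step). A one-step sketch of that update, with assumed argument names:

```python
import numpy as np

G_MARS = 3.71  # m s^-2, substituted for Earth's 9.81 as in the study

def inertial_flux(q, h_flow, surf_slope, n_manning, dt, g=G_MARS):
    """One explicit update of the inertial shallow-water formulation
    (momentum without advection). q is unit-width discharge (m^2/s),
    h_flow the flow depth, surf_slope the water-surface slope; Manning
    friction is treated semi-implicitly for stability."""
    num = q - g * h_flow * dt * surf_slope
    den = 1.0 + g * dt * n_manning ** 2 * np.abs(q) / h_flow ** (7.0 / 3.0)
    return num / den
```

Run over a DTM-derived grid with a CFL-limited time step, this update propagates a flood wave downslope; on Mars the lower gravity yields proportionally smaller accelerations.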

  7. STEREO-IMPACT E/PO at NASA's Sun-Earth Day Event: Participation in Total Eclipse 2006 Webcast

    NASA Astrophysics Data System (ADS)

    Craig, N.; Peticolas, L. M.; Mendez, B. J.; Luhmann, J. G.; Higdon, R.

    2006-05-01

    The Solar Terrestrial Relations Observatory (STEREO) is planned for launch in late summer 2006. STEREO will study the Sun with two spacecraft in orbit around the Sun moving on opposite sides of Earth. The primary science goal is to understand the nature of Coronal Mass Ejections (CMEs). This presentation will focus on one of the informal education efforts of our E/PO program for the IMPACT instrument suite aboard STEREO. We will share our participation in NASA's Sun-Earth Day event, which is scheduled to coincide with a total solar eclipse in March and is titled In a Different Light. We will show how this live eclipse Webcast, which reaches thousands of science center attendees, can inspire the public to observe, understand and be part of the Sun-Earth-Moon system. We will present video clips of STEREO-IMPACT team members Janet Luhmann and Nahide Craig participating in the Exploratorium's live Webcast of the 2006 solar eclipse on location in Side, Turkey, and the experiences and remarks of other STEREO scientists along the path of totality in Africa.

  8. A phase-based stereo vision system-on-a-chip.

    PubMed

    Díaz, Javier; Ros, Eduardo; Sabatini, Silvio P; Solari, Fabio; Mota, Sonia

    2007-02-01

    A simple and fast technique for depth estimation based on phase measurement has been adopted for the implementation of a real-time stereo system with sub-pixel resolution on an FPGA device. The technique avoids the attendant problem of phase warping. The designed system takes full advantage of the inherent processing parallelism and segmentation capabilities of FPGA devices to achieve a computation speed of 65 megapixels/s, which can be arranged with a customized frame-grabber module to process 211 frames/s at a size of 640×480 pixels. The processing speed achieved is higher than conventional camera frame rates, thus allowing the system to extract multiple estimations and be used as a platform to evaluate integration schemes of a population of neurons without increasing hardware resource demands.
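Phase-based disparity estimation can be illustrated on a single scanline: filter both rows with a complex Gabor filter and convert the local phase difference into a shift, which is sub-pixel by construction. This is a minimal software sketch of the principle only; the FPGA system described uses a hardware-friendly multi-filter design.

```python
import numpy as np

def phase_disparity(left_row, right_row, freq=0.25):
    """Phase-based disparity along one scanline: convolve both rows with
    a complex Gabor filter tuned to `freq` cycles/pixel and convert the
    local phase difference into a shift, d = dphi / (2*pi*freq)."""
    x = np.arange(-8, 9)
    gabor = np.exp(-x ** 2 / 18.0) * np.exp(2j * np.pi * freq * x)
    rl = np.convolve(left_row, gabor, mode="same")
    rr = np.convolve(right_row, gabor, mode="same")
    dphi = np.angle(rl * np.conj(rr))  # wrapped to (-pi, pi]
    return dphi / (2 * np.pi * freq)
```

Because the phase difference is wrapped, a single filter only measures disparities within half a fringe period; practical systems combine several scales or frequencies to extend the range.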

  9. Dig Hazard Assessment Using a Stereo Pair of Cameras

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Trebi-Ollennu, Ashitey

    2012-01-01

    This software evaluates the terrain within reach of a lander's robotic arm for dig hazards using a stereo pair of cameras that are part of the lander's sensor system. A relative level of risk is calculated for a set of dig sectors. There are two versions of this software; one is designed to run onboard a lander as part of the flight software, and the other runs on a PC under Linux as a ground tool that produces the same results generated on the lander, given stereo images acquired by the lander and downlinked to Earth. Onboard dig hazard assessment is accomplished by executing a workspace panorama command sequence. This sequence acquires a set of stereo pairs of images of the terrain the arm can reach, generates a set of candidate dig sectors, and assesses the dig hazard of each candidate dig sector. The 3D perimeter points of candidate dig sectors are generated using configurable parameters. A 3D reconstruction of the terrain in front of the lander is generated using a set of stereo images acquired from the mast cameras. The 3D reconstruction is used to evaluate the dig goodness of each candidate dig sector based on a set of eight metrics. The eight metrics are: 1. The maximum change in elevation in each sector, 2. The elevation standard deviation in each sector, 3. The forward tilt of each sector with respect to the payload frame, 4. The side tilt of each sector with respect to the payload frame, 5. The maximum size of missing data regions in each sector, 6. The percentage of a sector that has missing data, 7. The roughness of each sector, and 8. The monochrome intensity standard deviation of each sector. Each of the eight metrics forms a goodness image layer where the goodness value of each sector ranges from 0 to 1. Goodness values of 0 and 1 correspond to high and low risk, respectively. For each dig sector, the eight goodness values are merged by selecting the lowest one.
Including the merged goodness image layer, there are nine goodness image layers for each stereo pair of mast images.
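The merging rule stated above (per sector, keep the lowest goodness value, i.e. the most pessimistic metric wins) is simple to express directly; a sketch:

```python
import numpy as np

def merge_goodness(layers):
    """Merge per-metric goodness layers (values in [0, 1], where 1 is
    low risk and 0 high risk) by taking the pixelwise minimum, as the
    dig-sector assessment describes."""
    stack = np.stack(layers)
    assert ((stack >= 0) & (stack <= 1)).all(), "goodness must lie in [0, 1]"
    return stack.min(axis=0)
```

With the eight metric layers as input, the returned array is the ninth (merged) goodness layer.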

  10. Bias Reduction and Filter Convergence for Long Range Stereo

    NASA Technical Reports Server (NTRS)

    Sibley, Gabe; Matthies, Larry; Sukhatme, Gaurav

    2005-01-01

    We are concerned here with improving long range stereo by filtering image sequences. Traditionally, measurement errors from stereo camera systems have been approximated as 3-D Gaussians, where the mean is derived by triangulation and the covariance by linearized error propagation. However, there are two problems that arise when filtering such 3-D measurements. First, stereo triangulation suffers from a range dependent statistical bias; when filtering this leads to over-estimating the true range. Second, filtering 3-D measurements derived via linearized error propagation leads to apparent filter divergence; the estimator is biased to under-estimate range. To address the first issue, we examine the statistical behavior of stereo triangulation and show how to remove the bias by series expansion. The solution to the second problem is to filter with image coordinates as measurements instead of triangulated 3-D coordinates.
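The range-dependent bias arises because Z = fB/d is convex in the disparity d, so zero-mean disparity noise inflates the mean of Z; a series expansion gives E[Z] ≈ (fB/d)(1 + σ²/d²) to second order. A Monte Carlo illustration with assumed (not the paper's) camera parameters:

```python
import numpy as np

def triangulated_range(disparity, baseline=0.1, focal=500.0):
    """Stereo triangulation along the optical axis: Z = f * B / d
    (focal length in pixels, baseline in meters, disparity in pixels)."""
    return focal * baseline / disparity

# Zero-mean Gaussian noise on a small (long-range) disparity produces a
# *positive* mean range error, by Jensen's inequality applied to 1/d.
rng = np.random.default_rng(1)
d_true = 2.0
noise = rng.normal(0.0, 0.2, 100_000)
z_est = triangulated_range(d_true + noise)
bias = z_est.mean() - triangulated_range(d_true)  # positive, ~1% of range here
```

Filtering in image (disparity) coordinates, as the paper proposes, sidesteps this because the measurement noise there really is approximately zero-mean.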

  11. KSC-06pd2263

    NASA Image and Video Library

    2006-10-10

    KENNEDY SPACE CENTER, FLA. - With a convoy of escorts, the STEREO spacecraft is transported to Launch Pad 17-B on Cape Canaveral Air Force Station. At the pad the spacecraft will be lifted into the mobile service tower. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  12. A Comparative Study of Radar Stereo and Interferometry for DEM Generation

    NASA Astrophysics Data System (ADS)

    Gelautz, M.; Paillou, P.; Chen, C. W.; Zebker, H. A.

    2004-06-01

    In this experiment, we derive and compare radar stereo and interferometric digital elevation models (DEMs) of a study site in Djibouti, East Africa. As test data, we use a Radarsat stereo pair and ERS-2 and Radarsat interferometric data. Comparison of the reconstructed DEMs with a SPOT reference DEM shows that in regions of high coherence the DEMs produced by interferometry are of much better quality than the stereo result. However, the interferometric error histograms also show some pronounced outliers due to decorrelation and phase unwrapping problems on forested mountain slopes. The more robust stereo result is able to capture the general terrain shape, but finer surface details are lost. A fusion experiment demonstrates that merging the stereoscopic and interferometric DEMs by utilizing coherence-derived weights can significantly improve the accuracy of the computed elevation maps.
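The coherence-weighted fusion can be sketched as a per-pixel convex combination of the two DEMs. This is a simplification on our part; the abstract does not specify the exact weighting scheme.

```python
import numpy as np

def fuse_dems(dem_insar, dem_stereo, coherence):
    """Weighted merge of interferometric and stereo DEMs: where the
    interferometric coherence is high, trust the InSAR height; where it
    is low (decorrelation, unwrapping errors), fall back on the more
    robust stereo height."""
    w = np.clip(coherence, 0.0, 1.0)
    return w * dem_insar + (1.0 - w) * dem_stereo
```

In practice the weights would be calibrated against the expected height error of each technique rather than taken as the raw coherence.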

  13. Hyper thin 3D edge measurement of honeycomb core structures based on the triangular camera-projector layout & phase-based stereo matching.

    PubMed

    Jiang, Hongzhi; Zhao, Huijie; Li, Xudong; Quan, Chenggen

    2016-03-07

    We propose a novel hyper thin 3D edge measurement technique to measure the profile of 3D outer envelope of honeycomb core structures. The width of the edges of the honeycomb core is less than 0.1 mm. We introduce a triangular layout design consisting of two cameras and one projector to measure hyper thin 3D edges and eliminate data interference from the walls. A phase-shifting algorithm and the multi-frequency heterodyne phase-unwrapping principle are applied for phase retrievals on edges. A new stereo matching method based on phase mapping and epipolar constraint is presented to solve correspondence searching on the edges and remove false matches resulting in 3D outliers. Experimental results demonstrate the effectiveness of the proposed method for measuring the 3D profile of honeycomb core structures.
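The phase-shifting retrieval referred to above can be illustrated in its common four-step form (the abstract does not give the exact step count; the multi-frequency heterodyne principle then combines wrapped phases from several fringe frequencies into an unambiguous phase):

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Four-step phase-shifting algorithm: intensities captured with
    fringe phase offsets of 0, pi/2, pi and 3*pi/2, each of the form
    I_k = A + B*cos(phi + k*pi/2), give the wrapped phase
    phi = atan2(I3 - I1, I0 - I2), independent of A and B."""
    return np.arctan2(i3 - i1, i0 - i2)
```

On the honeycomb edges, the retrieved phase then drives the phase-mapping stereo matching between the two cameras.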

  14. Stereo vision with distance and gradient recognition

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition and movement to a target spot. With sensors based on infrared and ultrasound, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot much more powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed to proceed without failure. This study developed an algorithm for recognizing the distance and gradient of the environment through a stereo matching process.
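The distance part of such a recognition algorithm rests on the standard stereo relation Z = fB/d, and a gradient estimate follows from depths at two points a known horizontal distance apart. A minimal sketch with assumed parameter names:

```python
import numpy as np

def depth_from_disparity(d_px, focal_px, baseline_m):
    """Depth of a matched point from stereo disparity: Z = f * B / d."""
    return focal_px * baseline_m / d_px

def surface_gradient(z_near, z_far, step_m):
    """Slope angle in degrees between two surface points a known
    horizontal step apart, e.g. to detect an inclined plane or a step
    edge ahead of a walking robot."""
    return np.degrees(np.arctan2(abs(z_far - z_near), step_m))
```

A biped controller would threshold the estimated slope to choose between level-walking and climbing gaits.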

  15. KSC-06pd1552

    NASA Image and Video Library

    2006-07-13

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B on Cape Canaveral Air Force Station, workers stand by as the Boeing Delta II second stage for the STEREO launch is lowered onto the first stage for mating. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off in August 2006. Photo credit: NASA/George Shelton

  16. KSC-06pd1858

    NASA Image and Video Library

    2006-08-10

    KENNEDY SPACE CENTER, FLA. - Technicians inside the Astrotech facility in Titusville, Florida, move the STEREO spacecraft to the spin table. The twin observatories will undergo a spin test to check balance and alignment in preparation for flight. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off on Aug. 31, from Launch Pad 17-B on Cape Canaveral Air Force Station in Florida. Photo credit: NASA/George Shelton.

  17. KSC-06pd1538

    NASA Image and Video Library

    2006-07-10

    KENNEDY SPACE CENTER, FLA. - In the hazardous processing facility at Astrotech Space Operations in Titusville, Fla., technicians check Observatory A before it is lifted onto a scale for weight measurements. The observatory is one of two in the STEREO spacecraft and later will be fueled. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket no earlier than Aug. 1. Photo credit: NASA/Jack Pfaller

  18. KSC-06pd1742

    NASA Image and Video Library

    2006-08-05

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, the second stage of the Boeing Delta II launch vehicle for the STEREO spacecraft is being remated with the Delta first stage. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off on Aug. 31. Photo credit: NASA/Dimitri Gerondidakis

  19. KSC-06pd1887

    NASA Image and Video Library

    2006-08-18

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., workers check the fitting of the upper portion of the transportation canister onto the lower portion. The canister encases the STEREO spacecraft for its move to Launch Pad 17-B at Cape Canaveral Air Force Station. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off Aug. 31. Photo credit: NASA/Kim Shiflett

  20. KSC-06pd1854

    NASA Image and Video Library

    2006-08-10

    KENNEDY SPACE CENTER, FLA. - The STEREO spacecraft sits on a test stand inside the Astrotech facility in Titusville, Florida. The twin observatories will undergo a spin test to check balance and alignment in preparation for flight. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off on Aug. 31, from Launch Pad 17-B on Cape Canaveral Air Force Station in Florida. Photo credit: NASA/George Shelton.

  1. KSC-06pd1541

    NASA Image and Video Library

    2006-07-10

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., technicians are ready to wrap more plastic around STEREO's Observatory B before its transfer to the hazardous processing facility where it will be weighed and fueled. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket no earlier than Aug. 1. Photo credit: NASA/George Shelton

  2. KSC-06pd1880

    NASA Image and Video Library

    2006-08-18

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., workers move the stand holding the STEREO spacecraft and upper stage booster. The entire configuration will be encased for the move to Launch Pad 17-B at Cape Canaveral Air Force Station. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off Aug. 31. Photo credit: NASA/Kim Shiflett

  3. KSC-06pd1883

    NASA Image and Video Library

    2006-08-18

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., the upper portion of the transportation canister is lowered over the STEREO spacecraft (in front). When the entire configuration is encased, it will be moved to Launch Pad 17-B at Cape Canaveral Air Force Station. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off Aug. 31. Photo credit: NASA/Kim Shiflett

  4. KSC-06pd1886

    NASA Image and Video Library

    2006-08-18

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., workers help guide the upper portion of the transportation canister onto the lower portion. The canister encases the STEREO spacecraft for its move to Launch Pad 17-B at Cape Canaveral Air Force Station. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off Aug. 31. Photo credit: NASA/Kim Shiflett

  5. KSC-06pd1882

    NASA Image and Video Library

    2006-08-18

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., the upper portion of the transportation canister is lowered over the STEREO spacecraft (in front). When the entire configuration is encased, it will be moved to Launch Pad 17-B at Cape Canaveral Air Force Station. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off Aug. 31. Photo credit: NASA/Kim Shiflett

  6. KSC-06pd1551

    NASA Image and Video Library

    2006-07-13

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B on Cape Canaveral Air Force Station, the Boeing Delta II second stage for the STEREO launch is lowered toward the first stage. The two stages will be mated. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off in August 2006. Photo credit: NASA/George Shelton

  7. Development and evaluation of low cost game-based balance rehabilitation tool using the Microsoft Kinect sensor.

    PubMed

    Lange, Belinda; Chang, Chien-Yen; Suma, Evan; Newman, Bradley; Rizzo, Albert Skip; Bolas, Mark

    2011-01-01

    The use of commercial video games, such as the Nintendo WiiFit, as rehabilitation tools has recently gained much interest in the physical therapy arena. Motion tracking controllers such as the Nintendo Wiimote are not sensitive enough to accurately measure performance in all components of balance. Additionally, users can figure out how to "cheat" inaccurate trackers by performing minimal movement (e.g. twisting the wrist holding a Wiimote instead of making a full arm swing). Physical rehabilitation requires accurate and appropriate tracking and feedback of performance. To this end, we are developing applications that leverage recent advances in commercial video game technology to provide full-body control of animated virtual characters. A key component of our approach is the use of newly available low-cost depth-sensing camera technology that provides markerless full-body tracking on a conventional PC. The aim of this research was to develop and assess an interactive game-based rehabilitation tool for balance training of adults with neurological injury.

  8. Monitoring tumor motion by real time 2D/3D registration during radiotherapy.

    PubMed

    Gendrin, Christelle; Furtado, Hugo; Weber, Christoph; Bloch, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Bergmann, Helmar; Stock, Markus; Fichtinger, Gabor; Georg, Dietmar; Birkfellner, Wolfgang

    2012-02-01

    In this paper, we investigate the possibility of using X-ray-based real-time 2D/3D registration for non-invasive tumor motion monitoring during radiotherapy. The 2D/3D registration scheme is implemented using general-purpose computation on graphics hardware (GPGPU) programming techniques and several algorithmic refinements in the registration process. Validation is conducted off-line using a phantom and five clinical patient data sets. The registration is performed on a region of interest (ROI) centered around the planned target volume (PTV). The phantom motion is measured with an RMS error of 2.56 mm. For the patient data sets, a sinusoidal movement that clearly correlates to the breathing cycle is shown. Videos show a good match between X-ray and digitally reconstructed radiograph (DRR) displacement. Mean registration time is 0.5 s. We have demonstrated that real-time organ motion monitoring using image-based markerless registration is feasible. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  9. Stereo-Based Region-Growing using String Matching

    NASA Technical Reports Server (NTRS)

    Mandelbaum, Robert; Mintz, Max

    1995-01-01

    We present a novel stereo algorithm based on a coarse texture segmentation preprocessing phase. Matching is performed using a string comparison. Matching sub-strings correspond to matching sequences of textures. Inter-scanline clustering of matching sub-strings yields regions of matching texture. The shape of these regions yields information about an object's height, width, and azimuthal position relative to the camera pair. Hence, rather than the standard dense depth map, the output of this algorithm is a segmentation of the objects in the scene. Such a format is useful for the integration of stereo with other sensor modalities on a mobile robotic platform. It is also useful for localization: the height and width of a detected object may be used for landmark recognition, while depth and relative azimuthal location determine pose. The algorithm does not rely on the monotonicity of order of image primitives. Occlusions, exposures, and foreshortening effects are not problematic, and the algorithm can deal with certain types of transparencies. It is computationally efficient and very amenable to parallel implementation. Further, the epipolar constraints may be relaxed to some small but significant degree. A version of the algorithm has been implemented and tested on various types of images. It performs best on random dot stereograms, on images with easily filtered backgrounds (as in synthetic images), and on real scenes with uncontrived backgrounds.
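
    The per-scanline string-comparison idea can be imitated with a generic sequence matcher over coarse texture labels; `difflib` here is a stand-in sketch, not the paper's own string-matching machinery, and the label strings are invented:

```python
from difflib import SequenceMatcher

def matching_texture_runs(left_row, right_row, min_len=2):
    """Find matching sub-strings of per-pixel texture labels on one scanline.

    Each character encodes a coarse texture class; the returned triples
    (left_start, right_start, length) play the role of matching sub-strings,
    and left_start - right_start gives a per-run disparity.
    """
    sm = SequenceMatcher(None, left_row, right_row, autojunk=False)
    return [(m.a, m.b, m.size) for m in sm.get_matching_blocks()
            if m.size >= min_len]

left = "aaabbbccc"
right = "aabbbcccd"   # same texture sequence, shifted by one pixel
runs = matching_texture_runs(left, right)
```

    Clustering such runs across neighboring scanlines would then yield the texture-matched regions the abstract describes.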

  10. Modified Test Protocol Improves Sensitivity of the Stereo Fly Test.

    PubMed

    De La Cruz, Angie; Morale, Sarah E; Jost, Reed M; Kelly, Krista R; Birch, Eileen E

    2016-01-01

    Stereoacuity measurement is a common element of pediatric ophthalmic examinations. Although the Stereo Fly Test is routinely used to establish the presence of coarse stereopsis (3000 arcsec), it often yields a false negative "pass" due to learned responses and non-stereoscopic cues. We developed and evaluated a modified Stereo Fly Test protocol aimed at increasing sensitivity, thus reducing false negatives. The Stereo Fly Test was administered according to manufacturer instructions to 321 children aged 3-12 years. Children with a "pass" outcome (n = 147) were re-tested wearing glasses fitted with polarizers of matching orientation for both eyes to verify that they were responding to stereoscopic cues (modified protocol). The response to the standard Stereo Fly Test was considered a false negative (pass) if the child still pinched above the plate after disparity cues were eliminated. Randot® Preschool Stereoacuity and Butterfly Tests were used as gold standards. Sensitivity was 81% (95% CI: 0.75 - 0.86) for standard administration of the Stereo Fly Test (19% false negative "pass"). The modified protocol increased sensitivity to 90% (95% CI: 0.85 - 0.94). The modified two-step protocol is a simple and convenient way to administer the Stereo Fly Test with increased sensitivity in a clinical setting. © 2016 Board of Regents of the University of Wisconsin System, American Orthoptic Journal, Volume 66, 2016, ISSN 0065-955X, E-ISSN 1553-4448.
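
    The sensitivity figures quoted above follow from the usual definition TP / (TP + FN); the counts below are illustrative values scaled to 100 affected children, not the study's actual contingency table:

```python
def sensitivity(true_positives, false_negatives):
    """Fraction of truly affected subjects that the test correctly detects."""
    return true_positives / (true_positives + false_negatives)

# Per 100 children with a stereo deficit (hypothetical counts):
standard = sensitivity(81, 19)   # standard protocol: 19% false-negative "pass"
modified = sensitivity(90, 10)   # modified two-step protocol
```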

  11. Extraction and textural characterization of above-ground areas from aerial stereo pairs: a quality assessment

    NASA Astrophysics Data System (ADS)

    Baillard, C.; Dissard, O.; Jamet, O.; Maître, H.

    Above-ground analysis is a key step in the reconstruction of urban scenes, but it is a difficult task because of the diversity of the objects involved. We propose a new method for above-ground extraction from an aerial stereo pair which does not require any assumption about object shape or nature. A Digital Surface Model is first produced by a stereoscopic matching stage that preserves discontinuities, and then processed by a region-based Markovian classification algorithm. The extracted above-ground areas are finally characterized as man-made or natural according to the grey-level information. The quality of the results is assessed and discussed.

  12. FM Stereo and AM Stereo: Government Standard-Setting vs. the Marketplace.

    ERIC Educational Resources Information Center

    Huff, W. A. Kelly

    The emergence of frequency modulation (FM) radio signals, which arose from the desire to free broadcasting from the static noise common to amplitude modulation (AM), has produced the controversial development of stereo broadcasting. The resulting enhancement of sound quality helped FM pass AM in audience shares in less than two decades. The basic…

  13. Viking orbiter stereo imaging catalog

    NASA Technical Reports Server (NTRS)

    Blasius, K. R.; Vertrone, A. V.; Lewis, B. H.; Martin, M. D.

    1982-01-01

    The extremely long mission of the two Viking Orbiter spacecraft produced a wealth of photos of surface features. Many of these photos can be used to form stereo images, allowing the student of Mars to examine a subject in three dimensions. This catalog is a technical guide to the use of stereo coverage within the complex Viking imaging data set.

  14. Left Limb of North Pole of the Sun, March 20, 2007 (Anaglyph)

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Figure 1: This image was taken by the SECCHI Extreme UltraViolet Imager (EUVI) mounted on the STEREO-B spacecraft. STEREO-B is located behind the Earth, and follows the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual left eye in space. Figure 2: This image was taken by the SECCHI Extreme UltraViolet Imager (EUVI) mounted on the STEREO-A spacecraft. STEREO-A is located ahead of the Earth, and leads the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual right eye in space.

    NASA's Solar TErrestrial RElations Observatory (STEREO) satellites have provided the first three-dimensional images of the Sun. For the first time, scientists will be able to see structures in the Sun's atmosphere in three dimensions. The new view will greatly aid scientists' ability to understand solar physics and thereby improve space weather forecasting.

    This image is a composite of left and right eye color image pairs taken by the SECCHI Extreme UltraViolet Imager (EUVI) mounted on the STEREO-B and STEREO-A spacecraft. STEREO-B is located behind the Earth, and follows the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual left eye in space. STEREO-A is located ahead of the Earth, and leads the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual right eye in space.

    The EUVI imager is sensitive to wavelengths of light in the extreme ultraviolet portion of the spectrum. EUVI bands at wavelengths of 304, 171 and 195 Angstroms have been mapped to the red, blue, and green visible portions of the spectrum, and processed to emphasize the three-dimensional structure of the solar material.

    STEREO, a two-year mission launched in October 2006, will provide a unique and revolutionary view of the Sun-Earth system. The two nearly identical observatories -- one ahead of Earth in its orbit, the other trailing behind -- will trace the flow of energy and matter from the Sun to Earth. They will reveal the 3D structure of coronal mass ejections (violent eruptions of matter from the Sun that can disrupt satellites and power grids) and help us understand why they happen. STEREO will become a key addition to the fleet of space weather detection satellites by providing more accurate alerts for the arrival time of Earth-directed solar ejections with its unique side-viewing perspective.

    STEREO is the third mission in NASA's Solar Terrestrial Probes program within NASA's Science Mission Directorate, Washington. The Goddard Science and Exploration Directorate manages the mission, instruments, and science center. The Johns Hopkins University Applied Physics Laboratory, Laurel, Md., designed and built the spacecraft and is responsible for mission operations. The imaging and particle detecting instruments were designed and built by scientific institutions in the U.S., UK, France, Germany, Belgium, Netherlands, and Switzerland. JPL is a division of the California Institute of Technology in Pasadena.

  15. Using Combination of Planar and Height Features for Detecting Built-Up Areas from High-Resolution Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, F.; Cai, X.; Tan, W.

    2017-09-01

    Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrade the performance of built-up area detection when only planar texture, shape, and spectral features are used. Terrain slope and building height are often used to refine the results, but they are typically extracted from auxiliary data (e.g. LIDAR data or a DSM). Moreover, the auxiliary data must be acquired around the same time as the imagery; otherwise, built-up area detection accuracy suffers. Unlike single remotely sensed images, stereo imagery incorporates both planar and height information. Stereo imagery acquired by many satellites (e.g. Worldview-4, Pleiades-HR, ALOS-PRISM, and ZY-3) can therefore serve as a data source for identifying built-up areas. This paper presents a method for accurately identifying built-up areas from stereo imagery using a combination of planar and height features. A digital surface model (DSM) and a digital orthophoto map (DOM) are first generated from the stereo images. Height values of above-ground objects (e.g. buildings) are then calculated from the DSM and used to obtain a raw built-up area mask. Additional raw built-up area masks are obtained from the DOM using the Pantex and Gabor texture measures. The final high-accuracy built-up area result is derived from these raw masks by decision-level fusion. Experimental results show that accurate built-up areas can be extracted from stereo imagery. The height information used in the proposed method is derived from the stereo imagery itself, with no need for auxiliary height data (e.g. LIDAR data). The proposed method is suitable for spaceborne and airborne stereo pairs and triplets.
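
    Decision-level fusion of the raw masks can be sketched as a per-pixel majority vote; the vote rule and the toy masks below are assumptions, since the abstract does not specify the actual fusion operator:

```python
import numpy as np

def majority_fuse(*masks):
    """Decision-level fusion of binary built-up masks by per-pixel majority vote."""
    stack = np.stack(masks).astype(int)
    # A pixel is built-up when more than half of the detectors agree.
    return (stack.sum(axis=0) * 2 > stack.shape[0]).astype(int)

height = np.array([1, 1, 0, 0])   # mask from DSM building heights
pantex = np.array([1, 0, 1, 0])   # mask from the Pantex texture measure
gabor = np.array([1, 1, 1, 0])    # mask from Gabor filter responses
built_up = majority_fuse(height, pantex, gabor)
```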

  16. Current STEREO Status on the Far Side of the Sun

    NASA Astrophysics Data System (ADS)

    Thompson, William T.; Gurman, Joseph; Ossing, Daniel; Luhmann, Janet; Curtis, David; Schroeder, Peter; Mewaldt, Richard; Davis, Andrew; Wortman, Kristin; Russell, Christopher; Galvin, Antoinette; Kistler, Lynn; Ellis, Lorna; Howard, Russell; Vourlidas, Angelos; Rich, Nathan; Hutting, Lynn; Maksimovic, Milan; Bale, Stuart D.; Goetz, Keith

    2015-04-01

    The current positions of the two STEREO spacecraft on the opposite side of the Sun from Earth (superior solar conjunction) have forced some significant changes in the spacecraft and instrument operations. No communications are possible when a spacecraft is within 2 degrees of the Sun, requiring that it be put into safe mode until communications can be restored. Unfortunately, communications were lost with the STEREO Behind spacecraft on October 1, 2014, during testing for superior solar conjunction operations. We will discuss what is known about the causes of the loss of contact, the steps being taken to try to recover the Behind spacecraft, and what has been done to prevent a similar occurrence on STEREO Ahead. We will also discuss the effect of being on the far side of the Sun on the science operations of STEREO Ahead. Since August 20, 2014, the telemetry rate from the STEREO Ahead spacecraft has been greatly reduced by the need to keep the temperature of the feed horn on the high-gain antenna below acceptable limits. Even so, significant science is still possible from STEREO's unique position on the solar far side. We will discuss the science and space weather products that are, or will be, available from each STEREO instrument, when those products will be available, and how they will be used. Some data, including the regular space weather beacon products, are brought down for an average of a few hours each day during the daily real-time passes, while the in situ and radio beacon data are being stored on the onboard recorder to provide continuous 24-hour coverage for eventual downlink once the spacecraft is back to normal operations.

  17. The study of stereo vision technique for the autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Li, Pei; Wang, Xi; Wang, Jiang-feng

    2015-08-01

    Stereo vision using two or more cameras can recover 3D information from the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle judge pavement conditions within the field of view and measure obstacles on the road. In this paper, stereo vision techniques for obstacle avoidance in autonomous vehicles are studied, and the key techniques are analyzed and discussed. The system hardware is built and the software debugged, and the measurement performance is illustrated with measured data. Experiments show that the 3D scene within the field of view can be reconstructed effectively by stereo vision, providing a basis for judging pavement conditions. Compared with the navigation radar used in unmanned-vehicle measurement systems, the stereo vision system has advantages such as low cost and long measurement range, and it has good application prospects.

  18. STEREO as a "Planetary Hazards" Mission

    NASA Technical Reports Server (NTRS)

    Guhathakurta, M.; Thompson, B. J.

    2014-01-01

    NASA's twin STEREO probes, launched in 2006, have advanced the art and science of space weather forecasting more than any other spacecraft or solar observatory. By surrounding the Sun, they provide previously impossible early warnings of threats approaching Earth as they develop on the solar far side. They have also revealed the 3D shape and inner structure of CMEs, massive solar storms that can trigger geomagnetic storms when they collide with Earth. This improves the ability of forecasters to anticipate the timing and severity of such events. Moreover, the unique capability of STEREO to track CMEs in three dimensions allows forecasters to make predictions for other planets, giving rise to the possibility of interplanetary space weather forecasting as well. STEREO is one of those rare missions for which "planetary hazards" refers to more than one world. The STEREO probes also hold promise for the study of comets and potentially hazardous asteroids.

  19. KSC-06pd2261

    NASA Image and Video Library

    2006-10-10

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., the STEREO spacecraft is being moved out of the high bay. A truck will transport the spacecraft to Launch Pad 17-B on Cape Canaveral Air Force Station where it will be lifted into the mobile service tower. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  20. KSC-06pd2261a

    NASA Image and Video Library

    2006-10-10

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., the transporter carrying the STEREO spacecraft is secured to the truck that will transport it to Launch Pad 17-B on Cape Canaveral Air Force Station. At the pad, the spacecraft will be lifted into the mobile service tower. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  1. KSC-06pd2262

    NASA Image and Video Library

    2006-10-10

    KENNEDY SPACE CENTER, FLA. - At Astrotech Space Operations in Titusville, Fla., the transporter carrying the STEREO spacecraft is attached to the truck for transportation to Launch Pad 17-B on Cape Canaveral Air Force Station. At the pad, the spacecraft will be lifted into the mobile service tower. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard Space Flight Center. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. STEREO is expected to lift off Oct. 25. Photo credit: NASA/George Shelton

  2. KSC-06pd2277

    NASA Image and Video Library

    2006-10-11

    KENNEDY SPACE CENTER, FLA. - Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, workers help guide the upper segment of the transportation canister away from the STEREO spacecraft. STEREO is being prepared for launch, scheduled for Oct. 25. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft that will launch in a piggyback mode, separating after reaching the appropriate orbit. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. The STEREO mission is managed by Goddard. The Applied Physics Laboratory designed and built the spacecraft. The laboratory will maintain command and control of the observatories throughout the mission, while NASA tracks and receives the data, determines the orbit of the satellites, and coordinates the science results. Photo credit: NASA/Jim Grossmann

  3. Multipoint connectivity analysis of the May 2007 solar energetic particle events

    NASA Astrophysics Data System (ADS)

    Chollet, E. E.; Mewaldt, R. A.; Cummings, A. C.; Gosling, J. T.; Haggerty, D. K.; Hu, Q.; Larson, D.; Lavraud, B.; Leske, R. A.; Opitz, A.; Roelof, E. C.; Russell, C. T.; Sauvaud, J.-A.

    2010-12-01

    In May of 2007, the STEREO Ahead and Behind spacecraft, along with the ACE spacecraft situated between the two STEREO spacecraft, observed two small solar energetic particle (SEP) events. STEREO-A and -B observed nearly identical time profiles in the 19 May event, but in the 23 May event, the protons arrived significantly earlier at STEREO-A than at STEREO-B and the time-intensity profiles were markedly different. We present SEP anisotropy, suprathermal electron pitch angle and solar wind data to demonstrate distortion in the magnetic field topology produced by the passage of multiple interplanetary coronal mass ejections on 22 and 23 May, causing the two spacecraft to magnetically connect to different points back at the Sun. This pair of events illustrates the power of multipoint observations in detailed interpretation of complex events, since only a small shift in observer location results in different magnetic field line connections and different SEP time-intensity profiles.

  4. Application of Stereo Vision to the Reconnection Scaling Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klarenbeek, Johnny; Sears, Jason A.; Gao, Kevin W.

    The measurement and simulation of the three-dimensional structure of magnetic reconnection in astrophysical and laboratory plasmas is a challenging problem. At Los Alamos National Laboratory we use the Reconnection Scaling Experiment (RSX) to model 3D magnetohydrodynamic (MHD) relaxation of plasma-filled tubes. These magnetic flux tubes are called flux ropes. In RSX, the 3D structure of the flux ropes is explored with insertable probes. Stereo triangulation can be used to compute the 3D position of a probe from point correspondences in images from two calibrated cameras. While common applications of stereo triangulation include 3D scene reconstruction and robotics navigation, we will investigate the novel application of stereo triangulation in plasma physics to aid reconstruction of 3D data for RSX plasmas. Several challenges will be explored and addressed, such as minimizing 3D reconstruction errors in stereo camera systems and dealing with point correspondence problems.
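
    The triangulation step itself is textbook linear (DLT) triangulation from two calibrated views; the sketch below, with idealized unit-focal-length cameras, is a generic illustration and not the RSX implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2 are 3x4 projection matrices; x1, x2 are matching normalized pixel
    coordinates.  Each image point contributes two rows of the homogeneous
    system A X = 0, solved by SVD.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two axis-aligned unit-focal-length cameras, 1 m apart along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                    # projection in camera 1
x2 = (X_true[:2] - [1.0, 0.0]) / X_true[2]     # projection in camera 2
X_hat = triangulate(P1, P2, x1, x2)
```

    With noisy correspondences the SVD solution minimizes an algebraic rather than geometric error, which is one source of the 3D reconstruction errors the abstract mentions.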

  5. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide it into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated with a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve-fitting approach; both model-based methods show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which eases finding stereo correspondences. In contrast to monocular visual odometry approaches, the calibration of the individual depth maps makes the scale of the scene observable. Furthermore, the light-field information promises better tracking capabilities than the monocular case. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.

  6. Fast 3D NIR systems for facial measurement and lip-reading

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Ramm, Roland; Heist, Stefan; Rulff, Christian; Kühmstedt, Peter; Notni, Gunther

    2017-05-01

    Structured-light projection is a well-established optical method for non-destructive, contactless three-dimensional (3D) measurement of object surfaces. In particular, there is great demand for accurate and fast 3D scans of human faces or facial regions of interest in medicine, safety, face modeling, games, virtual life, and entertainment. New developments in facial expression detection and machine lip-reading can be used for communication tasks, future machine control, or human-machine interactions. In such cases, 3D data may offer more detail than 2D images, which can help to increase the power of current facial analysis algorithms. In this contribution, we present new 3D sensor technologies based on three different near-infrared (NIR) projection methods in combination with a stereo vision setup of two cameras. We explain the optical principles of an NIR GOBO projector, an array projector, and a modified multi-aperture projection method and compare their performance parameters. Further, we show experimental measurement results of applications where we realized fast, accurate, and irritation-free measurements of human faces.

  7. An enantiomer-based virtual screening approach: Discovery of chiral organophosphates as acetyl cholinesterase inhibitors.

    PubMed

    Zhang, Aiqian; Mu, Yunsong; Wu, Fengchang

    2017-04-01

    Although chiral organophosphates (OPs) have been used widely around the world, very little is known about their binding mechanisms with biological macromolecules. An in-depth understanding of the stereoselectivity of human AChE and the discovery of bioactive enantiomers of OPs can decrease the health risks of these chiral chemicals. In the present study, a flexible molecular docking approach was used to investigate the different binding modes of twelve phosphorus enantiomers. A pharmacophore model was then developed on the basis of the bioactive conformations of these compounds. After virtual screening, twenty-four potential bioactive compounds were found, of which three (ethyl p-nitrophenyl phenylphosphonate (EPN), 1-naphthaleneacetic anhydride and N,4-dimethyl-N-phenyl-benzenesulfonamide) were tested in different in vitro assays. The S-isomer of EPN was found to exhibit greater inhibitory activity towards human AChE than the corresponding R-isomer. These findings affirm that stereochemistry plays a crucial role in virtual screening and provide new insight into designing organophosphorus pesticides that are safer for human health. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Tropical to mid-latitude snow and ice accumulation, flow and glaciation on Mars

    USGS Publications Warehouse

    Head, J.W.; Neukum, G.; Jaumann, R.; Hiesinger, H.; Hauber, E.; Carr, M.; Masson, P.; Foing, B.; Hoffmann, H.; Kreslavsky, M.; Werner, S.; Milkovich, S.; Van Gasselt, S.

    2005-01-01

    Images from the Mars Express HRSC (High-Resolution Stereo Camera) of debris aprons at the base of massifs in eastern Hellas reveal numerous concentrically ridged lobate and pitted features and related evidence of extremely ice-rich glacier-like viscous flow and sublimation. Together with new evidence for recent ice-rich rock glaciers at the base of the Olympus Mons scarp superposed on larger Late Amazonian debris-covered piedmont glaciers, we interpret these deposits as evidence for geologically recent and recurring glacial activity in tropical and mid-latitude regions of Mars during periods of increased spin-axis obliquity when polar ice was mobilized and redeposited in microenvironments at lower latitudes. The data indicate that abundant residual ice probably remains in these deposits and that these records of geologically recent climate changes are accessible to future automated and human surface exploration.

  9. Tropical to mid-latitude snow and ice accumulation, flow and glaciation on Mars.

    PubMed

    Head, J W; Neukum, G; Jaumann, R; Hiesinger, H; Hauber, E; Carr, M; Masson, P; Foing, B; Hoffmann, H; Kreslavsky, M; Werner, S; Milkovich, S; van Gasselt, S

    2005-03-17

    Images from the Mars Express HRSC (High-Resolution Stereo Camera) of debris aprons at the base of massifs in eastern Hellas reveal numerous concentrically ridged lobate and pitted features and related evidence of extremely ice-rich glacier-like viscous flow and sublimation. Together with new evidence for recent ice-rich rock glaciers at the base of the Olympus Mons scarp superposed on larger Late Amazonian debris-covered piedmont glaciers, we interpret these deposits as evidence for geologically recent and recurring glacial activity in tropical and mid-latitude regions of Mars during periods of increased spin-axis obliquity when polar ice was mobilized and redeposited in microenvironments at lower latitudes. The data indicate that abundant residual ice probably remains in these deposits and that these records of geologically recent climate changes are accessible to future automated and human surface exploration.

  10. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  11. Reversed stereo depth and motion direction with anti-correlated stimuli.

    PubMed

    Read, J C; Eagle, R A

    2000-01-01

    We used anti-correlated stimuli to compare the correspondence problem in stereo and motion. Subjects performed a two-interval forced-choice disparity/motion direction discrimination task for different displacements. For anti-correlated 1-D band-pass noise, we found weak reversed depth and motion. With 2-D anti-correlated stimuli, stereo performance was impaired, but the perception of reversed motion was enhanced. We can explain the main features of our data in terms of channels tuned to different spatial frequencies and orientations. We suggest that a key difference between the solutions of the correspondence problem by the motion and stereo systems concerns the integration of information at different orientations.

  12. Stereo imaging with spaceborne radars

    NASA Technical Reports Server (NTRS)

    Leberl, F.; Kobrick, M.

    1983-01-01

    Stereo viewing is a valuable tool in photointerpretation and is used for the quantitative reconstruction of the three-dimensional shape of a topographical surface. Stereo viewing refers to a visual perception of space obtained by presenting an overlapping image pair to an observer so that a three-dimensional model is formed in the brain. Some of the observer's function can be performed by machine correlation of the overlapping images, so-called automated stereo correlation. The direct perception of space with two eyes is often called natural binocular vision; techniques for generating three-dimensional models of a surface from two sets of monocular image measurements are the topic of stereology.
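
    For a rectified image pair, the quantitative reconstruction reduces, point by point, to the classic depth-from-disparity relation Z = f·B/d; a minimal sketch with assumed (not radar-specific) parameters:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d,
    with focal length f in pixels, baseline B in metres and
    disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# 500 px focal length, 10 cm baseline, 25 px disparity -> 2 m depth.
print(depth_from_disparity(500.0, 0.1, 25.0))  # 2.0
```

    Automated stereo correlation supplies the per-pixel disparity d; the geometry then converts it to surface height, whether the images come from cameras or from spaceborne radars.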

  13. KSC-06pd1546

    NASA Image and Video Library

    2006-07-13

    KENNEDY SPACE CENTER, FLA. - At Launch Pad 17-B on Cape Canaveral Air Force Station, workers prepare the Boeing Delta II second stage for the STEREO launch to be lifted off the transporter. The second stage will then be lifted into the mobile service tower and mated with the first stage already in place. STEREO stands for Solar Terrestrial Relations Observatory and comprises two spacecraft. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off in August 2006. Photo credit: NASA/George Shelton

  14. KSC-06pd1868

    NASA Image and Video Library

    2006-08-11

    KENNEDY SPACE CENTER, FLA. - The STEREO observatories are the focus of attention at a media viewing held at Astrotech Space Operations in Titusville, Fla., on Aug. 11. The two observatories were mated for launch but will separate into different orbits for their mission. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off on Aug. 31, from Launch Pad 17-B on Cape Canaveral Air Force Station in Florida. Photo credit: NASA/George Shelton.

  15. KSC-06pd1867

    NASA Image and Video Library

    2006-08-11

    KENNEDY SPACE CENTER, FLA. - The STEREO observatories are the focus of attention at a media viewing held at Astrotech Space Operations in Titusville, Fla., on Aug. 11. The two observatories were mated for launch but will separate into different orbits for their mission. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off on Aug. 31, from Launch Pad 17-B on Cape Canaveral Air Force Station in Florida. Photo credit: NASA/George Shelton.

  16. KSC-06pd1535

    NASA Image and Video Library

    2006-07-10

    KENNEDY SPACE CENTER, FLA. - In the hazardous processing facility at Astrotech Space Operations in Titusville, Fla., technicians remove the protective cover from the top of Observatory A, one of two STEREO spacecraft. The observatory will be lifted onto a scale for weight measurements and later will be fueled. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket no earlier than Aug. 1. Photo credit: NASA/Jack Pfaller

  17. KSC-06pd1864

    NASA Image and Video Library

    2006-08-11

    KENNEDY SPACE CENTER, FLA. - The STEREO observatories are the focus of attention at a media viewing held at Astrotech Space Operations in Titusville, Fla., on Aug. 11. The two observatories were mated for launch but will separate into different orbits for their mission. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off on Aug. 31, from Launch Pad 17-B on Cape Canaveral Air Force Station in Florida. Photo credit: NASA/George Shelton.

  18. KSC-06pd1534

    NASA Image and Video Library

    2006-07-10

    KENNEDY SPACE CENTER, FLA. - In the hazardous processing facility at Astrotech Space Operations in Titusville, Fla., technicians begin removing the protective cover from Observatory A of the STEREO spacecraft. The observatory will be lifted onto a scale for weight measurements and later will be fueled. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket no earlier than Aug. 1. Photo credit: NASA/Jack Pfaller

  19. KSC-06pd1862

    NASA Image and Video Library

    2006-08-11

    KENNEDY SPACE CENTER, FLA. - The STEREO observatories are the focus of attention at a media viewing held at Astrotech Space Operations in Titusville, Fla. The two observatories were mated for launch but will separate into different orbits for their mission. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off on Aug. 31, from Launch Pad 17-B on Cape Canaveral Air Force Station in Florida. Photo credit: NASA/George Shelton.

  20. KSC-06pd1533

    NASA Image and Video Library

    2006-07-10

    KENNEDY SPACE CENTER, FLA. - In the hazardous processing facility at Astrotech Space Operations in Titusville, Fla., technicians begin removing the protective cover from Observatory A of the STEREO spacecraft. The observatory will be lifted onto a scale for weight measurements and later will be fueled. STEREO stands for Solar Terrestrial Relations Observatory. The STEREO mission is the first to take measurements of the sun and solar wind in three dimensions. This new view will improve our understanding of space weather and its impact on the Earth. STEREO is expected to lift off aboard a Boeing Delta II rocket no earlier than Aug. 1. Photo credit: NASA/Jack Pfaller
