Sample records for surface stereo imager

  1. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    A full-frame, high-speed 3D shape and deformation measurement technique using stereo-digital image correlation (stereo-DIC) and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue- and red-channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red- and blue-channel sub-images using a simple but effective color crosstalk correction method. These separated blue- and red-channel sub-images are processed by a regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements with a single high-speed camera, without sacrificing its spatial resolution. Two real experiments, shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
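    The channel-separation step described above can be sketched with a linear two-channel crosstalk model. The coefficients and layout below are illustrative assumptions, not the authors' calibrated values.

```python
import numpy as np

def correct_crosstalk(red_obs, blue_obs, k_rb=0.05, k_br=0.04):
    """Invert a linear two-channel crosstalk model:
       red_obs  = red_true  + k_rb * blue_true
       blue_obs = blue_true + k_br * red_true
    Coefficients are illustrative; in practice they are calibrated."""
    det = 1.0 - k_rb * k_br
    red_true = (red_obs - k_rb * blue_obs) / det
    blue_true = (blue_obs - k_br * red_obs) / det
    return red_true, blue_true

# The two stereo views arrive in the R and B channels of one color frame.
frame = np.random.rand(480, 640, 3)
view_a, view_b = correct_crosstalk(frame[..., 0], frame[..., 2])
```

    The two corrected sub-images would then feed a standard stereo-DIC correlation.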

  2. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.

    PubMed

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-03-05

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons generated by a single pixel's photodiode among different exposure taps, and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
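    The classical three-image normal recovery that this method builds on can be written as a per-pixel least-squares solve. The sketch below assumes a Lambertian surface and known unit light directions; it is a generic illustration, not the multi-tap sensor pipeline itself.

```python
import numpy as np

def photometric_stereo(images, lights):
    """Estimate per-pixel surface normals and albedo from >= 3 images
    under known directional lighting (Lambertian model).
    images: (k, H, W) intensities; lights: (k, 3) unit light directions."""
    k, h, w = images.shape
    intensities = images.reshape(k, -1)                     # (k, H*W)
    # Solve lights @ G = I in least squares; G = albedo * normal.
    G, *_ = np.linalg.lstsq(lights, intensities, rcond=None)
    albedo = np.linalg.norm(G, axis=0)
    normals = np.where(albedo > 0, G / np.maximum(albedo, 1e-12), 0.0)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

    With exactly three lights the solve is an inversion of the lighting matrix; more lights overdetermine the system and average out noise.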

  3. Shape and rotational elements of comet 67P/ Churyumov-Gerasimenko derived by stereo-photogrammetric analysis of OSIRIS NAC image data

    NASA Astrophysics Data System (ADS)

    Preusker, Frank; Scholten, Frank; Matz, Klaus-Dieter; Roatsch, Thomas; Willner, Konrad; Hviid, Stubbe; Knollenberg, Jörg; Kührt, Ekkehard; Sierks, Holger

    2015-04-01

    The European Space Agency's Rosetta spacecraft is equipped with the OSIRIS imaging system, which consists of a wide-angle and a narrow-angle camera (WAC and NAC). After the approach phase, Rosetta was inserted into a descent trajectory towards comet 67P/Churyumov-Gerasimenko (C-G) in early August 2014. Until early September, OSIRIS acquired several hundred NAC images of C-G's surface at different scales (from ~5 m/pixel during approach to ~0.9 m/pixel during descent). In that one-month observation period, the surface was imaged several times within different mapping sequences. With the comet's rotation period of ~12.4 h and the low spacecraft velocity (< 1 m/s), the entire NAC dataset provides multiple NAC stereo coverage, adequate for stereo-photogrammetric (SPG) analysis towards the derivation of 3D surface models. We constrained the OSIRIS NAC images with our stereo requirements (15° < stereo angles < 45°, incidence angles < 85°, emission angles < 45°, differences in illumination < 10°, scale better than 5 m/pixel) and extracted about 220 NAC images that provide at least triple stereo coverage of the entire illuminated surface in about 250 independent multi-stereo image combinations. For each image combination we determined tie points by multi-image matching in order to set up a 3D control network and a dense surface point cloud for the precise reconstruction of C-G's shape. The control point network defines the input for a stereo-photogrammetric least-squares adjustment. Based on the statistical analysis of the adjustments, we first refined C-G's rotational state (pole orientation and rotation period) and its behavior over time. 
Based upon this description of the orientation of C-G's body-fixed reference frame, we derived corrections for the nominal navigation data (pointing and position) within a final stereo-photogrammetric block adjustment where the mean 3D point accuracy of more than 100 million surface points has been improved from ~10 m to the sub-meter range. We finally applied point filtering and interpolation techniques to these surface 3D points and show the resulting SPG-based 3D surface model with a lateral sampling rate of about 2 m.

  4. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor †

    PubMed Central

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-01-01

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons generated by a single pixel's photodiode among different exposure taps, and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. PMID:29510599

  5. The Colour and Stereo Surface Imaging System (CaSSIS) for the ExoMars Trace Gas Orbiter

    USGS Publications Warehouse

    Thomas, N.; Cremonese, G.; Ziethe, R.; Gerber, M.; Brändli, M.; Bruno, G.; Erismann, M.; Gambicorti, L.; Gerber, T.; Ghose, K.; Gruber, M.; Gubler, P.; Mischler, H.; Jost, J.; Piazza, D.; Pommerol, A.; Rieder, M.; Roloff, V.; Servonet, A.; Trottmann, W.; Uthaicharoenpong, T.; Zimmermann, C.; Vernani, D.; Johnson, M.; Pelò, E.; Weigel, T.; Viertl, J.; De Roux, N.; Lochmatter, P.; Sutter, G.; Casciello, A.; Hausner, T.; Ficai Veltroni, I.; Da Deppo, V.; Orleanski, P.; Nowosielski, W.; Zawistowski, T.; Szalai, S.; Sodor, B.; Tulyakov, S.; Troznai, G.; Banaskiewicz, M.; Bridges, J.C.; Byrne, S.; Debei, S.; El-Maarry, M. R.; Hauber, E.; Hansen, C.J.; Ivanov, A.; Keszthelyil, L.; Kirk, Randolph L.; Kuzmin, R.; Mangold, N.; Marinangeli, L.; Markiewicz, W. J.; Massironi, M.; McEwen, A.S.; Okubo, Chris H.; Tornabene, L.L.; Wajer, P.; Wray, J.J.

    2017-01-01

    The Colour and Stereo Surface Imaging System (CaSSIS) is the main imaging system onboard the European Space Agency’s ExoMars Trace Gas Orbiter (TGO), which was launched on 14 March 2016. CaSSIS is intended to acquire moderately high resolution (4.6 m/pixel) targeted images of Mars at a rate of 10–20 images per day from a roughly circular orbit 400 km above the surface. Each image can be acquired in up to four colours, and stereo capability is foreseen through the use of a novel rotation mechanism. A typical product from one image acquisition will be a 9.5 km × ~45 km swath in full colour and stereo from one over-flight of the target, thereby reducing the atmospheric influences inherent in stereo and colour products from previous high-resolution imagers. This paper describes the instrument, including several novel technical solutions required to achieve the scientific requirements.

  6. Stereo imaging with spaceborne radars

    NASA Technical Reports Server (NTRS)

    Leberl, F.; Kobrick, M.

    1983-01-01

    Stereo viewing is a valuable tool in photointerpretation and is used for the quantitative reconstruction of the three-dimensional shape of a topographic surface. Stereo viewing refers to a visual perception of space achieved by presenting an overlapping image pair to an observer so that a three-dimensional model is formed in the brain. Some of the observer's function can be performed by machine correlation of the overlapping images, so-called automated stereo correlation. The direct perception of space with two eyes is often called natural binocular vision; techniques for generating three-dimensional models of a surface from two sets of monocular image measurements are the topic of stereology.
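    For a rectified image pair, the quantitative reconstruction referred to above reduces to triangulation: depth is focal length times baseline over disparity. The sketch below uses the familiar optical-stereo relation for illustration; spaceborne radar stereo uses analogous but sensor-specific geometry.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified stereo triangulation: depth = focal * baseline / disparity.
    All values are generic; disparity is the horizontal shift between views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A feature with 50 px of parallax, 1000 px focal length, 0.5 m baseline:
depth_m = depth_from_disparity(50.0, 1000.0, 0.5)  # 10.0 m
```

    The same relation explains why longer baselines (or larger convergence angles) improve height sensitivity.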

  7. Congruence analysis of point clouds from unstable stereo image sequences

    NASA Astrophysics Data System (ADS)

    Jepping, C.; Bethmann, F.; Luhmann, T.

    2014-06-01

    This paper deals with the correction of the exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, in photogrammetric car crash test recordings, where onboard high-speed stereo cameras are used to measure 3D surfaces. Such measurements produce 3D point clouds of deformed surfaces for a complete stereo sequence. The first objective of this research focusses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough to handle occlusions and other disturbances reliably. The second objective is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. The process is based on the sequential transformation of randomly selected point groups from one epoch to another using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
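    The RANSAC-based congruence analysis described above can be sketched as repeated minimal-set fits of a 3D similarity transformation followed by inlier counting. The Umeyama-style closed-form fit and the thresholds below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares 3D similarity (scale, rotation, translation), Umeyama-style."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s_c, d_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d_c.T @ s_c / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:      # guard against a reflection
        D[2, 2] = -1
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / s_c.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def ransac_congruence(src, dst, trials=200, tol=0.05, seed=0):
    """Find the largest point subset consistent with one similarity transform
    between two epochs, i.e. the stable (congruent) area."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(trials):
        idx = rng.choice(len(src), 4, replace=False)
        s, R, t = similarity_transform(src[idx], dst[idx])
        resid = np.linalg.norm(dst - (s * (R @ src.T).T + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

    Points rejected as outliers belong to the deformed (non-congruent) part of the surface.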

  8. Precision 3D Surface Reconstruction from LRO NAC Images Using Semi-Global Matching with Coupled Epipolar Rectification

    NASA Astrophysics Data System (ADS)

    Hu, H.; Wu, B.

    2017-07-01

    The Narrow-Angle Camera (NAC) on board the Lunar Reconnaissance Orbiter (LRO) comprises a pair of closely attached high-resolution push-broom sensors designed to improve swath coverage. However, the two image sensors do not share the same lenses and cannot be modelled geometrically using a single physical model. Thus, previous work on dense matching of NAC stereo pairs would generally create two to four stereo models, each with an irregular, overlapping region of varying size. Semi-Global Matching (SGM) is a well-known dense matching method that has been widely used for image-based 3D surface reconstruction. SGM is a global matching algorithm relying on inference in a larger context, rather than on individual pixels, to establish stable correspondences. The stereo configuration of LRO NAC images therefore poses severe problems for matching methods such as SGM that emphasize a global matching strategy. Aiming to use SGM for matching LRO NAC stereo pairs for precision 3D surface reconstruction, this paper presents a coupled epipolar rectification method for NAC stereo images which merges the image pair in disparity space so that only one stereo model needs to be estimated. For a stereo pair (four NAC images), the method starts with boresight calibration, finding correspondences in the small overlapping strip between each pair of NAC images and bundle-adjusting the stereo pair in order to eliminate the vertical disparities. Then the dominant direction of the images is estimated by iteratively projecting the centre of the coverage area onto the reference image and back-projecting it onto the bounding-box plane determined by the image orientation parameters. The dominant direction determines an affine model, by which the pair of NAC images is warped onto the object space at a given ground resolution; at the same time a mask is produced indicating the owner image of each pixel. SGM is then used to generate a disparity map for the stereo pair, each correspondence is transformed back to its owner image, and 3D points are derived through photogrammetric space intersection. Experimental results reveal that the proposed method reduces the gaps and inconsistencies caused by inaccurate boresight offsets between the two NAC cameras and by the irregular overlapping regions, and automatically generates precise and consistent 3D surface models from the NAC stereo images.
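    The core of SGM referred to above is a pixelwise matching cost regularized by a small penalty (P1) for unit disparity changes and a larger penalty (P2) for jumps, accumulated along scanline paths. Below is a single-row, single-path toy version; real SGM sums 8 to 16 such paths over a 2D image, and the penalty values here are arbitrary assumptions.

```python
import numpy as np

def sgm_1d(left, right, max_disp, p1=0.1, p2=0.8):
    """Minimal semi-global matching sketch on one scanline: absolute-difference
    cost volume aggregated left-to-right with P1/P2 smoothness penalties."""
    w = left.shape[0]
    cost = np.full((w, max_disp + 1), np.inf)
    for d in range(max_disp + 1):
        cost[d:, d] = np.abs(left[d:] - right[:w - d])
    agg = cost.copy()
    for x in range(1, w):
        prev = agg[x - 1]
        best_prev = prev.min()
        for d in range(max_disp + 1):
            candidates = [prev[d],
                          (prev[d - 1] + p1) if d > 0 else np.inf,
                          (prev[d + 1] + p1) if d < max_disp else np.inf,
                          best_prev + p2]
            agg[x, d] = cost[x, d] + min(candidates) - best_prev
    return agg.argmin(axis=1)  # winner-take-all disparity per pixel
```

    The subtraction of `best_prev` keeps the accumulated values bounded, as in the published SGM formulation.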

  9. Effects of illumination differences on photometric stereo shape-and-albedo-from-shading for precision lunar surface reconstruction

    NASA Astrophysics Data System (ADS)

    Chung Liu, Wai; Wu, Bo; Wöhler, Christian

    2018-02-01

    Photoclinometric surface reconstruction techniques such as Shape-from-Shading (SfS) and Shape-and-Albedo-from-Shading (SAfS) retrieve topographic information of a surface on the basis of the reflectance information embedded in the image intensity of each pixel. SfS or SAfS techniques have been utilized to generate pixel-resolution digital elevation models (DEMs) of the Moon and other planetary bodies. Photometric stereo SAfS analyzes images under multiple illumination conditions to improve the robustness of reconstruction. In this case, the directional difference in illumination between the images is likely to affect the quality of the reconstruction result. In this study, we quantitatively investigate the effects of illumination differences on photometric stereo SAfS. Firstly, an algorithm for photometric stereo SAfS is developed, and then, an error model is derived to analyze the relationships between the azimuthal and zenith angles of illumination of the images and the reconstruction qualities. The developed algorithm and error model were verified with high-resolution images collected by the Narrow Angle Camera (NAC) of the Lunar Reconnaissance Orbiter Camera (LROC). Experimental analyses reveal that (1) the resulting error in photometric stereo SAfS depends on both the azimuthal and the zenith angles of illumination as well as the general intensity of the images and (2) the predictions from the proposed error model are consistent with the actual slope errors obtained by photometric stereo SAfS using the LROC NAC images. The proposed error model enriches the theory of photometric stereo SAfS and is of significance for optimized lunar surface reconstruction based on SAfS techniques.
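    One way to see why the directional difference in illumination matters is through the conditioning of the stacked light-direction matrix used in the per-pixel photometric stereo solve: nearly coincident azimuths make the solve amplify image noise. The sketch below is a generic illustration, not the error model derived in the paper.

```python
import numpy as np

def light_dir(azimuth_deg, zenith_deg):
    """Unit illumination vector from azimuth/zenith angles (degrees)."""
    az, ze = np.radians(azimuth_deg), np.radians(zenith_deg)
    return np.array([np.sin(ze) * np.cos(az), np.sin(ze) * np.sin(az), np.cos(ze)])

def lighting_condition(angles):
    """Condition number of the stacked light-direction matrix; large values
    mean the normal estimate strongly amplifies intensity noise."""
    L = np.vstack([light_dir(a, z) for a, z in angles])
    return np.linalg.cond(L)

well_separated = lighting_condition([(0, 45), (120, 45), (240, 45)])
nearly_parallel = lighting_condition([(0, 45), (5, 45), (10, 45)])
```

    Azimuths spread evenly around the target give a nearly optimal conditioning, while images lit from almost the same direction are close to degenerate.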

  10. Viking orbiter stereo imaging catalog

    NASA Technical Reports Server (NTRS)

    Blasius, K. R.; Vertrone, A. V.; Lewis, B. H.; Martin, M. D.

    1982-01-01

    The extremely long missions of the two Viking Orbiter spacecraft produced a wealth of photos of surface features. Many of these photos can be used to form stereo images, allowing the student of Mars to examine a subject in three dimensions. This catalog is a technical guide to the use of stereo coverage within the complex Viking imaging data set.

  11. An application of the MPP to the interactive manipulation of stereo images of digital terrain models

    NASA Technical Reports Server (NTRS)

    Pol, Sanjay; Mcallister, David; Davis, Edward

    1987-01-01

    Massively Parallel Processor algorithms were developed for the interactive manipulation of flat shaded digital terrain models defined over grids. The emphasis is on real time manipulation of stereo images. Standard graphics transformations are applied to a 128 x 128 grid of elevations followed by shading and a perspective projection to produce the right eye image. The surface is then rendered using a simple painter's algorithm for hidden surface removal. The left eye image is produced by rotating the surface 6 degs about the viewer's y axis followed by a perspective projection and rendering of the image as described above. The left and right eye images are then presented on a graphics device using standard stereo technology. Performance evaluations and comparisons are presented.
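    The left/right image generation described above can be sketched as a rotation plus perspective projection. Note that the sketch rotates about a vertical axis through the scene centre rather than the eye itself, since a rotation about the projection centre alone produces no parallax; the axis placement and focal length are assumptions of this illustration.

```python
import numpy as np

def perspective_project(points, f=2.0):
    """Project 3D points (N, 3) onto an image plane at distance f.
    Camera at the origin looking down -z; points assumed to have z < 0."""
    z = points[:, 2]
    return f * points[:, :2] / -z[:, None]

def stereo_pair(points, sep_deg=6.0):
    """Right-eye view is the direct projection; the left-eye view re-projects
    the surface rotated sep_deg about a vertical axis through the scene centre."""
    a = np.radians(sep_deg)
    rot_y = np.array([[np.cos(a), 0.0, np.sin(a)],
                      [0.0, 1.0, 0.0],
                      [-np.sin(a), 0.0, np.cos(a)]])
    centre = points.mean(axis=0)
    right = perspective_project(points)
    left = perspective_project((points - centre) @ rot_y.T + centre)
    return left, right
```

    Points at different depths receive different horizontal shifts between the two views, which is exactly the parallax the stereo display exploits.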

  12. LROC Stereo Observations

    NASA Astrophysics Data System (ADS)

    Beyer, Ross A.; Archinal, B.; Li, R.; Mattson, S.; Moratto, Z.; McEwen, A.; Oberst, J.; Robinson, M.

    2009-09-01

    The Lunar Reconnaissance Orbiter Camera (LROC) will obtain two types of multiple overlapping coverage to derive terrain models of the lunar surface. LROC has two Narrow Angle Cameras (NACs), working jointly to provide a wider (in the cross-track direction) field of view, as well as a Wide Angle Camera (WAC). LRO's orbit precesses, and the same target can be viewed at different solar azimuth and incidence angles providing the opportunity to acquire 'photometric stereo' in addition to traditional 'geometric stereo' data. Geometric stereo refers to images acquired by LROC with two observations at different times. They must have different emission angles to provide a stereo convergence angle such that the resultant images have enough parallax for a reasonable stereo solution. The lighting at the target must not be radically different. If shadows move substantially between observations, it is very difficult to correlate the images. The majority of NAC geometric stereo will be acquired with one nadir and one off-pointed image (20 degree roll). Alternatively, pairs can be obtained with two spacecraft rolls (one to the left and one to the right) providing a stereo convergence angle up to 40 degrees. Overlapping WAC images from adjacent orbits can be used to generate topography of near-global coverage at kilometer-scale effective spatial resolution. Photometric stereo refers to multiple-look observations of the same target under different lighting conditions. LROC will acquire at least three (ideally five) observations of a target. These observations should have near identical emission angles, but with varying solar azimuth and incidence angles. These types of images can be processed via various methods to derive single pixel resolution topography and surface albedo. The LROC team will produce some topographic models, but stereo data collection is focused on acquiring the highest quality data so that such models can be generated later.
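    The trade-off between convergence angle and height precision for geometric stereo can be illustrated with a standard photogrammetric rule of thumb: matching uncertainty in pixels mapped through the parallax-to-height ratio. The numbers below are generic assumptions for illustration, not LROC requirements.

```python
import math

def height_precision(gsd_m, convergence_deg, matching_precision_px=0.3):
    """Rule-of-thumb vertical precision of a stereo pair: the image-matching
    uncertainty (in pixels) scaled by GSD and the convergence geometry."""
    return matching_precision_px * gsd_m / math.tan(math.radians(convergence_deg))

nadir_plus_roll = height_precision(0.5, 20.0)  # one nadir + one 20-degree roll
double_roll = height_precision(0.5, 40.0)      # rolls to the left and right
```

    Doubling the convergence angle improves height precision, at the cost of larger perspective differences that make image correlation harder.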

  13. Topography from shading and stereo

    NASA Technical Reports Server (NTRS)

    Horn, Berthold K. P.

    1994-01-01

    Methods exploiting photometric information in images that have been developed in machine vision can be applied to planetary imagery. Integrating shape from shading, binocular stereo, and photometric stereo yields a robust system for recovering detailed surface shape and surface reflectance information. Such a system is useful in producing quantitative information from the vast volume of imagery being received, as well as in helping visualize the underlying surface.

  14. Wide Swath Stereo Mapping from Gaofen-1 Wide-Field-View (WFV) Images Using Calibration

    PubMed Central

    Chen, Shoubin; Liu, Jingbin; Huang, Wenchao

    2018-01-01

    The development of Earth observation systems has changed the nature of survey and mapping products, as well as the methods for updating maps. Among optical satellite mapping methods, the multiline-array stereo and agile stereo modes are the most common ways of acquiring stereo images. However, differences in temporal resolution and spatial coverage limit their application. To address this issue, our study takes advantage of the wide spatial coverage and high revisit frequency of wide swath images and aims at verifying the feasibility of stereo mapping in the wide swath stereo mode and reaching a reliable accuracy level through calibration. In contrast with classic stereo modes, the wide swath stereo mode is characterized by both wide spatial coverage and high temporal resolution and is capable of obtaining a wide range of stereo images over a short period. In this study, Gaofen-1 (GF-1) wide-field-view (WFV) images, with a total imaging width of 800 km, a multispectral resolution of 16 m and a revisit period of four days, are used for wide swath stereo mapping. To acquire a high-accuracy digital surface model (DSM), the nonlinear system distortion in the GF-1 WFV images is detected and compensated for in advance. With proper calibration, the elevation accuracy of the wide swath stereo mode of the GF-1 WFV images can be improved from 103 m to 30 m for a DSM, meeting the demands of 1:250,000-scale mapping and rapid topographic map updates and showing the improved efficacy of satellite imaging. PMID:29494540

  15. The High Resolution Stereo Camera (HRSC): 10 Years of Imaging Mars

    NASA Astrophysics Data System (ADS)

    Jaumann, R.; Neukum, G.; Tirsch, D.; Hoffmann, H.

    2014-04-01

    The HRSC Experiment: Imagery is the major source of our current understanding of the geologic evolution of Mars in qualitative and quantitative terms. Imaging is required to enhance our knowledge of Mars with respect to geological processes occurring on local, regional and global scales and is an essential prerequisite for detailed surface exploration. The High Resolution Stereo Camera (HRSC) of ESA's Mars Express mission (MEx) is designed to simultaneously map the morphology, topography, structure and geologic context of the surface of Mars as well as atmospheric phenomena [1]. The HRSC directly addresses two of the main scientific goals of the Mars Express mission: (1) high-resolution three-dimensional photogeologic surface exploration and (2) the investigation of surface-atmosphere interactions over time; it also significantly supports (3) the study of atmospheric phenomena by multi-angle coverage and limb sounding, as well as (4) multispectral mapping by providing high-resolution three-dimensional color context information. In addition, the stereoscopic imagery especially characterizes landing sites and their geologic context [1]. The HRSC surface resolution and digital terrain models bridge the gap in scale between the highest-ground-resolution images (e.g., HiRISE) and global coverage observations (e.g., Viking); the same holds for DTMs (e.g., MOLA and local high-resolution DTMs). HRSC is also used as a cartographic basis for correlating panchromatic and multispectral stereo data. The unique multi-angle imaging technique of the HRSC supports its stereo capability by providing not only a stereo triplet but a stereo quintuplet, making the photogrammetric processing very robust [1, 3]. The capabilities for three-dimensional orbital reconnaissance of the Martian surface are ideally met by HRSC, making this camera unique in the international Mars exploration effort.

  16. A Photogrammetric Pipeline for the 3D Reconstruction of CaSSIS Images on Board ExoMars TGO

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Re, C.; Mudric, T.; Pommerol, A.; Thomas, N.; Cremonese, G.

    2017-07-01

    CaSSIS (Colour and Stereo Surface Imaging System) is the stereo imaging system onboard the European Space Agency and ROSCOSMOS ExoMars Trace Gas Orbiter (TGO), which was launched on 14 March 2016 and entered an elliptical Mars orbit on 19 October 2016. During the first bounded orbits, CaSSIS returned its first multiband images, taken on 22 and 26 November 2016. The telescope acquired 11 images, each composed of 30 framelets, of the Martian surface near the Hebes Chasma and Noctis Labyrinthus regions, reaching a distance of 250 km from the surface at closest approach. Despite the eccentricity of this first orbit, CaSSIS provided one stereo pair with a mean ground resolution of 6 m from a mean distance of 520 km. The team at the Astronomical Observatory of Padova (OAPD-INAF) is involved in several stereo-oriented missions and is developing software for the generation of Digital Terrain Models from CaSSIS images. The software will then be adapted for other projects involving stereo camera systems. To compute accurate 3D models, several sequential methods and tools have been developed. The preliminary pipeline provides the generation of rectified images from the CaSSIS framelets, a matching core and post-processing methods. The software includes, in particular, automatic tie-point detection using the Speeded-Up Robust Features (SURF) operator, an initial search for correspondences with a Normalized Cross-Correlation (NCC) algorithm, and the Adaptive Least Squares Matching (LSM) algorithm in a hierarchical approach. This work shows a preliminary DTM generated from the first CaSSIS stereo images.
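    The initial correspondence search via Normalized Cross-Correlation mentioned above can be sketched as an exhaustive template search maximizing the NCC score. This toy version omits the SURF seeding, the hierarchical scheme and the LSM refinement of the actual pipeline.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation score of two equally sized patches,
    invariant to linear brightness and contrast changes."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_ncc(image, template):
    """Exhaustive search: best top-left position of template in image."""
    th, tw = template.shape
    ih, iw = image.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

    In a real pipeline the NCC search only provides pixel-level seeds; subpixel accuracy comes from the subsequent least-squares matching step.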

  17. Quantitative fractography by digital image processing: NIH Image macro tools for stereo pair analysis and 3-D reconstruction.

    PubMed

    Hein, L R

    2001-10-01

    A set of NIH Image macro programs was developed to make qualitative and quantitative analyses of digital stereo pictures produced by scanning electron microscopes. These tools were designed for image alignment, anaglyph representation, animation, reconstruction of true elevation surfaces, reconstruction of elevation profiles, true-scale elevation mapping and, for the quantitative approach, surface area and roughness calculations. Limitations regarding processing time, scanning techniques and programming concepts are also discussed.
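    Elevation reconstruction from SEM stereo pairs conventionally uses the parallax measured between two views taken at different tilt angles. The sketch below uses the standard eucentric-tilt relation h = p / (2 M sin θ), which may differ in detail from the macros' implementation.

```python
import math

def height_from_parallax(parallax_um, tilt_deg, magnification=1.0):
    """Relative height between two fracture-surface features from the parallax
    measured on an SEM stereo pair taken at +/- tilt_deg about a eucentric axis.
    Standard relation: h = p / (2 * M * sin(tilt))."""
    return parallax_um / (2.0 * magnification * math.sin(math.radians(tilt_deg)))
```

    Measuring parallax for many feature pairs along a line yields the elevation profiles that the macros reconstruct.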

  18. Stereo View of Phoenix Test Sample Site

    NASA Image and Video Library

    2008-06-02

    This anaglyph image, acquired by NASA’s Phoenix Lander’s Surface Stereo Imager on June 1, 2008, shows a stereoscopic 3D view of the so-called Knave of Hearts first-dig test area to the north of the lander. 3D glasses are necessary to view this image.
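    An anaglyph like this one is built by placing the left view in the red channel and the right view in the green and blue (cyan) channels, so that matching filter glasses deliver each eye its own image. A minimal grayscale sketch, not the Phoenix processing chain:

```python
import numpy as np

def make_anaglyph(left_gray, right_gray):
    """Red/cyan anaglyph from two grayscale views of the same scene:
    left view -> red channel, right view -> green and blue channels."""
    h, w = left_gray.shape
    rgb = np.zeros((h, w, 3), dtype=left_gray.dtype)
    rgb[..., 0] = left_gray
    rgb[..., 1] = right_gray
    rgb[..., 2] = right_gray
    return rgb
```

    The horizontal offsets between the two encoded views carry the depth information the brain fuses into a 3D percept.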

  19. Forest aboveground biomass mapping using spaceborne stereo imagery acquired by Chinese ZY-3

    NASA Astrophysics Data System (ADS)

    Sun, G.; Ni, W.; Zhang, Z.; Xiong, C.

    2015-12-01

    Besides LiDAR data, another valuable type of data that is directly sensitive to forest vertical structure, and more suitable for regional mapping of forest biomass, is stereo imagery (photogrammetry). Photogrammetry is the traditional technique for deriving terrain elevation. The elevation of the top of a tree canopy can be measured directly from stereo imagery, but winter images are required to obtain the elevation of the ground surface, because stereo images are acquired by optical sensors that cannot penetrate dense forest canopies under leaf-on conditions. Several spaceborne stereoscopic systems with higher spatial resolutions have been launched in the past several years. For example, the Chinese satellite Zi Yuan 3 (ZY-3), specifically designed for the collection of stereo imagery with a resolution of 3.6 m for the forward and backward views and 2.1 m for the nadir view, was launched on January 9, 2012. Our previous studies have demonstrated that spaceborne stereo imagery acquired in summer describes forest structure well, while the ground surface elevation can be extracted from stereo imagery acquired in winter. This study mainly focused on assessing the mapping of forest biomass through the combination of spaceborne stereo imagery acquired in summer and in winter. The test site is located in the Daxing Anling Mountains, as shown in Fig. 1. The Daxing Anling site is on the southern border of the boreal forest, in frigid-temperate-zone coniferous forest vegetation. The dominant tree species is Dahurian larch (Larix gmelinii). Ten scenes of ZY-3 stereo images are used in this study: five acquired on March 14, 2012 and the other five on September 7, 2012. Their spatial coverage is shown in Fig. 2-a. Fig. 2-b is the mosaic of nadir images acquired on 09/07/2012, and Fig. 2-c is the corresponding digital surface model (DSM) derived from the stereo images of the same date. Fig. 2-d is the difference between this DSM and the digital elevation model (DEM) derived from the stereo imagery acquired on 03/14/2012. A detailed analysis will be given in the final report.
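    The DSM-minus-DEM differencing underlying this approach yields a canopy height model from which biomass can be estimated. In the sketch below the allometric coefficients are placeholders for illustration, not values fitted in the study.

```python
import numpy as np

def canopy_height_model(dsm_summer, dem_winter):
    """Canopy height as summer DSM (canopy top, leaf-on) minus winter DEM
    (ground, leaf-off); negative differences are clipped to zero."""
    return np.clip(dsm_summer - dem_winter, 0.0, None)

def stand_biomass(chm, a=15.0, b=1.1):
    """Illustrative power-law allometry, biomass ~ a * height^b;
    the coefficients are placeholders, not fitted values."""
    return a * np.power(chm, b)
```

    In practice the height-to-biomass relation would be calibrated against field plots before regional mapping.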

  20. New Topographic Maps of Io Using Voyager and Galileo Stereo Imaging and Photoclinometry

    NASA Astrophysics Data System (ADS)

    White, O. L.; Schenk, P. M.; Hoogenboom, T.

    2012-03-01

    Stereo and photoclinometry processing have been applied to Voyager and Galileo images of Io in order to derive regional- and local-scale topographic maps of 20% of the moon’s surface to date. We present initial mapping results.

  1. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is utilized by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are then used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely to physical surfaces in the scene. The technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented for a complex urban scene containing various man-made and natural features. This scene poses a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. The improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.
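    The patch-guided statistical filtering of the disparity map can be sketched as replacing, within each nearly homogeneous segment, disparities that deviate strongly from the segment median. The median rule and threshold below are illustrative assumptions, not the authors' exact statistics.

```python
import numpy as np

def refine_disparity(disparity, segments, max_dev=2.0):
    """Replace disparity outliers with the median of their intensity-homogeneous
    segment, assuming each segment maps to a single physical surface."""
    out = disparity.astype(float).copy()
    for label in np.unique(segments):
        mask = segments == label
        med = np.median(out[mask])
        bad = mask & (np.abs(out - med) > max_dev)
        out[bad] = med
    return out
```

    Because the filter only uses segment membership, it works equally on disparity maps from area-based or feature-based matchers.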

  2. Mapping Io's Surface Topography Using Voyager and Galileo Stereo Images and Photoclinometry

    NASA Astrophysics Data System (ADS)

    White, O. L.; Schenk, P.

    2011-12-01

    O.L. White and P.M. Schenk, Lunar and Planetary Institute, 3600 Bay Area Boulevard, Houston, Texas 77058. No instrumentation specifically designed to measure the topography of a planetary surface has ever been deployed to any of the Galilean satellites. Available methods for performing such a task in the absence of the relevant instrumentation include photoclinometry, shadow-length measurement, and stereo imaging. Stereo imaging is generally the most accurate of these methods, but is subject to limitations. Io is a challenging subject for stereo imaging given that much of its surface is composed of volcanic plains that are smooth at the resolution of many of the available global images. Radiation noise in Galileo images can also complicate mapping. Paterae, mountains and a few tall shield volcanoes, the only features of any considerable relief, exist as isolated features within these plains; previous research on topography measurement on Io using stereo imaging has focused on these features and has been localized in scope [Schenk et al., 2001; Schenk et al., 2004]. With customized ISIS software developed at LPI, the ultimate intention of our research is to use stereo and photoclinometry processing of Voyager and Galileo images to create a global topographic map of Io that will constrain the shapes of local- and regional-scale features on this volcanic moon and will be tied to the global shape model of Thomas et al. [1998]. Applications of these data include investigation of how global heat flow varies across the moon and its relation to mantle convection and tidal heating [Tackley et al., 2001], as well as its correlation with local geology. Initial stereo mapping has focused on the Ra Patera/Euboea Montes/Acala Fluctus area, while initial photoclinometry mapping has focused on several paterae and calderas across Io. The results of both stereo and photoclinometry mapping indicate that distinct topographic areas may correlate with surface geology. To date we have obtained diameter and depth measurements for ten calderas using these DEMs, and we look forward to studying regional and latitudinal variation in caldera depth. References: Schenk, P.M., et al. (2001), J. Geophys. Res., 106, 33,201-33,222; Schenk, P.M., et al. (2004), Icarus, 169, 98-110; Tackley, P.J., et al. (2001), Icarus, 149, 79-93; Thomas, P., et al. (1998), Icarus, 135, 175-180. The authors acknowledge the support of the NASA Outer Planet Research and Planetary Geology and Geophysics research programs.

  3. Viking orbiter stereo imaging catalog

    NASA Technical Reports Server (NTRS)

    Blasius, K. R.; Vetrone, A. V.; Martin, M. D.

    1980-01-01

The extremely long missions of the two Viking Orbiter spacecraft produced a wealth of photos of surface features, many of which can be paired to form stereo images, allowing the earth-bound student of Mars to examine the subject in 3-D. This catalog is a technical guide to the use of stereo coverage within the complex Viking imaging data set. Since that data set is still growing (January 1980, about 3 1/2 years after the mission began), a second edition of this catalog is planned, with completion expected about November 1980.

  4. Slant Perception Under Stereomicroscopy.

    PubMed

    Horvath, Samantha; Macdonald, Kori; Galeotti, John; Klatzky, Roberta L

    2017-11-01

Objective These studies used threshold and slant-matching tasks to assess and quantitatively measure human perception of 3-D planar images viewed through a stereomicroscope. The results are intended for use in developing augmented-reality surgical aids. Background Substantial research demonstrates that slant perception is performed with high accuracy from monocular and binocular cues, but less research concerns the effects of magnification. Viewing through a microscope affects the utility of monocular and stereo slant cues, but its impact is as yet unknown. Method Participants performed a threshold slant-detection task and matched the slant of a tool to a surface. Different stimuli and monocular versus binocular viewing conditions were implemented to isolate stereo cues alone, stereo with perspective cues, the accommodation cue alone, and cues intrinsic to optical-coherence-tomography images. Results At a magnification of 5x, slant thresholds with stimuli providing stereo cues approximated those reported for direct viewing, about 12°. Most participants (75%) who passed a stereoacuity pretest could match a tool to the slant of a surface viewed with stereo at 5x magnification, with a mean compressive error of about 20% for optimized surfaces. Slant matching to optical coherence tomography images of the cornea viewed under the microscope was also demonstrated. Conclusion Despite the distortions and cue loss introduced by viewing under the stereomicroscope, most participants were able to detect and interact with slanted surfaces. Application The experiments demonstrated sensitivity to surface slant that supports the development of augmented-reality aids for microscope-aided surgery.

  5. Television monitor field shifter and an opto-electronic method for obtaining a stereo image of optimal depth resolution and reduced depth distortion on a single screen

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1989-01-01

A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.

  6. Orthographic Stereo Correlator on the Terrain Model for Apollo Metric Images

    NASA Technical Reports Server (NTRS)

    Kim, Taemin; Husmann, Kyle; Moratto, Zachary; Nefian, Ara V.

    2011-01-01

A stereo correlation method on the object domain is proposed to generate accurate and dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce high-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. Given the camera parameters of an image pair from bundle adjustment in ASP, a correlation window is defined on the terrain, with a predefined surface normal at each post, rather than in the image domain. The squared error of the back-projected images on the local terrain is minimized with respect to the post elevation. This one-dimensional optimization is solved efficiently and improves the accuracy of the elevation estimate.
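The single-variable optimization described above, squared back-projection error minimized over the post elevation, can be sketched with a golden-section search. The `ssd` cost below is a hypothetical stand-in for the actual back-projection error, not the ASP implementation:

```python
import math

def golden_section_min(cost, lo, hi, tol=1e-6):
    """Minimize a unimodal 1-D cost over [lo, hi] via golden-section search."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while abs(b - a) > tol:
        if cost(c) < cost(d):
            b, d = d, c          # minimum lies in [a, d]
            c = b - phi * (b - a)
        else:
            a, c = c, d          # minimum lies in [c, b]
            d = a + phi * (b - a)
    return (a + b) / 2

# Hypothetical stand-in for the squared error between the two back-projected
# images at post elevation h (true minimum placed at h = 152.0 m here).
ssd = lambda h: (h - 152.0) ** 2 + 3.0
h_best = golden_section_min(ssd, 0.0, 500.0)
```

Because the cost is evaluated along a single elevation axis per post, each post can be refined independently and cheaply.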

  7. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  8. Optic probe for multiple angle image capture and optional stereo imaging

    DOEpatents

    Malone, Robert M.; Kaufman, Morris I.

    2016-11-29

    A probe including a multiple lens array is disclosed to measure velocity distribution of a moving surface along many lines of sight. Laser light, directed to the moving surface is reflected back from the surface and is Doppler shifted, collected into the array, and then directed to detection equipment through optic fibers. The received light is mixed with reference laser light and using photonic Doppler velocimetry, a continuous time record of the surface movement is obtained. An array of single-mode optical fibers provides an optic signal to the multiple lens array. Numerous fibers in a fiber array project numerous rays to establish many measurement points at numerous different locations. One or more lens groups may be replaced with imaging lenses so a stereo image of the moving surface can be recorded. Imaging a portion of the surface during initial travel can determine whether the surface is breaking up.
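The Doppler mixing step lends itself to a one-line worked example: in homodyne photonic Doppler velocimetry, a surface moving at velocity v shifts the return so that mixing with the reference laser produces a beat frequency f = 2v/λ. A minimal sketch, with an assumed 1550 nm (telecom-band) laser wavelength, which the patent abstract does not specify:

```python
# Homodyne PDV: mixing the Doppler-shifted return with the reference laser
# yields a beat frequency f = 2 v / lambda, so v = lambda * f / 2.
LAMBDA = 1550e-9  # m, assumed laser wavelength

def velocity_from_beat(f_beat_hz):
    """Surface velocity (m/s) inferred from the measured beat frequency."""
    return LAMBDA * f_beat_hz / 2.0

v = velocity_from_beat(1.0e9)  # a 1 GHz beat corresponds to 775 m/s
```

Tracking how the beat frequency evolves in the digitized signal gives the continuous time record of surface velocity mentioned in the abstract.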

  9. Key characteristics of specular stereo

    PubMed Central

    Muryy, Alexander A.; Fleming, Roland W.; Welchman, Andrew E.

    2014-01-01

    Because specular reflection is view-dependent, shiny surfaces behave radically differently from matte, textured surfaces when viewed with two eyes. As a result, specular reflections pose substantial problems for binocular stereopsis. Here we use a combination of computer graphics and geometrical analysis to characterize the key respects in which specular stereo differs from standard stereo, to identify how and why the human visual system fails to reconstruct depths correctly from specular reflections. We describe rendering of stereoscopic images of specular surfaces in which the disparity information can be varied parametrically and independently of monocular appearance. Using the generated surfaces and images, we explain how stereo correspondence can be established with known and unknown surface geometry. We show that even with known geometry, stereo matching for specular surfaces is nontrivial because points in one eye may have zero, one, or multiple matches in the other eye. Matching features typically yield skew (nonintersecting) rays, leading to substantial ortho-epipolar components to the disparities, which makes deriving depth values from matches nontrivial. We suggest that the human visual system may base its depth estimates solely on the epipolar components of disparities while treating the ortho-epipolar components as a measure of the underlying reliability of the disparity signals. Reconstructing virtual surfaces according to these principles reveals that they are piece-wise smooth with very large discontinuities close to inflection points on the physical surface. Together, these distinctive characteristics lead to cues that the visual system could use to diagnose specular reflections from binocular information. PMID:25540263
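The epipolar/ortho-epipolar decomposition of disparity described above amounts to a simple vector projection; the pixel coordinates and epipolar direction below are illustrative, not taken from the paper:

```python
import numpy as np

def split_disparity(xl, xr, epi_dir=(1.0, 0.0)):
    """Split the 2-D disparity vector (xr - xl) into its component along the
    epipolar direction (used for depth) and the residual ortho-epipolar
    component (interpreted here as a reliability measure)."""
    d = np.asarray(xr, float) - np.asarray(xl, float)
    e = np.asarray(epi_dir, float)
    e = e / np.linalg.norm(e)
    epi = float(d @ e)                           # signed epipolar component
    ortho = float(np.linalg.norm(d - epi * e))   # ortho-epipolar magnitude
    return epi, ortho

# Illustrative match: feature at (100, 50) in the left eye, (112, 53) in the right.
epi, ortho = split_disparity((100.0, 50.0), (112.0, 53.0))
```

For matte surfaces the ortho-epipolar component is near zero; for specular matches from skew rays it can be large, which is the diagnostic cue the paper proposes.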

  10. Surface Stereo Imager on Mars, Side View

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image is a view of NASA's Phoenix Mars Lander's Surface Stereo Imager (SSI) as seen by the lander's Robotic Arm Camera. This image was taken on the afternoon of the 116th Martian day, or sol, of the mission (September 22, 2008). The mast-mounted SSI, which provided the images used in the 360 degree panoramic view of Phoenix's landing site, is about 4 inches tall and 8 inches long.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  11. An Integrated Photogrammetric and Photoclinometric Approach for Pixel-Resolution 3d Modelling of Lunar Surface

    NASA Astrophysics Data System (ADS)

    Liu, W. C.; Wu, B.

    2018-04-01

High-resolution 3D modelling of the lunar surface is important for lunar scientific research and exploration missions. Photogrammetry is known for 3D mapping and modelling from a pair of stereo images based on dense image matching. However, dense matching may fail in poorly textured areas and in situations where the image pair has large illumination differences. As a result, the actual achievable spatial resolution of the 3D model from photogrammetry is limited by the performance of dense image matching. On the other hand, photoclinometry (i.e., shape from shading) is characterised by its ability to recover pixel-wise surface shapes based on image intensity and imaging conditions such as illumination and viewing directions. More robust shape reconstruction through photoclinometry can be achieved by incorporating images acquired under different illumination conditions (i.e., photometric stereo). Introducing photoclinometry into photogrammetric processing can therefore effectively increase the achievable resolution of the mapping result while maintaining its overall accuracy. This research presents an integrated photogrammetric and photoclinometric approach for pixel-resolution 3D modelling of the lunar surface. First, photoclinometry is integrated with stereo image matching to create robust and spatially well-distributed dense conjugate points. Then, based on the 3D point cloud derived from photogrammetric processing of the dense conjugate points, photoclinometry is further introduced to derive the 3D positions of the unmatched points and to refine the final point cloud. The approach is able to produce one 3D point for each image pixel within the overlapping area of the stereo pair, so as to obtain pixel-resolution 3D models. Experiments using Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) images show the superior performance of the approach compared with the traditional photogrammetric technique.
The results and findings from this research contribute to optimal exploitation of image information for high-resolution 3D modelling of the lunar surface, which is of significance for the advancement of lunar and planetary mapping.
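The photoclinometric core of such an approach rests on an image irradiance model relating surface orientation to observed brightness. A minimal sketch, assuming the common Lambertian model I = albedo · max(0, n·s) (the abstract does not state which reflectance model the authors use):

```python
import numpy as np

def lambertian_intensity(normal, sun_dir, albedo=1.0):
    """Predicted image intensity of a surface facet under the Lambertian
    model often used in photoclinometry: I = albedo * max(0, n . s)."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    s = np.asarray(sun_dir, float)
    s = s / np.linalg.norm(s)
    return albedo * max(0.0, float(n @ s))

# A facet facing the sun is brightest; tilting it away dims it.
i_flat = lambertian_intensity((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))   # 1.0
i_tilt = lambertian_intensity((0.0, 0.0, 1.0), (1.0, 0.0, 1.0))   # cos 45 deg
```

Inverting this relation per pixel, given the known illumination direction, is what lets shape from shading fill in elevations between the matched photogrammetric points.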

  12. Photonic Doppler velocimetry lens array probe incorporating stereo imaging

    DOEpatents

    Malone, Robert M.; Kaufman, Morris I.

    2015-09-01

    A probe including a multiple lens array is disclosed to measure velocity distribution of a moving surface along many lines of sight. Laser light, directed to the moving surface is reflected back from the surface and is Doppler shifted, collected into the array, and then directed to detection equipment through optic fibers. The received light is mixed with reference laser light and using photonic Doppler velocimetry, a continuous time record of the surface movement is obtained. An array of single-mode optical fibers provides an optic signal to the multiple lens array. Numerous fibers in a fiber array project numerous rays to establish many measurement points at numerous different locations. One or more lens groups may be replaced with imaging lenses so a stereo image of the moving surface can be recorded. Imaging a portion of the surface during initial travel can determine whether the surface is breaking up.

  13. Point Cloud and Digital Surface Model Generation from High Resolution Multiple View Stereo Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Gong, K.; Fritsch, D.

    2018-05-01

Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized public multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore methods that can generate accurate digital surface models from a large number of high resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a pre-processing step, we filter all the possible image pairs according to incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching, and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply the median filter to generate the fused point cloud and DSM. By comparing with the reference LiDAR DSM, the accuracy, completeness and robustness are evaluated. The results show that the point cloud reconstructs the surface with small structures and that the fused DSM generated by our pipeline is accurate and robust.
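The fusion step can be sketched as a per-cell median over the stack of aligned per-pair DSMs, which is what makes the fused surface robust to outliers from bad matches. The toy grids below are illustrative only; NaN marks cells with no data:

```python
import numpy as np

def fuse_dsms(dsm_stack):
    """Fuse per-pair DSMs (already aligned to a common grid; NaN = no data)
    into one surface by taking the per-cell median, ignoring missing cells."""
    return np.nanmedian(np.stack(dsm_stack), axis=0)

# Three toy 2x2 DSMs from different image pairs (elevations in metres).
a = np.array([[10.0, np.nan], [12.0, 9.0]])
b = np.array([[11.0, 7.0],    [50.0, 9.5]])   # 50.0 is a matching blunder
c = np.array([[10.5, 7.4],    [12.2, 8.8]])
fused = fuse_dsms([a, b, c])
```

Note that the blunder of 50.0 m in cell (1, 0) is rejected by the median, while the NaN in cell (0, 1) is simply skipped.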

  14. Interactive stereo electron microscopy enhanced with virtual reality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E.Wes; Bastacky, S.Jacob; Schwartz, Kenneth S.

    2001-12-17

An analytical system is presented that is used to take measurements of objects perceived in stereo image pairs obtained from a scanning electron microscope (SEM). Our system operates by presenting a single stereo view that contains stereo image data obtained from the SEM, along with geometric representations of two types of virtual measurement instruments, a ''protractor'' and a ''caliper''. The measurements obtained from this system are an integral part of a medical study evaluating surfactant, a liquid coating the inner surface of the lung which makes possible the process of breathing. Measurements of the curvature and contact angle of submicron diameter droplets of a fluorocarbon deposited on the surface of airways are performed in order to determine the surface tension of the air/liquid interface. This approach has been extended to a microscopic level from the techniques of traditional surface science by measuring submicrometer rather than millimeter diameter droplets, as well as the lengths and curvature of cilia responsible for movement of the surfactant, the airway's protective liquid blanket. An earlier implementation of this approach for taking angle measurements from objects perceived in stereo image pairs using a virtual protractor is extended in this paper to include distance measurements and to use a unified view model. The system is built around a unified view model that is derived from microscope-specific parameters, such as focal length, visible area and magnification. The unified view model ensures that the underlying view models and resultant binocular parallax cues are consistent between synthetic and acquired imagery. When the view models are consistent, it is possible to take measurements of features that are not constrained to lie within the projection plane. The system is first calibrated using non-clinical data of known size and resolution.
Using the SEM, stereo image pairs of grids and spheres of known resolution are created to calibrate the measurement system. After calibration, the system is used to take distance and angle measurements of clinical specimens.

  15. The MVACS Robotic Arm Camera

    NASA Astrophysics Data System (ADS)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  16. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  17. Venus surface roughness and Magellan stereo data

    NASA Technical Reports Server (NTRS)

    Maurice, Kelly E.; Leberl, Franz W.; Norikane, L.; Hensley, Scott

    1994-01-01

Presented are results of some studies to develop tools useful for the analysis of Venus surface shape and its roughness. Actual work was focused on Maxwell Montes. The analyses employ data acquired by means of NASA's Magellan satellite. The work is primarily concerned with deriving measurements of the Venusian surface using Magellan stereo SAR. Roughness was considered by means of a theoretical analysis based on digital elevation models (DEMs), on single Magellan radar images combined with radiometer data, and on the use of multiple overlapping Magellan radar images from cycles 1, 2, and 3, again combined with collateral radiometer data.

  18. Restoration Of MEX SRC Images For Improved Topography: A New Image Product

    NASA Astrophysics Data System (ADS)

    Duxbury, T. C.

    2012-12-01

    Surface topography is an important constraint when investigating the evolution of solar system bodies. Topography is typically obtained from stereo photogrammetric or photometric (shape from shading) analyses of overlapping / stereo images and from laser / radar altimetry data. The ESA Mars Express Mission [1] carries a Super Resolution Channel (SRC) as part of the High Resolution Stereo Camera (HRSC) [2]. The SRC can build up overlapping / stereo coverage of Mars, Phobos and Deimos by viewing the surfaces from different orbits. The derivation of high precision topography data from the SRC raw images is degraded because the camera is out of focus. The point spread function (PSF) is multi-peaked, covering tens of pixels. After registering and co-adding hundreds of star images, an accurate SRC PSF was reconstructed and is being used to restore the SRC images to near blur free quality. The restored images offer a factor of about 3 in improved geometric accuracy as well as identifying the smallest of features to significantly improve the stereo photogrammetric accuracy in producing digital elevation models. The difference between blurred and restored images provides a new derived image product that can provide improved feature recognition to increase spatial resolution and topographic accuracy of derived elevation models. Acknowledgements: This research was funded by the NASA Mars Express Participating Scientist Program. [1] Chicarro, et al., ESA SP 1291(2009) [2] Neukum, et al., ESA SP 1291 (2009). A raw SRC image (h4235.003) of a Martian crater within Gale crater (the MSL landing site) is shown in the upper left and the restored image is shown in the lower left. A raw image (h0715.004) of Phobos is shown in the upper right and the difference between the raw and restored images, a new derived image data product, is shown in the lower right. 
The lower images, resulting from an image restoration process, significantly improve feature recognition for improved derived topographic accuracy.

  19. GLD100 - Lunar topography from LROC WAC stereo

    NASA Astrophysics Data System (ADS)

    Scholten, F.; Oberst, J.; Robinson, M. S.

    2011-10-01

The LROC WAC instrument of the LRO mission provides substantial stereo image data from adjacent orbits. Multiple coverage of the entire surface of the Moon at a mean ground scale of 75 m/pxl has already been achieved within the first two years of the mission. We applied photogrammetric stereo processing methods for the derivation of a 100 m raster DTM (digital terrain model), called GLD100, from several tens of thousands of stereo models. The GLD100 covers the lunar surface between 80° northern and southern latitude. Polar regions are excluded because of poor illumination and stereo conditions. Vertical differences of the GLD100 from altimetry data from the LRO LOLA instrument are small; the mean deviation is typically about 20 m, without systematic lateral or vertical offsets.

  20. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system converts a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto the two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high-temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system needs only a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.
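At the image-processing level, the pseudo-stereo trick of recording two virtual-camera views on the two halves of one sensor reduces to splitting each frame before running regular stereo-DIC. A minimal sketch (the toy array stands in for a real camera frame):

```python
import numpy as np

def split_views(frame):
    """Split a single four-mirror-adapter frame into the two virtual-camera
    views recorded on the left and right halves of the sensor."""
    h, w = frame.shape[:2]
    return frame[:, : w // 2], frame[:, w // 2 :]

# Toy 4x6 'sensor' so the split is easy to inspect.
frame = np.arange(24).reshape(4, 6)
left, right = split_views(frame)
```

The two half-frames are then treated exactly like a synchronized stereo pair, which is why no second camera or hardware trigger is needed.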

  1. Low-cost, portable, robust and high-resolution single-camera stereo-DIC system and its application in high-temperature deformation measurements

    NASA Astrophysics Data System (ADS)

    Chi, Yuxi; Yu, Liping; Pan, Bing

    2018-05-01

    A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.

  2. Syntactic Approach To Geometric Surface Shell Determination

    NASA Astrophysics Data System (ADS)

    DeGryse, Donald G.; Panton, Dale J.

    1980-12-01

Autonomous terminal homing of a smart missile requires a stored reference scene of the target for which the missile is destined. The reference scene is produced from stereo source imagery by deriving a three-dimensional model containing cultural structures such as buildings, towers, bridges, and tanks. This model is obtained by the precise matching of cultural features from one image of the stereo pair to the other. In the past, this stereo matching process has relied heavily on local edge operators and a gray scale matching metric. The processing is performed line by line over the imagery, and the amount of geometric control is minimal. As a result, the gross structure of the scene is determined, but the derived three-dimensional data are noisy, oscillatory, and at times significantly inaccurate. This paper discusses new concepts that are currently being developed to stabilize this geometric reference preparation process. The new concepts involve the use of a structural syntax which will be used as a geometric constraint on automatic stereo matching. The syntax arises from the stereo configuration of the imaging platforms at the time of exposure and from knowledge of how various cultural structures are constructed. The syntax is used to parse a scene in terms of its cultural surfaces and to dictate to the matching process the allowable relative positions and orientations of surface edges in the image planes. Using the syntax, extensive searches using a gray scale matching metric are reduced.

  3. Development of collision avoidance system for useful UAV applications using image sensors with laser transmitter

    NASA Astrophysics Data System (ADS)

    Cheong, M. K.; Bahiki, M. R.; Azrad, S.

    2016-10-01

The main goal of this study is to demonstrate an approach to achieving collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high-contrast tracking spot for depth calculation using common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and a control algorithm was designed to manipulate the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of an Optitrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithm based on image sensors. In the results, the UAV was able to hover with fairly good accuracy during both static and dynamic short-range collision avoidance. Collision avoidance performance of the UAV was better with obstacles of dull surfaces in comparison to shiny surfaces. The minimum collision avoidance distance achievable was 0.4 m. The approach was shown to be suitable for short-range collision avoidance.
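The triangulated depth of the tracked laser spot follows the standard stereo relation Z = f·B/d, with focal length f in pixels, baseline B, and disparity d in pixels. A minimal sketch with assumed focal-length and baseline values, which the abstract does not report:

```python
# Classic stereo triangulation for the tracked laser spot:
# depth Z = f * B / d, where f is the focal length in pixels,
# B the camera baseline in metres, and d the disparity in pixels.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth (m) of the tracked point from its stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# Assumed values: f = 700 px, B = 0.12 m; a 210 px disparity gives 0.4 m,
# matching the minimum avoidance distance reported in the abstract.
z = depth_from_disparity(700.0, 0.12, 210.0)
```

The inverse relation between depth and disparity is why accuracy degrades quickly at range: beyond short distances the disparity of the laser spot shrinks below reliable tracking resolution.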

  4. Space-time measurements of oceanic sea states

    NASA Astrophysics Data System (ADS)

    Fedele, Francesco; Benetazzo, Alvise; Gallego, Guillermo; Shih, Ping-Chang; Yezzi, Anthony; Barbariol, Francesco; Ardhuin, Fabrice

    2013-10-01

Stereo video techniques are effective for estimating the space-time wave dynamics over an area of the ocean. Indeed, a stereo camera view allows retrieval of both spatial and temporal data whose statistical content is richer than that of time series data retrieved from point wave probes. We present an application of the Wave Acquisition Stereo System (WASS) for the analysis of offshore video measurements of gravity waves in the Northern Adriatic Sea and near the southern seashore of the Crimean peninsula, in the Black Sea. We use classical epipolar techniques to reconstruct the sea surface from the stereo pairs sequentially in time, viz. as a sequence of spatial snapshots. We also present a variational approach that exploits the entire image data set, providing global space-time imaging of the sea surface, viz. simultaneous reconstruction of several spatial snapshots of the surface in order to guarantee continuity of the sea surface in both space and time. Analysis of the WASS measurements shows that the sea surface can be accurately estimated in space and time together, yielding associated directional spectra and wave statistics at a point in time that agree well with probabilistic models. In particular, WASS stereo imaging is able to capture typical features of the wave surface, especially the crest-to-trough asymmetry due to second order nonlinearities, and the observed shapes of large waves are fairly well described by theoretical models based on the theory of quasi-determinism (Boccotti, 2000). Further, we investigate space-time extremes of the observed stationary sea states, viz. the largest surface wave heights expected over a given area during the sea state duration. The WASS analysis provides the first experimental proof that a space-time extreme is generally larger than that observed in time via point measurements, in agreement with predictions based on stochastic theories for global maxima of Gaussian fields.
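The claim that a space-time extreme exceeds the maximum seen by a single point probe can be illustrated directly, since a point time series is a subset of the full space-time field. A toy sketch with a synthetic Gaussian field (not WASS data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gaussian 'sea surface' sampled on a 32x32 grid over 200 time steps
# (standing in for a reconstructed WASS space-time elevation field).
field = rng.standard_normal((200, 32, 32))   # axes: (time, y, x)

point_max = field[:, 16, 16].max()   # maximum seen by one fixed 'wave probe'
space_time_max = field.max()         # maximum over the whole area and duration
```

By construction `space_time_max >= point_max` for every realization; stochastic theories for global maxima of Gaussian fields quantify how much larger the areal extreme is expected to be.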

  5. Eruptive Trio Seen by STEREO

    NASA Image and Video Library

    2017-12-08

NASA image acquired May 1, 2010. As an active region rotated into view, it blew out three relatively small eruptions over about two days (Apr. 30 - May 2) as STEREO (Ahead) observed in extreme UV light. The first one was the largest and exhibited a pronounced twisting motion (shown in the still from May 1, 2010). The plasma, not far above the Sun's surface in these images, is ionized helium heated to about 60,000 degrees. Note, too, the movement of plasma flowing along magnetic field lines that extend out beyond and loop back into the Sun's surface. Such activity occurs every day and is part of the dynamism of the changing Sun. Credit: NASA/GSFC/STEREO. To learn more about STEREO go to: soho.nascom.nasa.gov/home.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  6. Surface Stereo Imager on Mars, Face-On

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image is a view of NASA's Phoenix Mars Lander's Surface Stereo Imager (SSI) as seen by the lander's Robotic Arm Camera. This image was taken on the afternoon of the 116th Martian day, or sol, of the mission (September 22, 2008). The mast-mounted SSI, which provided the images used in the 360 degree panoramic view of Phoenix's landing site, is about 4 inches tall and 8 inches long. The two 'eyes' of the SSI seen in this image can take photos to create three-dimensional views of the landing site.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  7. Simultaneous glacier surface elevation and flow velocity mapping from cross-track pushbroom satellite Imagery

    NASA Astrophysics Data System (ADS)

    Noh, M. J.; Howat, I. M.

    2017-12-01

Glaciers and ice sheets are changing rapidly. Digital Elevation Models (DEMs) and Velocity Maps (VMs) obtained from repeat satellite imagery provide critical measurements of changes in glacier dynamics and mass balance over large, remote areas. DEMs created from stereopairs obtained during the same satellite pass through sensor re-pointing (i.e. "in-track stereo") have been most commonly used. In-track stereo has the advantage of minimizing the time separation and, thus, surface motion between image acquisitions, so that the ice surface can be assumed motionless when collocating pixels between image pairs. Since the DEM extraction process assumes that all motion between collocated pixels is due to parallax or sensor model error, significant ice motion results in DEM quality loss or failure. In-track stereo, however, puts a greater demand on satellite tasking resources and, therefore, is much less abundant than single-scan imagery. Thus, if ice surface motion can be mitigated, the ability to extract surface elevation measurements from pairs of repeat single-scan "cross-track" imagery would greatly increase the extent and temporal resolution of ice surface change. Additionally, the ice motion measured by the DEM extraction process would itself provide a useful velocity measurement. We develop a novel algorithm for generating high-quality DEMs and VMs from cross-track image pairs without any prior information using the Surface Extraction from TIN-based Searchspace Minimization (SETSM) algorithm and its sensor model bias correction capabilities. Using a test suite of repeat, single-scan imagery from WorldView and QuickBird sensors collected over fast-moving outlet glaciers, we develop a method by which RPC biases between images are first calculated and removed over ice-free surfaces. Subpixel displacements over the ice are then constrained and used to correct the parallax estimate.
Initial tests yield DEM results with the same quality as in-track stereo for cases where snowfall has not occurred between the two images and when the images have similar ground sample distances. The resulting velocity map also closely matches independent measurements.

  8. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo.

    PubMed

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-03-02

Multi-spectral photometric stereo can recover pixel-wise surface normals from a single RGB image. The difficulty lies in the fact that the intensity in each channel is an entanglement of illumination, albedo, and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using a deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict the initial normals of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
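The relation that the optimization above refines is the Lambertian photometric-stereo model, in which each channel intensity is the albedo-scaled dot product of a known light direction with the surface normal. A minimal sketch of the classic three-light solve (illustrative light directions; this is the textbook formulation, not the paper's CNN pipeline):

```python
import numpy as np

# Lambertian photometric stereo: per pixel, i_k = albedo * dot(l_k, n).
# With three known light directions the scaled normal is b = L^{-1} i,
# then n = b / |b| and albedo = |b|.
L = np.array([[0., 0., 1.],
              [1., 0., 1.],
              [0., 1., 1.]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)   # unit light directions

n_true = np.array([0.2, -0.1, 1.0])
n_true /= np.linalg.norm(n_true)
albedo_true = 0.8
i = albedo_true * L @ n_true                       # simulated channel intensities

b = np.linalg.solve(L, i)                          # scaled normal per pixel
albedo = np.linalg.norm(b)
n = b / albedo
print(np.allclose(n, n_true), np.allclose(albedo, albedo_true))  # True True
```

In the multi-spectral setting the three "lights" are the color channels, whose effective directions are entangled with camera response — which is exactly why the paper needs an initial normal estimate.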

  9. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo

    PubMed Central

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-01-01

Multi-spectral photometric stereo can recover pixel-wise surface normals from a single RGB image. The difficulty lies in the fact that the intensity in each channel is an entanglement of illumination, albedo, and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using a deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict the initial normals of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods. PMID:29498703

  10. Three-dimensional surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, Bugao; Yu, Wurong; Yao, Ming; Pepper, M. Reese; Freeland-Graves, Jeanne H.

    2009-10-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable, and economical tool for assessment of this condition. Three-dimensional (3-D) body surface imaging has emerged as an exciting technology for the estimation of body composition. We present a new 3-D body imaging system, which is designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology is used to satisfy the requirement for a simple hardware setup and fast image acquisition. The portability of the system is created via a two-stand configuration, and the accuracy of body volume measurements is improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3-D body imaging. Body measurement functions dedicated to body composition assessment also are developed. The overall performance of the system is evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  11. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  12. Pluto in 3-D

    NASA Image and Video Library

    2015-10-23

Global stereo mapping of Pluto's surface is now possible, as images taken from multiple directions are downlinked from NASA's New Horizons spacecraft. Stereo images will eventually provide an accurate topographic map of most of the hemisphere of Pluto seen by New Horizons during the July 14 flyby, which will be key to understanding Pluto's geological history. This example, which requires red/blue stereo glasses for viewing, shows a region 180 miles (300 kilometers) across, centered near longitude 130 E, latitude 20 N (the red square in the global context image). North is to the upper left. The image shows an ancient, heavily cratered region of Pluto, dotted with low hills and cut by deep fractures, which indicate extension of Pluto's crust. Analysis of these stereo images shows that the steep fracture in the upper left of the image is about 1 mile (1.6 kilometers) deep, and the craters in the lower right part of the image are up to 1.3 miles (2.1 kilometers) deep. The smallest visible details are about 0.4 miles (0.6 kilometers) across. http://photojournal.jpl.nasa.gov/catalog/PIA20032

  13. Forest Biomass Mapping from Stereo Imagery and Radar Data

    NASA Astrophysics Data System (ADS)

    Sun, G.; Ni, W.; Zhang, Z.

    2013-12-01

Both InSAR and lidar data provide critical information on forest vertical structure, which is essential for regional mapping of biomass. However, the regional application of these data is limited by their availability and acquisition costs. Some researchers have demonstrated the potential of stereo imagery for the estimation of forest height. Most of these studies were conducted on aerial images or spaceborne images with very high resolutions (~0.5 m). Spaceborne stereo imagers with global coverage, such as ALOS/PRISM, have coarser spatial resolutions (2-3 m) to achieve a wider swath. The features of stereo images are directly affected by resolution, and the approaches used by most researchers need to be adjusted for stereo imagery with lower resolutions. This study concentrated on analyzing the features of point clouds synthesized from multi-view stereo imagery over forested areas. Small-footprint lidar and lidar waveform data were used as references. The triplets of ALOS/PRISM data form three pairs (forward/nadir, backward/nadir and forward/backward) of stereo images. Each pair of stereo images can be used to generate points (pixels) with 3D coordinates. By carefully co-registering the points from the three pairs of stereo images, a point cloud was generated. The height of each point above the ground surface was then calculated using the DEM from the USGS National Elevation Dataset as the ground surface elevation. The height data were gridded into pixels of different sizes and the histograms of the points within a pixel were analyzed. The average height of the points within a pixel was used as the height of the pixel to generate a canopy height map. The results showed that the synergy of point clouds from different views was necessary: it increased the point density so that the point cloud could detect the vertical structure of sparse and unclosed forests.
The top layer of a multi-layered forest could be captured, but dense forest prevented the stereo imagery from seeing through to lower layers. The canopy height map exhibited spatial patterns of roads, forest edges and patches. A linear regression showed that the canopy height map had a good correlation with RH50 of the LVIS data at 30 m pixel size, with a gain of 1.04, a bias of 4.3 m and an R2 of 0.74 (Fig. 1). The canopy height map from PRISM and dual-pol PALSAR data were then used together to map biomass in our study area near Howland, Maine, and the results were evaluated against a biomass map generated independently from LVIS waveform data. The results showed that adding the CHM from PRISM significantly improved biomass accuracy and raised the biomass saturation level of L-band SAR data in forest biomass mapping.
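The gridding step — averaging the heights of the points falling in each pixel to form a canopy height map — can be sketched as follows. Coordinates and pixel size below are illustrative, not the authors' data or code:

```python
import numpy as np

def grid_canopy_height(x, y, h, pixel=30.0):
    """Grid point-cloud heights-above-ground into a mean-height raster.
    x, y: map coordinates (m); h: height above the reference DEM (m)."""
    col = ((x - x.min()) // pixel).astype(int)
    row = ((y - y.min()) // pixel).astype(int)
    sums = np.zeros((row.max() + 1, col.max() + 1))
    counts = np.zeros_like(sums)
    np.add.at(sums, (row, col), h)          # accumulate heights per pixel
    np.add.at(counts, (row, col), 1)        # count points per pixel
    with np.errstate(invalid="ignore"):
        return sums / counts                # NaN where a pixel has no points

# Toy example: four points falling into two 30 m pixels.
x = np.array([5.0, 10.0, 40.0, 50.0])
y = np.array([5.0, 20.0, 10.0, 25.0])
h = np.array([10.0, 14.0, 20.0, 22.0])
chm = grid_canopy_height(x, y, h, pixel=30.0)
print(chm)  # [[12. 21.]]
```

Per-pixel height histograms, as analyzed in the study, would use the same binning with `np.histogram` on each pixel's point subset instead of the mean.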

  14. MISR Stereo-heights of Grassland Fire Smoke Plumes in Australia

    NASA Astrophysics Data System (ADS)

    Mims, S. R.; Kahn, R. A.; Moroney, C. M.; Gaitley, B. J.; Nelson, D. L.; Garay, M. J.

    2008-12-01

    Plume heights from wildfires are used in climate modeling to predict and understand trends in aerosol transport. This study examines whether smoke from grassland fires in the desert region of Western and central Australia ever rises above the relatively stable atmospheric boundary layer and accumulates in higher layers of relative atmospheric stability. Several methods for deriving plume heights from the Multi-angle Imaging SpectroRadiometer (MISR) instrument are examined for fire events during the summer 2000 and 2002 burning seasons. Using MISR's multi-angle stereo-imagery from its three near-nadir-viewing cameras, an automatic algorithm routinely derives the stereo-heights above the geoid of the level-of-maximum-contrast for the entire global data set, which often correspond to the heights of clouds and aerosol plumes. Most of the fires that occur in the cases studied here are small, diffuse, and difficult to detect. To increase the signal from these thin hazes, the MISR enhanced stereo product that computes stereo heights from the most steeply viewing MISR cameras is used. For some cases, a third approach to retrieving plume heights from MISR stereo imaging observations, the MISR Interactive Explorer (MINX) tool, is employed to help differentiate between smoke and cloud. To provide context and to search for correlative factors, stereo-heights are combined with data providing fire strength from the Moderate-resolution Imaging Spectroradiometer (MODIS) instrument, atmospheric structure from the NCEP/NCAR Reanalysis Project, surface cover from the Australia National Vegetation Information System, and forward and backward trajectories from the NOAA HYSPLIT model. Although most smoke plumes concentrate in the near-surface boundary layer, as expected, some appear to rise higher. These findings suggest that a closer examination of grassland fire energetics may be warranted.

  15. Multiview photometric stereo.

    PubMed

    Hernández Esteban, Carlos; Vogiatzis, George; Cipolla, Roberto

    2008-03-01

    This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialise a multi-view photometric stereo scheme to obtain a closed surface reconstruction. There are two main contributions in this paper: Firstly we describe a robust technique to estimate light directions and intensities and secondly, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and hence allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. Here, a quantitative evaluation of the algorithm on synthetic data is presented together with complete reconstructions of challenging real objects. Finally, we show experimentally how even in the case of highly textured objects, this technique can greatly improve on correspondence-based multi-view stereo results.

  16. A digital system for surface reconstruction

    USGS Publications Warehouse

    Zhou, Weiyang; Brock, Robert H.; Hopkins, Paul F.

    1996-01-01

    A digital photogrammetric system, STEREO, was developed to determine three dimensional coordinates of points of interest (POIs) defined with a grid on a textureless and smooth-surfaced specimen. Two CCD cameras were set up with unknown orientation and recorded digital images of a reference model and a specimen. Points on the model were selected as control or check points for calibrating or assessing the system. A new algorithm for edge-detection called local maximum convolution (LMC) helped extract the POIs from the stereo image pairs. The system then matched the extracted POIs and used a least squares “bundle” adjustment procedure to solve for the camera orientation parameters and the coordinates of the POIs. An experiment with STEREO found that the standard deviation of the residuals at the check points was approximately 24%, 49% and 56% of the pixel size in the X, Y and Z directions, respectively. The average of the absolute values of the residuals at the check points was approximately 19%, 36% and 49% of the pixel size in the X, Y and Z directions, respectively. With the graphical user interface, STEREO demonstrated a high degree of automation and its operation does not require special knowledge of photogrammetry, computers or image processing.

  17. Quantifying cortical surface harmonic deformation with stereovision during open cranial neurosurgery

    NASA Astrophysics Data System (ADS)

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Paulsen, Keith D.

    2012-02-01

Cortical surface harmonic motion is commonly observed during image-guided open cranial neurosurgery. Recently, we quantified cortical surface deformation noninvasively, synchronized with blood pressure pulsation (BPP), from a sequence of stereo image pairs using optical flow motion tracking. In three subjects, we found that the average cortical surface displacement can reach more than 1 mm, with in-plane principal strains of up to 7% relative to the first image pair. In addition, the temporal changes in deformation and strain were in concert with BPP and patient respiration [1]. However, because deformation was essentially computed relative to an arbitrary reference, comparing cortical surface deformation at different times was not possible. In this study, we extend the technique developed earlier by establishing a more reliable reference profile of the cortical surface for each sequence of stereo image acquisitions. Specifically, a fast Fourier transform (FFT) was applied to the dynamic cortical surface deformation, and the fundamental frequencies corresponding to patient respiration and BPP were identified and used to determine the number of image acquisitions for use in averaging cortical surface images. This technique is important because it potentially allows in vivo characterization of soft tissue biomechanical properties using intraoperative stereovision and motion tracking.
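Identifying the fundamental frequencies of respiration and BPP from a displacement trace via FFT can be sketched as below. The sampling rate and the two frequencies are illustrative assumptions, not values from the study:

```python
import numpy as np

# Find the dominant temporal frequency of a displacement trace with an FFT,
# as done to separate respiration from blood-pressure pulsation (BPP).
fs = 10.0                                       # assumed acquisition rate, Hz
t = np.arange(0, 60, 1 / fs)                    # 60 s of samples
signal = (1.0 * np.sin(2 * np.pi * 0.25 * t)    # "respiration" at 0.25 Hz
          + 0.4 * np.sin(2 * np.pi * 1.2 * t))  # "BPP" at 1.2 Hz

spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
spec[0] = 0.0                                   # ignore the DC term
f_dominant = freqs[np.argmax(spec)]
print(f_dominant)  # 0.25
```

Once both fundamentals are known, the averaging window can be chosen as a whole number of periods of each, so the reference surface is free of both motions.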

  18. [Usefulness of volume rendering stereo-movie in neurosurgical craniotomies].

    PubMed

    Fukunaga, Tateya; Mokudai, Toshihiko; Fukuoka, Masaaki; Maeda, Tomonori; Yamamoto, Kouji; Yamanaka, Kozue; Minakuchi, Kiyomi; Miyake, Hirohisa; Moriki, Akihito; Uchida, Yasufumi

    2007-12-20

In recent years, advances in MR technology combined with the development of the multi-channel coil have substantially shortened inspection times. In addition, rapid improvements in workstation performance have simplified the image-making process. Consequently, graphical images of intra-cranial lesions can be easily created. For example, three-dimensional spoiled gradient echo (3D-SPGR) volume rendering (VR) after injection of a contrast medium is applied clinically as a preoperative reference image. Recently, improvements in 3D-SPGR VR resolution have enabled accurate surface images of the brain to be obtained. We used stereo imaging created by weighted maximum intensity projection (weighted MIP) to determine the skin incision line. Furthermore, the stereo imaging technique utilizing 3D-SPGR VR was used in the cases presented here. The techniques we report seem very useful for pre-operative simulation of neurosurgical craniotomies.

  19. Computer-generated imagery for 4-D meteorological data

    NASA Technical Reports Server (NTRS)

    Hibbard, William L.

    1986-01-01

    The University of Wisconsin-Madison Space Science and Engineering Center is developing animated stereo display terminals for use with McIDAS (Man-computer Interactive Data Access System). This paper describes image-generation techniques which have been developed to take maximum advantage of these terminals, integrating large quantities of four-dimensional meteorological data from balloon and satellite soundings, satellite images, Doppler and volumetric radar, and conventional surface observations. The images have been designed to use perspective, shading, hidden-surface removal, and transparency to augment the animation and stereo-display geometry. They create an illusion of a moving three-dimensional model of the atmosphere. This paper describes the design of these images and a number of rules of thumb for generating four-dimensional meteorological displays.

  20. Rapid matching of stereo vision based on fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei

    2016-09-01

Stereo matching is the core of stereo vision, yet many problems in it remain unsolved. For smooth surfaces from which feature points are difficult to extract, this paper adds a projector to a stereo vision measurement system based on fringe projection techniques: because corresponding points extracted from the left and right camera images have the same phase, rapid stereo matching can be achieved. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method not only broadens the application fields of optical 3D measurement technology and enriches the knowledge base of the field, but also makes a commercialized measurement system feasible in practical projects, giving it both scientific and economic value.
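The phase-equality matching idea — for each left-image pixel, take the right-image pixel on the same scanline with the closest unwrapped fringe phase — can be sketched on a toy constant-disparity scanline (an illustration of the principle, not the paper's system):

```python
import numpy as np

def match_by_phase(phase_left, phase_right):
    """For each left-scanline pixel, find the right-scanline pixel with the
    closest unwrapped fringe phase (phase assumed monotonic along the line)."""
    idx = np.searchsorted(phase_right, phase_left)   # needs ascending phase
    idx = np.clip(idx, 1, len(phase_right) - 1)
    left_neigh = idx - 1
    pick_left = (np.abs(phase_right[left_neigh] - phase_left)
                 < np.abs(phase_right[idx] - phase_left))
    return np.where(pick_left, left_neigh, idx)

# Toy scanlines: the right view sees the same surface shifted by 3 pixels,
# so every match should differ from its left pixel by exactly 3.
x = np.arange(20, dtype=float)
phase_left = 0.5 * x              # linear unwrapped phase across the line
phase_right = 0.5 * (x + 3.0)     # same fringe pattern, 3 px disparity
matches = match_by_phase(phase_left[5:15], phase_right)
print(np.arange(5, 15) - matches)   # constant disparity of 3
```

Real systems first unwrap the measured phase (e.g. with multi-frequency fringes) and interpolate between pixels for subpixel correspondences; this sketch shows only the nearest-phase search.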

  1. The High Resolution Stereo Camera (HRSC) of Mars Express and its approach to science analysis and mapping for Mars and its satellites

    NASA Astrophysics Data System (ADS)

    Gwinner, K.; Jaumann, R.; Hauber, E.; Hoffmann, H.; Heipke, C.; Oberst, J.; Neukum, G.; Ansan, V.; Bostelmann, J.; Dumke, A.; Elgner, S.; Erkeling, G.; Fueten, F.; Hiesinger, H.; Hoekzema, N. M.; Kersten, E.; Loizeau, D.; Matz, K.-D.; McGuire, P. C.; Mertens, V.; Michael, G.; Pasewaldt, A.; Pinet, P.; Preusker, F.; Reiss, D.; Roatsch, T.; Schmidt, R.; Scholten, F.; Spiegel, M.; Stesky, R.; Tirsch, D.; van Gasselt, S.; Walter, S.; Wählisch, M.; Willner, K.

    2016-07-01

    The High Resolution Stereo Camera (HRSC) of ESA's Mars Express is designed to map and investigate the topography of Mars. The camera, in particular its Super Resolution Channel (SRC), also obtains images of Phobos and Deimos on a regular basis. As HRSC is a push broom scanning instrument with nine CCD line detectors mounted in parallel, its unique feature is the ability to obtain along-track stereo images and four colors during a single orbital pass. The sub-pixel accuracy of 3D points derived from stereo analysis allows producing DTMs with grid size of up to 50 m and height accuracy on the order of one image ground pixel and better, as well as corresponding orthoimages. Such data products have been produced systematically for approximately 40% of the surface of Mars so far, while global shape models and a near-global orthoimage mosaic could be produced for Phobos. HRSC is also unique because it bridges between laser altimetry and topography data derived from other stereo imaging instruments, and provides geodetic reference data and geological context to a variety of non-stereo datasets. This paper, in addition to an overview of the status and evolution of the experiment, provides a review of relevant methods applied for 3D reconstruction and mapping, and respective achievements. We will also review the methodology of specific approaches to science analysis based on joint analysis of DTM and orthoimage information, or benefitting from high accuracy of co-registration between multiple datasets, such as studies using multi-temporal or multi-angular observations, from the fields of geomorphology, structural geology, compositional mapping, and atmospheric science. Related exemplary results from analysis of HRSC data will be discussed. After 10 years of operation, HRSC covered about 70% of the surface by panchromatic images at 10-20 m/pixel, and about 97% at better than 100 m/pixel. 
As the areas with contiguous coverage by stereo data are increasingly abundant, we also present original data related to the analysis of image blocks and address methodology aspects of newly established procedures for the generation of multi-orbit DTMs and image mosaics. The current results suggest that multi-orbit DTMs with grid spacing of 50 m can be feasible for large parts of the surface, as well as brightness-adjusted image mosaics with co-registration accuracy of adjacent strips on the order of one pixel, and at the highest image resolution available. These characteristics are demonstrated by regional multi-orbit data products covering the MC-11 (East) quadrangle of Mars, representing the first prototype of a new HRSC data product level.
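The stereo-analysis step that turns matched image rays into 3D points is a forward ray intersection. A minimal least-squares sketch with illustrative geometry — not the HRSC production code:

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares point closest to a set of 3D rays (forward intersection).
    Minimizes the sum of squared perpendicular distances to each ray."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

# Two rays through the point (1, 2, 3) from different "camera" positions.
X_true = np.array([1.0, 2.0, 3.0])
origins = [np.array([0.0, 0.0, 0.0]), np.array([5.0, 0.0, 0.0])]
directions = [X_true - origins[0], X_true - origins[1]]
X = intersect_rays(origins, directions)
print(np.allclose(X, X_true))  # True
```

The same normal-equation form extends directly to more than two rays, which is how multi-orbit observations of the same ground point can be combined.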

  2. MRO CTX-based Digital Terrain Models

    NASA Astrophysics Data System (ADS)

    Dumke, Alexander

    2016-04-01

In planetary surface sciences, digital terrain models (DTMs) are paramount for understanding and quantifying processes. In this contribution, an approach for the derivation of digital terrain models from stereo images of the NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) is described. CTX consists of a 350 mm focal length telescope with 5000 CCD sensor elements and is operated as a pushbroom camera. It acquires images with ~6 m/px over a swath width of ~30 km of the Mars surface [1]. Today, several approaches for the derivation of CTX DTMs exist [e.g. 2, 3, 4]. The approach discussed here is based on established software packages, combined with proprietary software as described below. The main processing chain for the derivation of CTX stereo DTMs consists of six steps: (1) First, CTX images are radiometrically corrected using the ISIS software package [5]. (2) For selected CTX stereo images, exterior orientation data are extracted from reconstructed NAIF SPICE data [6]. (3) In the next step, High Resolution Stereo Camera (HRSC) DTMs [7, 8, 9] are used to rectify the CTX stereo images in order to reduce the search area during image matching; HRSC DTMs are used because of their higher spatial resolution compared to MOLA DTMs. (4) The determination of the coordinates of homologous points between the stereo images, i.e. the stereo image matching process, consists of two steps: first, a cross-correlation to obtain approximate values, and secondly their use in a least-squares matching (LSM) process to obtain subpixel positions. (5) The stereo matching results are then used to generate object points from forward ray intersections. (6) As a last step, the DTM raster generation is performed using software developed at the German Aerospace Center, Berlin, whereby only object points with an error smaller than a threshold value are used. References: [1] Malin, M. C. et al., 2007, JGR 112, doi:10.1029/2006JE002808 [2] Broxton, M. J. et al., 2008, LPSC XXXIX, Abstract#2419 [3] Yershov, V. et al., 2015, EPSC 10, EPSC2015-343 [4] Kim, J. R. et al., 2013, EPS 65, 799-809 [5] https://isis.astrogeology.usgs.gov/index.html [6] http://naif.jpl.nasa.gov/naif/index.html [7] Gwinner et al., 2010, EPS 294, 543-540 [8] Gwinner et al., 2015, PSS [9] Dumke, A. et al., 2008, ISPRS, 37, Part B4, 1037-1042
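Step (4) of the pipeline — coarse cross-correlation followed by subpixel refinement — can be illustrated in one dimension. Here a parabolic fit around the correlation peak stands in for the full least-squares matching (a simplification for brevity, not the authors' LSM):

```python
import numpy as np

def subpixel_shift(a, b):
    """Estimate the shift of signal b relative to a (both 1-D, same length):
    integer peak of the cross-correlation, refined by a parabolic fit."""
    n = len(a)
    corr = np.correlate(b - b.mean(), a - a.mean(), mode="full")
    k = np.argmax(corr)                      # integer-pixel peak
    c0, c1, c2 = corr[k - 1], corr[k], corr[k + 1]
    delta = 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)   # parabolic refinement
    return (k - (n - 1)) + delta

x = np.arange(200, dtype=float)
a = np.exp(-((x - 100.0) ** 2) / 50.0)       # a smooth feature at x = 100
b = np.exp(-((x - 103.4) ** 2) / 50.0)       # the same feature shifted 3.4 px
print(round(subpixel_shift(a, b), 1))  # 3.4
```

In the real matcher this runs on 2-D image patches, and LSM additionally solves for local geometric and radiometric distortion rather than a pure translation.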

  3. Noninvasive, three-dimensional full-field body sensor for surface deformation monitoring of human body in vivo

    NASA Astrophysics Data System (ADS)

    Chen, Zhenning; Shao, Xinxing; He, Xiaoyuan; Wu, Jialin; Xu, Xiangyang; Zhang, Jinlin

    2017-09-01

    Noninvasive, three-dimensional (3-D), full-field surface deformation measurements of the human body are important for biomedical investigations. We proposed a 3-D noninvasive, full-field body sensor based on stereo digital image correlation (stereo-DIC) for surface deformation monitoring of the human body in vivo. First, by applying an improved water-transfer printing (WTP) technique to transfer optimized speckle patterns onto the skin, the body sensor was conveniently and harmlessly fabricated directly onto the human body. Then, stereo-DIC was used to achieve 3-D noncontact and noninvasive surface deformation measurements. The accuracy and efficiency of the proposed body sensor were verified and discussed by considering different complexions. Moreover, the fabrication of speckle patterns on human skin, which has always been considered a challenging problem, was shown to be feasible, effective, and harmless as a result of the improved WTP technique. An application of the proposed stereo-DIC-based body sensor was demonstrated by measuring the pulse wave velocity of human carotid artery.
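Stereo-DIC locates each speckle subset in the other view by maximizing a correlation criterion; the zero-normalized cross-correlation (ZNCC) is widely used because it is invariant to lighting changes. A generic DIC background sketch, not the authors' specific implementation:

```python
import numpy as np

def zncc(f, g):
    """Zero-normalized cross-correlation between two speckle subsets.
    Returns 1.0 for a perfect match and is invariant to affine intensity
    changes (offset and scale), which is why DIC favours it."""
    fz = f - f.mean()
    gz = g - g.mean()
    return float(np.sum(fz * gz) / np.sqrt(np.sum(fz**2) * np.sum(gz**2)))

rng = np.random.default_rng(0)
subset = rng.random((21, 21))            # reference speckle subset
brighter = 1.5 * subset + 0.2            # same subset under different lighting
print(round(zncc(subset, brighter), 6))  # 1.0
```

Robustness to such intensity changes matters here in particular, since the speckle patterns are viewed on skin of different complexions and under surgical-grade lighting variation.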

  4. Indoor calibration for stereoscopic camera STC: a new method

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2017-11-01

In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for validating the 3D reconstruction of a planetary surface from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of Mercury's surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo pairs: for this, a stereo validation setup providing an indoor reproduction of the in-flight observing conditions of the instrument gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a unique sensor. Its optical model is based on a brand-new concept to minimize mass and volume and to allow push-frame imaging. This model required the definition of a new calibration pipeline to test the reconstruction method in a controlled environment. An ad-hoc indoor set-up has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with a source/target essentially placed at infinity. This auxiliary indoor setup permits, on the one hand, rescaling the stereo reconstruction problem from the in-flight operative distance of 400 km to almost 1 meter in the lab; on the other hand, it allows different viewing angles to be replicated for the considered targets. Neglecting Mercury's curvature for the sake of simplicity, the STC observing geometry for the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir. The indoor simulation of the SC trajectory can therefore be provided by two rotation stages, generating a dual of the real system with the same stereo parameters at a different scale. The set of acquired images will be used to obtain a 3D reconstruction of the target: the depth information retrieved from stereo reconstruction, together with the known features of the target, will allow the stereo system performance to be evaluated in terms of both horizontal resolution and vertical accuracy. To verify the 3D reconstruction capabilities of STC by means of this stereo validation set-up, the lab target surface should provide a reference, i.e. it should be known with an accuracy better than that required of the 3D reconstruction itself. For this reason, the rock samples carefully selected for use as lab targets have been measured with a suitably accurate 3D laser scanner. The paper presents this method in detail, analyzing all the choices adopted to reduce such a complex system to an indoor calibration solution.

  6. The HRSC Experiment on Mars Express: First Imaging Results from the Commissioning Phase

    NASA Astrophysics Data System (ADS)

    Oberst, J.; Neukum, G.; Hoffmann, H.; Jaumann, R.; Hauber, E.; Albertz, J.; McCord, T. B.; Markiewicz, W. J.

    2004-12-01

The ESA Mars Express spacecraft was launched from Baikonur on June 2, 2003, entered Mars orbit on December 25, 2003, and reached the nominal mapping orbit on January 28, 2004. Observing conditions were favorable early on for the HRSC (High Resolution Stereo Camera), designed for mapping the Martian surface in 3-D. The HRSC is a pushbroom scanner with 9 CCD line detectors on the focal plane, mounted parallel to one another and perpendicular to the direction of flight. The camera can obtain images at high resolution (10 m/pix), in triple stereo (20 m/pix), in four colors, and at five different phase angles near-simultaneously. An additional Super-Resolution Channel (SRC) yields nested-in images at 2.3 m/pix for detailed photogeologic studies. Even with nominal spacecraft trajectory and camera pointing data from the commissioning phase, solid stereo image reconstructions are feasible. Moreover, the three-line stereo data allow us to identify and correct errors in the navigation data. We find that >99% of the stereo rays intersect within a sphere of radius <20 m after orbit and pointing-data correction. From the HRSC images we have produced Digital Terrain Models (DTMs) with pixel sizes of 200 m, some of them finer. HRSC stereo models and data obtained by MOLA (Mars Orbiter Laser Altimeter) show good qualitative agreement. Differences in absolute elevation are within 50 m, but offsets may reach several hundred meters in lateral positioning (mostly in the spacecraft along-track direction). After correction of these offsets, the HRSC topographic data conveniently fill the gaps between the MOLA tracks and reveal hitherto unrecognized morphologic detail. At the time of writing, the HRSC has covered approximately 22.5 million square kilometers of the Martian surface. In addition, data from 5 Phobos flybys from May through August 2004 were obtained.
The HRSC is beginning to make major contributions to geoscience, atmospheric science, photogrammetry, and cartography of Mars (papers submitted to Nature).

  7. A Three-Dimensional View of Titan's Surface Features from Cassini RADAR Stereogrammetry

    NASA Astrophysics Data System (ADS)

    Kirk, R. L.; Howington-Kraus, E.; Redding, B. L.; Becker, T. L.; Lee, E. M.; Stiles, B. W.; Hensley, S.; Hayes, A.; Lopes, R. M.; Lorenz, R. D.; Mitchell, K. L.; Radebaugh, J.; Paganelli, F.; Soderblom, L. A.; Stofan, E. R.; Wood, C. A.; Wall, S. D.; Cassini RADAR Team

    2008-12-01

    As of the end of its four-year Prime Mission, Cassini has obtained 300-1500 m resolution synthetic aperture radar images of the surface of Titan during 19 flybys. The elongated image swaths overlap extensively, and ~2% of the surface has now been imaged two or more times. The majority of image pairs have different viewing directions, and thus contain stereo parallax that encodes information about Titan's surface relief over distances of ~1 km and greater. As we have previously reported, the first step toward extracting quantitative topographic information was the development of rigorous "sensor models" that allowed the stereo systems previously used at the USGS and JPL to map Venus with Magellan images to be used for Titan mapping. The second major step toward extensive topomapping of Titan has been the reprocessing of the RADAR images based on an improved model of the satellite's rotation. Whereas the original images (except for a few pairs obtained at similar orbital phase, some of which we have mapped previously) were offset by as much as 30 km, the new versions align much better. The remaining misalignments, typically <1 km, can be removed by a least-squares adjustment of the spacecraft trajectories before mapping, which also ensures that the stereo digital topographic models (DTMs) are made consistent with altimetry and SAR topography profiles. The useful stereo coverage now available includes a much larger portion of Titan's north polar lake country than we previously presented, a continuous traverse of high resolution data from the lakes to mid-southern latitudes, and widely distributed smaller areas. A remaining challenge is that many pairs of images are illuminated from opposite sides or from near-perpendicular directions, which can make image matching more difficult. 
We find that the high-contrast polarizing display of the stereo workstation at USGS provides a much clearer view of these unfavorably illuminated pairs than (for example) anaglyphs, and lets us supplement automatic image matching with interactive measurements where the former fails. We are collecting DTMs of all usable image pairs and will present the most interesting results. Examples of geologic questions that may be addressed are: What is the relation between Ganesa and surrounding features? Is it a dome or shield? Can the height of Titan's dunes be measured, and what is the relief of the bright "islands" that appear to divert the dunes? How high are the mountains of Xanadu and what gradients drive the channels between them? What are the relative and absolute height relations between seas and lakes of different types, and what does this tell us about the "hydro(carbono)logic" cycle of precipitation, evaporation, and surface and subsurface fluid flow?

  8. Roughness effects on thermal-infrared emissivities estimated from remotely sensed images

    NASA Astrophysics Data System (ADS)

    Mushkin, Amit; Danilina, Iryna; Gillespie, Alan R.; Balick, Lee K.; McCabe, Matthew F.

    2007-10-01

    Multispectral thermal-infrared images from the Mauna Loa caldera in Hawaii, USA are examined to study the effects of surface roughness on remotely retrieved emissivities. We find up to a 3% decrease in spectral contrast in ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) 90-m/pixel emissivities due to sub-pixel surface roughness variations on the caldera floor. A similar decrease in spectral contrast of emissivities extracted from MASTER (MODIS/ASTER Airborne Simulator) ~12.5-m/pixel data can be described as a function of increasing surface roughness, which was measured remotely from ASTER 15-m/pixel stereo images. The ratio between ASTER stereo images provides a measure of sub-pixel surface-roughness variations across the scene. These independent roughness estimates complement a radiosity model designed to quantify the unresolved effects of multiple scattering and differential solar heating due to sub-pixel roughness elements and to compensate for both sub-pixel temperature dispersion and cavity radiation on TIR measurements.

  9. Three-channel dynamic photometric stereo: a new method for 4D surface reconstruction and volume recovery

    NASA Astrophysics Data System (ADS)

    Schroeder, Walter; Schulze, Wolfram; Wetter, Thomas; Chen, Chi-Hsien

    2008-08-01

Three-dimensional (3D) body surface reconstruction is an important field in health care. A popular method for this purpose is laser scanning. However, using photometric stereo (PS) to record lumbar lordosis and the surface contour of the back is a viable alternative, owing to its lower cost and higher flexibility compared to laser techniques and other methods of three-dimensional body surface reconstruction. In this work, we extended the traditional PS method and proposed a new method for obtaining surface and volume data of a moving object. Traditional photometric stereo uses at least three images of a static object, taken under different light sources, to obtain 3D information about the object. Instead of white light, the light sources in the proposed method emit the three colors of the RGB color model: red, green, and blue. A series of pictures taken with a video camera can then be separated into the different color channels, and each set of three channel images can be used to calculate the surface normals as in traditional PS. This removes the requirement, common to almost all other body surface reconstruction methods, that the imaged object be kept still. By placing two cameras on opposite sides of a moving object and lighting it with the colored lights, time-varying surface (4D) data can easily be calculated. The obtained information can be used in many medical fields such as rehabilitation, diabetes screening, or orthopedics.
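Once the video frames are separated into channels, each frame reduces to classical three-light photometric stereo. A minimal per-frame sketch (a hypothetical helper, not the authors' code, assuming known unit light directions for the red, green, and blue sources):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (3, H, W) channel intensities; light_dirs: (3, 3), one unit
    light vector per row. Solves I = L @ (albedo * n) for every pixel."""
    _, H, W = images.shape
    I = images.reshape(3, -1)                # one column per pixel
    G = np.linalg.solve(light_dirs, I)       # G = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)   # guard against zero albedo
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```

Applying this to every frame of the separated color video yields the time-varying (4D) normal fields the abstract describes.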

  10. Evaluation of endoscopic entire 3D image acquisition of the digestive tract using a stereo endoscope

    NASA Astrophysics Data System (ADS)

    Yoshimoto, Kayo; Watabe, Kenji; Fujinaga, Tetsuji; Iijima, Hideki; Tsujii, Masahiko; Takahashi, Hideya; Takehara, Tetsuo; Yamada, Kenji

    2017-02-01

Because the view angle of an endoscope is narrow, it is difficult to capture the whole image of the digestive tract at once. If there are two or more lesions in the digestive tract, it is hard to understand their 3D positional relationship. Virtual endoscopy using CT is the current standard method for obtaining a whole view of the digestive tract, but because it is designed to detect surface irregularity, it cannot detect lesions that lack irregularity, including early cancer. In this study, we propose a method of endoscopic entire 3D image acquisition of the digestive tract using a stereo endoscope. The method is as follows: 1) capture sequential images of the digestive tract by moving the endoscope, 2) reconstruct the 3D surface pattern for each frame from the stereo images, 3) estimate the position of the endoscope by image analysis, and 4) reconstitute the entire image of the digestive tract by combining the 3D surface patterns. To confirm the validity of this method, we experimented with a straight tube inside which circles were placed at equal intervals of 20 mm. We captured sequential images, and the reconstituted image of the tube showed that the distance between adjacent circles was 20.2 ± 0.3 mm (n=7). The results suggest that this method of endoscopic entire 3D image acquisition may help clinicians understand the 3D positional relationship among lesions, such as early esophageal cancer, that cannot be detected by virtual endoscopy using CT.
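Step 4 of the pipeline, combining per-frame surface patches, reduces to transforming each frame's reconstructed points into a common world frame using the poses estimated in step 3. A minimal sketch with hypothetical names:

```python
import numpy as np

def merge_frames(clouds, poses):
    """clouds: list of (N_i, 3) per-frame surface points in camera coordinates;
    poses: list of (R, t) camera-to-world rotations/translations estimated
    per frame. Returns one combined point cloud in the common world frame."""
    merged = [pts @ R.T + t for pts, (R, t) in zip(clouds, poses)]
    return np.vstack(merged)
```

In practice the per-frame poses would come from image analysis (step 3), and overlapping patches would additionally be fused or deduplicated.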

  11. Crater Morphometry and Crater Degradation on Mercury: Mercury Laser Altimeter (MLA) Measurements and Comparison to Stereo-DTM Derived Results

    NASA Technical Reports Server (NTRS)

    Leight, C.; Fassett, C. I.; Crowley, M. C.; Dyar, M. D.

    2017-01-01

Two types of measurements of Mercury's surface topography were obtained by the MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) spacecraft: laser ranging data from the Mercury Laser Altimeter (MLA) [1], and stereo imagery from the Mercury Dual Imaging System (MDIS) camera [e.g., 2, 3]. MLA data provide precise and accurate elevation measurements, but with sparse spatial sampling except at the highest northern latitudes. Digital terrain models (DTMs) from MDIS have superior resolution but less vertical accuracy, limited approximately to the pixel resolution of the original images (in the case of [3], 15-75 m). Last year [4], we reported topographic measurements of craters in the D = 2.5 to 5 km diameter range from stereo images and suggested that craters on Mercury degrade more quickly than on the Moon (by a factor of up to approximately 10×). However, we listed several alternative explanations for this finding, including the hypothesis that the lower depth/diameter ratios we observe might be a result of the resolution and accuracy of the stereo DTMs. Thus, additional measurements were undertaken using MLA data to examine the morphometry of craters in this diameter range and to assess whether the faster crater degradation rate proposed for Mercury is robust.

  12. A STEREO Survey of Magnetic Cloud Coronal Mass Ejections Observed at Earth in 2008–2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Brian E.; Wu, Chin-Chun; Howard, Russell A.

We identify coronal mass ejections (CMEs) associated with magnetic clouds (MCs) observed near Earth by the Wind spacecraft from 2008 to mid-2012, a time period when the two STEREO spacecraft were well positioned to study Earth-directed CMEs. We find 31 out of 48 Wind MCs during this period can be clearly connected with a CME that is trackable in STEREO imagery all the way from the Sun to near 1 au. For these events, we perform full 3D reconstructions of the CME structure and kinematics, assuming a flux rope (FR) morphology for the CME shape, considering the full complement of STEREO and SOHO imaging constraints. We find that the FR orientations and sizes inferred from imaging are not well correlated with MC orientations and sizes inferred from the Wind data. However, velocities within the MC region are reproduced reasonably well by the image-based reconstruction. Our kinematic measurements are used to provide simple prescriptions for predicting CME arrival times at Earth, provided for a range of distances from the Sun where CME velocity measurements might be made. Finally, we discuss the differences in the morphology and kinematics of CME FRs associated with different surface phenomena (flares, filament eruptions, or no surface activity).
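A constant-speed extrapolation gives the flavor of such an arrival-time prescription (a toy stand-in for the paper's empirical fits; the function and its assumptions are illustrative only):

```python
AU_KM = 1.495978707e8  # astronomical unit in kilometers

def arrival_time_hours(r0_au, v_km_s):
    """Hours for a CME to coast from heliocentric distance r0_au to 1 au,
    assuming constant speed (ignores drag and further acceleration)."""
    return (1.0 - r0_au) * AU_KM / v_km_s / 3600.0
```

Under this simplification, a CME measured at 0.1 au moving at 1000 km/s would arrive in roughly a day and a half; the paper's prescriptions refine such estimates using kinematics measured at a range of solar distances.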

  13. Charon's Surface in Detail

    NASA Image and Video Library

    2017-07-14

On July 14, 2015, NASA's New Horizons spacecraft made its historic flight through the Pluto system. This detailed, high-quality global mosaic of Pluto's largest moon, Charon, was assembled from nearly all of the highest-resolution images obtained by the Long-Range Reconnaissance Imager (LORRI) and the Multispectral Visible Imaging Camera (MVIC) on New Horizons. The mosaic is the most detailed and comprehensive global view yet of Charon's surface using New Horizons data. It includes topography data for the hemisphere visible to New Horizons during the spacecraft's closest approach. The topography is derived from digital stereo-image mapping tools that measure the parallax -- the difference in the apparent relative positions -- of features on the surface obtained at different viewing angles during the encounter. Scientists use these parallax displacements of high and low terrain to estimate landform heights. The global mosaic has been overlain with transparent, colorized topography data wherever stereo data are available. Terrain south of about 30°S was in darkness leading up to and during the flyby, so it is shown in black. All feature names on Pluto and Charon are informal. Standing out on Charon is Caleuche Chasma ("C") in the far north, an enormous trough at least 350 kilometers (nearly 220 miles) long and reaching 14 kilometers (8.5 miles) deep -- more than seven times as deep as the Grand Canyon. https://photojournal.jpl.nasa.gov/catalog/PIA21860
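The parallax-to-height conversion described above can be sketched with a textbook photogrammetric relation (a simplification, not the New Horizons mapping pipeline): for two views at off-nadir angles e1 and e2, height is roughly the parallax displacement divided by the sum of the angle tangents.

```python
import math

def height_from_parallax(parallax_m, e1_deg, e2_deg):
    """Terrain height from a stereo parallax displacement (in meters on the
    surface) for two off-nadir viewing angles; a standard simplification."""
    return parallax_m / (math.tan(math.radians(e1_deg)) +
                         math.tan(math.radians(e2_deg)))

# A 100 m parallax displacement seen at 45° and 45° implies ~50 m of relief.
```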

  14. Derivation of planetary topography using multi-image shape-from-shading

    USGS Publications Warehouse

    Lohse, V.; Heipke, C.; Kirk, R.L.

    2006-01-01

In many cases, the derivation of high-resolution digital terrain models (DTMs) of planetary surfaces using conventional digital image matching is a problem. The matching methods need at least one stereo pair of images with sufficient texture; however, many space missions provide only a few stereo images, and planetary surfaces often possess insufficient texture. This paper describes a method for the generation of high-resolution DTMs of planetary surfaces which has the potential to overcome this problem. The suggested method, developed by our group, is based on shape-from-shading using an arbitrary number of digital optical images and is termed "multi-image shape-from-shading" (MI-SFS). The paper contains an explanation of the theory of MI-SFS, followed by a presentation of current results, which were obtained using images from NASA's lunar mission Clementine and constitute the first practical application of our method to extraterrestrial imagery. The lunar surface is reconstructed under the assumption of different reflectance models (e.g., Lommel-Seeliger and Lambert). The presented results show that deriving a high-resolution DTM from real digital planetary images by means of MI-SFS is feasible. © 2006 Elsevier Ltd. All rights reserved.
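As a rough illustration of the two reflectance laws named above (not the authors' MI-SFS code), the predicted shading for given cosines of the incidence and emission angles is:

```python
import numpy as np

def lambert(mu0, albedo=1.0):
    """Lambert law: intensity proportional to the cosine of incidence (mu0)."""
    return albedo * np.maximum(mu0, 0.0)

def lommel_seeliger(mu0, mu, albedo=1.0):
    """Lommel-Seeliger law (mu0, mu = cosines of incidence and emission,
    caller ensures mu0 + mu > 0); often preferred for dark planetary surfaces."""
    return albedo * mu0 / (mu0 + mu)
```

Shape-from-shading inverts such a forward model: the DTM heights (and hence the local normals, which determine mu0 and mu) are adjusted until the predicted intensities match all input images.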

  15. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    NASA Astrophysics Data System (ADS)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.

  16. Autocorrelation techniques for soft photogrammetry

    NASA Astrophysics Data System (ADS)

    Yao, Wu

In this thesis, research is carried out on image processing, image matching search strategies, feature types in image matching, and optimal window size in image matching. To make comparisons, the soft photogrammetry package SoftPlotter is used. Two aerial photographs from the Iowa State University campus high flight 94 are scanned into digital format. To create a stereo model from them, interior orientation, single-photograph rectification, and stereo rectification are performed. Two new image matching methods, multi-method image matching (MMIM) and unsquare-window image matching, are developed and compared. MMIM is used to determine the optimal window size in image matching. Twenty-four check points from four different types of ground features are used for checking the results of image matching. Comparison among these four types of ground feature shows that the methods developed here improve the speed and the precision of image matching. A process called direct transformation is described and compared with the multiple steps in image processing; the results from image processing are consistent with those from SoftPlotter. A modified LAN image header is developed and used to store information about the stereo model and image matching. A comparison is also made between cross-correlation image matching (CCIM), least-difference image matching (LDIM), and least-squares image matching (LSIM). The quality of image matching in relation to ground features is compared using two methods developed in this study: the coefficient surface for CCIM and the difference surface for LDIM. To reduce the amount of computation in image matching, the best-track searching algorithm, developed in this research, is used instead of the whole-range searching algorithm.
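The cross-correlation image matching (CCIM) compared in the thesis can be sketched as a normalized cross-correlation search along a row (a minimal illustration under simplified assumptions, not the SoftPlotter implementation):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_window(left, right, row, col, win=5, search=10):
    """Slide a win x win template from `left` along the same row of `right`
    and return the best-matching column and its NCC score."""
    h = win // 2
    tpl = left[row - h:row + h + 1, col - h:col + h + 1]
    best_score, best_col = -2.0, col
    for c in range(max(h, col - search),
                   min(right.shape[1] - h, col + search + 1)):
        score = ncc(tpl, right[row - h:row + h + 1, c - h:c + h + 1])
        if score > best_score:
            best_score, best_col = score, c
    return best_col, best_score
```

The coefficient surface mentioned above is essentially the array of `ncc` scores over all candidate positions; the sharpness of its peak indicates match quality for a given type of ground feature.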

  17. Noninvasive, three-dimensional full-field body sensor for surface deformation monitoring of human body in vivo.

    PubMed

    Chen, Zhenning; Shao, Xinxing; He, Xiaoyuan; Wu, Jialin; Xu, Xiangyang; Zhang, Jinlin

    2017-09-01

Noninvasive, three-dimensional (3-D), full-field surface deformation measurements of the human body are important for biomedical investigations. We proposed a 3-D noninvasive, full-field body sensor based on stereo digital image correlation (stereo-DIC) for surface deformation monitoring of the human body in vivo. First, by applying an improved water-transfer printing (WTP) technique to transfer optimized speckle patterns onto the skin, the body sensor was conveniently and harmlessly fabricated directly onto the human body. Then, stereo-DIC was used to achieve 3-D noncontact and noninvasive surface deformation measurements. The accuracy and efficiency of the proposed body sensor were verified and discussed by considering different complexions. Moreover, the fabrication of speckle patterns on human skin, which has always been considered a challenging problem, was shown to be feasible, effective, and harmless as a result of the improved WTP technique. An application of the proposed stereo-DIC-based body sensor was demonstrated by measuring the pulse wave velocity of human carotid artery. © 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).

  18. Graph-based surface reconstruction from stereo pairs using image segmentation

    NASA Astrophysics Data System (ADS)

    Bleyer, Michael; Gelautz, Margrit

    2005-01-01

    This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.
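The per-segment planar disparity model d(x, y) = a·x + b·y + c described above can be initialized by a least-squares fit to the pixel disparities inside a segment; a minimal sketch with hypothetical helper names:

```python
import numpy as np

def fit_disparity_plane(xs, ys, ds):
    """Least-squares fit of d = a*x + b*y + c to a segment's initial
    per-pixel disparities. xs, ys, ds are 1-D arrays of equal length."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, ds, rcond=None)
    return a, b, c
```

In the full algorithm such planes are then clustered into disparity layers, and segment-to-layer assignments are optimized globally with graph cuts; a robust fit (e.g. with outlier rejection) would replace the plain least squares used here.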

  19. Stereo View of Martian Rock Target 'Funzie'

    NASA Image and Video Library

    2018-02-08

    The surface of the Martian rock target in this stereo image includes small hollows with a "swallowtail" shape characteristic of some gypsum crystals, most evident in the lower left quadrant. These hollows may have resulted from the original crystallizing mineral subsequently dissolving away. The view appears three-dimensional when seen through blue-red glasses with the red lens on the left. The scene spans about 2.5 inches (6.5 centimeters). This rock target, called "Funzie," is near the southern, uphill edge of "Vera Rubin Ridge" on lower Mount Sharp. The stereo view combines two images taken from slightly different angles by the Mars Hand Lens Imager (MAHLI) camera on NASA's Curiosity Mars rover, with the camera about 4 inches (10 centimeters) above the target. Fig. 1 and Fig. 2 are the separate "right-eye" and "left-eye" images, taken on Jan. 11, 2018, during the 1,932nd Martian day, or sol, of the rover's work on Mars. Right-eye and left-eye images are available at https://photojournal.jpl.nasa.gov/catalog/PIA22212

  20. Composite View from Phoenix Lander

    NASA Image and Video Library

    2009-07-02

This mosaic of images from the Surface Stereo Imager camera on NASA's Phoenix Mars Lander shows several trenches dug by Phoenix, plus a corner of the spacecraft deck and the Martian arctic plain stretching to the horizon.

  1. Geometrical distortion calibration of the stereo camera for the BepiColombo mission to Mercury

    NASA Astrophysics Data System (ADS)

    Simioni, Emanuele; Da Deppo, Vania; Re, Cristina; Naletto, Giampiero; Martellato, Elena; Borrelli, Donato; Dami, Michele; Aroldi, Gianluca; Ficai Veltroni, Iacopo; Cremonese, Gabriele

    2016-07-01

The ESA-JAXA mission BepiColombo, to be launched in 2018, is devoted to the observation of Mercury, the innermost planet of the Solar System. SIMBIO-SYS is its remote sensing suite, which consists of three instruments: the High Resolution Imaging Channel (HRIC), the Visible and Infrared Hyperspectral Imager (VIHI), and the Stereo Imaging Channel (STC). The latter will provide a global three-dimensional reconstruction of the Mercury surface, and it represents the first push-frame stereo camera on board a space satellite. Based on a new telescope design, STC combines the advantages of a compact single-detector camera with the convenience of a double-direction acquisition system; this solution minimizes mass and volume while performing push-frame imaging. The shared camera sensor is divided into six portions: four are covered with suitable filters; the other two, one looking forward and one backward with respect to the nadir direction, are covered with a panchromatic filter, supplying stereo image pairs of the planet surface. The main STC scientific requirements are to reconstruct the Mercury surface in 3D with a vertical accuracy better than 80 m and to perform global imaging with a grid size of 65 m along-track at the periherm. The scope of this work is to present the on-ground geometric calibration pipeline for this original instrument. The selected STC off-axis configuration forced the development of a new distortion map model. Additional considerations concern the detector, a Si-PIN hybrid CMOS, which is characterized by high fixed-pattern noise. This had a great impact on the pre-calibration phases, compelling the use of an uncommon approach to defining the spot centroids in the distortion calibration process. This work presents the results obtained during the calibration of STC concerning the distortion analysis at three different temperatures. These results are then used to define the corresponding distortion model of the camera.

  2. Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter

    NASA Astrophysics Data System (ADS)

    Tulyakov, Stepan; Ivanov, Anton; Thomas, Nicolas; Roloff, Victoria; Pommerol, Antoine; Cremonese, Gabriele; Weigel, Thomas; Fleuret, Francois

    2018-01-01

There are many geometric calibration methods for "standard" cameras. These methods, however, cannot be used for the calibration of telescopes with large focal lengths and complex off-axis optics; moreover, specialized calibration methods for telescopes are scarce in the literature. We describe the calibration method that we developed for the Colour and Stereo Surface Imaging System (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO). Although our method is described in the context of CaSSIS, with camera-specific experiments, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available online.

  3. A stereo remote sensing feature selection method based on artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi

    2014-05-01

    To improve the efficiency of using stereo information in remote sensing classification, this paper proposes a stereo remote sensing feature selection method based on the artificial bee colony (ABC) algorithm. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which carry information on three-dimensional structure and on optical characteristics, respectively. First, the three-dimensional structure is characterized by 3D Zernike descriptors (3DZD). However, different 3DZD parameters describe different levels of structural complexity, so the parameters need to be optimally selected for the various objects on the ground. Second, the features representing the optical characteristics also need to be optimized. If this is not handled properly, a stereo feature vector composed of 3DZD and image features contains substantial redundant information, which may not improve the classification accuracy and can even degrade it. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization framework for this stereo feature selection problem is formulated, and the artificial bee colony algorithm is introduced to solve it. Experimental results show that the proposed method effectively improves both the computational efficiency and the classification accuracy.
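    ABC-based feature selection of the kind described can be sketched with a toy fitness function standing in for classifier accuracy (the feature indices and scoring below are invented for illustration; a real system would score a classifier on the selected 3DZD and image features):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    N_FEATURES = 12
    INFORMATIVE = (0, 1, 2)   # hypothetical "useful" feature indices

    def fitness(mask):
        # Toy surrogate for classification accuracy: reward informative
        # features, penalize redundant ones.
        hits = int(sum(mask[i] for i in INFORMATIVE))
        noise = int(mask.sum()) - hits
        return hits - 0.4 * noise

    def neighbor(mask):
        m = mask.copy()
        m[rng.integers(len(m))] ^= 1   # flip one random feature bit
        return m

    def abc_select(n_bees=10, n_iter=80, limit=8):
        food = rng.integers(0, 2, size=(n_bees, N_FEATURES))
        trials = np.zeros(n_bees, dtype=int)
        best, best_fit = None, -np.inf
        for _ in range(n_iter):
            # employed bees: local search around each food source
            for i in range(n_bees):
                cand = neighbor(food[i])
                if fitness(cand) > fitness(food[i]):
                    food[i], trials[i] = cand, 0
                else:
                    trials[i] += 1
            # onlooker bees: extra search effort on fitter sources
            fit = np.array([fitness(f) for f in food])
            p = np.exp(fit - fit.max()); p /= p.sum()
            for _ in range(n_bees):
                i = int(rng.choice(n_bees, p=p))
                cand = neighbor(food[i])
                if fitness(cand) > fitness(food[i]):
                    food[i], trials[i] = cand, 0
            # scout bees: abandon exhausted sources
            for i in range(n_bees):
                if trials[i] > limit:
                    food[i] = rng.integers(0, 2, size=N_FEATURES)
                    trials[i] = 0
            # remember the best subset seen so far
            for f in food:
                if fitness(f) > best_fit:
                    best, best_fit = f.copy(), fitness(f)
        return best

    best_mask = abc_select()
    ```

    On this separable toy fitness the colony converges to a mask that selects the informative features and drops the redundant ones.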

  4. Automatic Large-Scale 3D Building Shape Refinement Using Conditional Generative Adversarial Networks

    NASA Astrophysics Data System (ADS)

    Bittner, K.; d'Angelo, P.; Körner, M.; Reinartz, P.

    2018-05-01

    Three-dimensional building reconstruction from remote sensing imagery is one of the most difficult and important 3D modeling problems for complex urban environments. In remote sensing, the main data sources providing a digital representation of the Earth's surface and of the related natural, cultural, and man-made objects of urban areas are digital surface models (DSMs). DSMs can be obtained by light detection and ranging (LIDAR), by SAR interferometry, or from stereo images. Our approach relies on automatic global 3D building shape refinement from stereo DSMs using deep learning techniques. This refinement is necessary because DSMs extracted from image-matching point clouds suffer from occlusions, outliers, and noise. Although most previous works have shown promising results for building modeling, this topic remains an open research area. We present a new methodology which not only generates images with continuous values representing the elevation models but, at the same time, enhances the 3D object shapes, buildings in our case. Specifically, we train a conditional generative adversarial network (cGAN) to generate accurate LIDAR-like DSM height images from the noisy stereo DSM input. The obtained results demonstrate the strong potential for creating large-area remote sensing depth images in which the buildings exhibit better-quality shapes and roof forms.

  5. Extracting accurate and precise topography from LROC narrow angle camera stereo observations

    NASA Astrophysics Data System (ADS)

    Henriksen, M. R.; Manheim, M. R.; Burns, K. N.; Seymour, P.; Speyerer, E. J.; Deran, A.; Boyd, A. K.; Howington-Kraus, E.; Rosiek, M. R.; Archinal, B. A.; Robinson, M. S.

    2017-02-01

    The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that each provide 0.5 to 2.0 m scale images of the lunar surface. Although not designed as a stereo system, LROC can acquire NAC stereo observations over two or more orbits using at least one off-nadir slew. Digital terrain models (DTMs) are generated from sets of stereo images and registered to profiles from the Lunar Orbiter Laser Altimeter (LOLA) to improve absolute accuracy. With current processing methods, DTMs have absolute accuracies better than the uncertainties of the LOLA profiles and relative vertical and horizontal precisions less than the pixel scale of the DTMs (2-5 m). We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. For a baseline of 15 m the highland mean slope parameters are: median = 9.1°, mean = 11.0°, standard deviation = 7.0°. For the mare the mean slope parameters are: median = 3.5°, mean = 4.9°, standard deviation = 4.5°. The slope values for the highland terrain are steeper than previously reported, likely due to a bias in targeting of the NAC DTMs toward higher relief features in the highland terrain. Overlapping DTMs of single stereo sets were also combined to form larger area DTM mosaics that enable detailed characterization of large geomorphic features. From one DTM mosaic we mapped a large viscous flow related to the Orientale basin ejecta and estimated its thickness and volume to exceed 300 m and 500 km3, respectively. Despite its ∼3.8 billion year age the flow still exhibits unconfined margin slopes above 30°, in some cases exceeding the angle of repose, consistent with deposition of material rich in impact melt. We show that the NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. 
To date, about 2% of the lunar surface has been imaged in high-resolution stereo, and continued acquisition of stereo observations will serve to strengthen our knowledge of the Moon and of the geologic processes that occur across all of the terrestrial planets.
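    The baseline-dependent slope statistics quoted above can be reproduced on any gridded DTM with a few lines of NumPy (an illustrative bidirectional slope calculation on a synthetic DTM, not the LROC team's production method):

    ```python
    import numpy as np

    def slope_stats(dtm, pixel_scale, baseline):
        """Slope statistics over a given baseline from a gridded DTM:
        elevation differences between points `baseline` metres apart along
        rows and columns, converted to slope angles in degrees."""
        step = max(1, int(round(baseline / pixel_scale)))
        dist = step * pixel_scale
        dzx = dtm[:, step:] - dtm[:, :-step]
        dzy = dtm[step:, :] - dtm[:-step, :]
        dz = np.concatenate([dzx.ravel(), dzy.ravel()])
        slopes = np.degrees(np.arctan(np.abs(dz) / dist))
        return {"median": float(np.median(slopes)),
                "mean": float(np.mean(slopes)),
                "std": float(np.std(slopes))}

    # Synthetic check: a plane tilted 10 degrees along x on a 5 m/px grid
    pix = 5.0
    x = np.arange(100) * pix
    dtm = np.tile(np.tan(np.radians(10.0)) * x, (100, 1))
    stats = slope_stats(dtm, pixel_scale=pix, baseline=15.0)
    # half the samples (along x) slope at 10 deg, half (along y) at 0 deg
    ```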

  6. A three-dimensional geological reconstruction of Noctis Labyrinthus slope tectonics from CaSSIS data

    NASA Astrophysics Data System (ADS)

    Massironi, M. M.; Pozzobon, R. P.; Lucchetti, A. L.; Simioni, E. S.; Re, C. R.; Mudrič, T. M.; Pajola, M. P.; Cremonese, G. C.; Pommerol, A. P.; Salese, F. S.; Thomas, N. T.; Mege, D. M.

    2017-09-01

    In November 2016 the CaSSIS (Colour and Stereo Surface Imaging System) imaging system onboard the European Space Agency's ExoMars Trace Gas Orbiter (TGO) acquired 18 images (each composed of 30 framelets in each of the 4 colour channels) of the Martian surface. The first stereo pairs were taken during the closest approach, at a distance of 520 km from the surface, over the Hebes Chasma and Noctis Labyrinthus regions. In the latter case a DTM was prepared over a north-facing slope that bounds a 2000 m deep depression to the north and, to the south, a plateau cut by extensional fault networks. This slope is characterised by a downthrown block that can be interpreted as a Deep-Seated Gravitational Slope Deformation (DSGSD). In this work we present a 3D geological reconstruction of the phenomenon that allowed us to constrain the possible main sliding surface, the volumes involved in the gravitational process, and the kinematics of the mass movement.

  7. Spirit Beside 'Home Plate,' Sol 1809 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11803 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11803

    NASA Mars Exploration Rover Spirit used its navigation camera to take the images assembled into this stereo, 120-degree view southward after a short drive during the 1,809th Martian day, or sol, of Spirit's mission on the surface of Mars (February 3, 2009).

    By combining images from the left-eye and right-eye sides of the navigation camera, the view appears three-dimensional when viewed through red-blue glasses with the red lens on the left.
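    A red-blue anaglyph of this kind is assembled by channel substitution: the red channel comes from the left-eye image and the remaining channels from the right-eye image (a minimal sketch, not the JPL processing pipeline):

    ```python
    import numpy as np

    def red_blue_anaglyph(left_rgb, right_rgb):
        """Channel-substitution anaglyph: red from the left-eye image,
        green and blue from the right-eye image, for red-left glasses."""
        out = np.empty_like(left_rgb)
        out[..., 0] = left_rgb[..., 0]    # red   <- left image
        out[..., 1] = right_rgb[..., 1]   # green <- right image
        out[..., 2] = right_rgb[..., 2]   # blue  <- right image
        return out

    # Tiny synthetic pair: pure-red left frame, blue-tinted right frame
    left = np.zeros((2, 2, 3), dtype=np.uint8);  left[..., 0] = 200
    right = np.zeros((2, 2, 3), dtype=np.uint8); right[..., 2] = 150
    ana = red_blue_anaglyph(left, right)
    ```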

    Spirit had driven about 2.6 meters (8.5 feet) that sol, continuing a clockwise route around a low plateau called 'Home Plate.' In this image, the rocks visible above the rover's solar panels are on the slope at the northern edge of Home Plate.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  8. Wind-Sculpted Vicinity After Opportunity's Sol 1797 Drive (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11820 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11820

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 111 meters (364 feet) on the 1,797th Martian day, or sol, of Opportunity's surface mission (Feb. 12, 2009). North is at the center; south at both ends.

    Tracks from the drive recede northward across dark-toned sand ripples in the Meridiani Planum region of Mars. Patches of lighter-toned bedrock are visible on the left and right sides of the image. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  9. Happy Mars Solstice!

    NASA Image and Video Library

    2008-06-27

    This image was acquired by the Surface Stereo Imager (SSI) on NASA's Phoenix Mars Lander in the late afternoon of the 30th Martian day, or sol, of the mission (Sol 30; June 25, 2008). This was hours after the beginning of Martian northern summer.

  10. Using Combination of Planar and Height Features for Detecting Built-Up Areas from High-Resolution Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, F.; Cai, X.; Tan, W.

    2017-09-01

    Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrade the performance of built-up area detection when only planar texture, shape, and spectral features are used. Terrain slope and building height are often used to refine the results, but they are usually extracted from auxiliary data (e.g. LIDAR data or a DSM). Moreover, the auxiliary data must be acquired at around the same time as the imagery; otherwise, the detection accuracy suffers. Unlike single remotely sensed images, stereo imagery incorporates both planar and height information. Stereo imagery acquired by many satellites (e.g. Worldview-4, Pleiades-HR, ALOS-PRISM, and ZY-3) can therefore serve as a data source for identifying built-up areas. A new method for accurately identifying built-up areas from stereo imagery using a combination of planar and height features is presented. The digital surface model (DSM) and digital orthophoto map (DOM) are first generated from the stereo images. Height values of above-ground objects (e.g. buildings) are then calculated from the DSM and used to obtain a raw built-up area map. Two further raw maps are obtained from the DOM using Pantex and Gabor features, respectively. The final high-accuracy built-up area result is derived from these raw maps using decision-level fusion. Experimental results show that accurate built-up areas can be extracted from stereo imagery. The height information used in the proposed method is derived from the stereo imagery itself, with no need for auxiliary height data (e.g. LIDAR data). The proposed method is suitable for spaceborne and airborne stereo pairs and triplets.
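    Decision-level fusion of the raw built-up maps can be as simple as per-pixel voting; the sketch below assumes a majority rule, since the abstract does not spell out the exact fusion operator:

    ```python
    import numpy as np

    def fuse_masks(masks, min_votes=2):
        """Decision-level fusion by per-pixel voting: a pixel is built-up
        when at least `min_votes` of the raw detectors agree. Majority
        voting is an assumed rule, not necessarily the paper's."""
        votes = np.sum(np.stack(masks).astype(int), axis=0)
        return (votes >= min_votes).astype(np.uint8)

    # Three hypothetical raw built-up masks on a 2 x 3 tile
    height_mask = np.array([[1, 1, 0], [0, 1, 0]], dtype=np.uint8)  # DSM heights
    pantex_mask = np.array([[1, 0, 0], [0, 1, 1]], dtype=np.uint8)  # Pantex
    gabor_mask  = np.array([[1, 1, 0], [0, 0, 1]], dtype=np.uint8)  # Gabor
    fused = fuse_masks([height_mask, pantex_mask, gabor_mask])
    ```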

  11. Phoenix Trenches

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site] Annotated Version

    [figure removed for brevity, see original site] Left-eye view of a stereo pair [figure removed for brevity, see original site] Right-eye view of a stereo pair

    This image is a stereo, panoramic view of various trenches dug by NASA's Phoenix Mars Lander. The images that make up this panorama were taken by Phoenix's Surface Stereo Imager at about 4 p.m., local solar time at the landing site, on the 131st Martian day, or sol, of the mission (Oct. 7, 2008).

    In figure 1, the trenches are labeled in orange and other features are labeled in blue. Figures 2 and 3 are the left- and right-eye members of a stereo pair.

    For scale, the 'Pet Donkey' trench just to the right of center is approximately 38 centimeters (15 inches) long and 31 to 34 centimeters (12 to 13 inches) wide. In addition, the rock in front of it, 'Headless,' is about 11.5 by 8.5 centimeters (4.5 by 3.3 inches), and about 5 centimeters (2 inches) tall.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  12. Accuracy analysis for DSM and orthoimages derived from SPOT HRS stereo data using direct georeferencing

    NASA Astrophysics Data System (ADS)

    Reinartz, Peter; Müller, Rupert; Lehner, Manfred; Schroeder, Manfred

    During the HRS (High Resolution Stereo) Scientific Assessment Program, the French space agency CNES delivered data sets from the HRS camera system with high-precision ancillary data. Two test data sets from this program were evaluated: one located in Germany, the other in Spain. The first goal was to derive orthoimages and digital surface models (DSM) from the along-track stereo data by applying the rigorous model with direct georeferencing and without ground control points (GCPs). For the derivation of DSM, the stereo processing software developed at DLR for the MOMS-2P three-line stereo camera was used. As a first step, the interior and exterior orientation of the camera, delivered as ancillary data from the positioning and attitude systems, were extracted. A dense image matching, using nearly all pixels as kernel centers, provided the parallaxes. The quality of the stereo tie points was controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection leads to points in object space, which are subsequently interpolated to a DSM on a regular grid. DEM filtering methods were also applied, and evaluations were carried out differentiating between accuracies in forest and in other areas. Additionally, orthoimages were generated from the images of the two stereo looking directions. The orthoimage and DSM accuracy was determined by using GCPs and available reference DEMs of superior accuracy (DEMs derived from laser data and/or classical airborne photogrammetry). As expected, the results obtained without using GCPs showed a bias on the order of 5-20 m relative to the reference data for all three coordinates. By image matching it could be shown that the two independently derived orthoimages exhibit a very constant shift behavior. In a second step, a few GCPs (3-4) were used to calculate boresight alignment angles, which were introduced into the direct georeferencing process of each image independently. 
This method improved the absolute accuracy of the resulting orthoimages and DSM significantly.
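    The forward intersection step, in which corresponding rays from the two stereo looking directions are intersected in object space, can be sketched as a small least-squares problem (an illustrative two-ray version, not the DLR production code):

    ```python
    import numpy as np

    def forward_intersection(c1, d1, c2, d2):
        """Least-squares intersection of two viewing rays, each given by a
        projection centre c and a direction d: returns the object point
        minimising the summed squared distance to both rays."""
        A = np.zeros((3, 3)); b = np.zeros(3)
        for c, d in ((c1, d1), (c2, d2)):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
            A += P; b += P @ c
        return np.linalg.solve(A, b)

    # Two rays constructed to meet exactly at (10, 20, 5)
    p = np.array([10.0, 20.0, 5.0])
    c1 = np.array([0.0, 0.0, 0.0]); c2 = np.array([30.0, 0.0, 0.0])
    X = forward_intersection(c1, p - c1, c2, p - c2)
    ```

    With noisy, skew rays the same solve returns the midpoint-like minimiser instead of an exact intersection.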

  13. Phoenix Laser Beam in Action on Mars

    NASA Image and Video Library

    2008-09-30

    The Surface Stereo Imager camera aboard NASA's Phoenix Mars Lander acquired a series of images of the laser beam in the Martian night sky. Bright spots in the beam are reflections from ice crystals in the low-level ice fog.

  14. Image-Guided Intraoperative Cortical Deformation Recovery Using Game Theory: Application to Neocortical Epilepsy Surgery

    PubMed Central

    DeLorenzo, Christine; Papademetris, Xenophon; Staib, Lawrence H.; Vives, Kenneth P.; Spencer, Dennis D.; Duncan, James S.

    2010-01-01

    During neurosurgery, nonrigid brain deformation prevents preoperatively-acquired images from accurately depicting the intraoperative brain. Stereo vision systems can be used to track intraoperative cortical surface deformation and update preoperative brain images in conjunction with a biomechanical model. However, these stereo systems are often plagued with calibration error, which can corrupt the deformation estimation. In order to decouple the effects of camera calibration from the surface deformation estimation, a framework that can solve for disparate and often competing variables is needed. Game theory, which was developed to handle decision making in this type of competitive environment, has been applied to various fields from economics to biology. In this paper, game theory is applied to cortical surface tracking during neocortical epilepsy surgery and used to infer information about the physical processes of brain surface deformation and image acquisition. The method is successfully applied to eight in vivo cases, resulting in an 81% decrease in mean surface displacement error. This includes a case in which some of the initial camera calibration parameters had errors of 70%. Additionally, the advantages of using a game theoretic approach in neocortical epilepsy surgery are clearly demonstrated in its robustness to initial conditions. PMID:20129844

  15. Visualization of the 3-D topography of the optic nerve head through a passive stereo vision model

    NASA Astrophysics Data System (ADS)

    Ramirez, Juan M.; Mitra, Sunanda; Morales, Jose

    1999-01-01

    This paper describes a system for surface recovery and visualization of the 3D topography of the optic nerve head, to support early diagnosis and follow-up of glaucoma. In stereo vision, depth information is obtained by triangulation of corresponding points in a pair of stereo images. In this paper, the use of the cepstrum transformation as a disparity measurement technique between corresponding windows of different block sizes is described. This measurement process is embedded within a coarse-to-fine depth-from-stereo algorithm, providing an initial range map with the depth information encoded as gray levels. These sparse depth data are processed through a cubic B-spline interpolation technique in order to obtain a smoother representation. This methodology is being specially refined for use with medical images in the clinical evaluation of eye diseases such as open-angle glaucoma, and is currently under testing for clinical evaluation and analysis of reproducibility and accuracy.
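    The cepstral disparity measurement can be illustrated in one dimension: concatenating a window pair turns the shift between them into an echo, whose lag appears as a peak in the power cepstrum (a minimal sketch of the principle with an assumed search range, not the authors' coarse-to-fine implementation):

    ```python
    import numpy as np

    def cepstral_disparity(left_win, right_win):
        """Estimate disparity between two 1-D windows via the power
        cepstrum: concatenating the windows creates an 'echo' at lag
        (window length + disparity), which shows up as a cepstral peak."""
        w = len(left_win)
        g = np.concatenate([left_win, right_win])
        spec = np.abs(np.fft.fft(g, n=4 * w)) ** 2
        ceps = np.abs(np.fft.ifft(np.log(spec + 1e-12)))
        lo, hi = w - w // 4, w + w // 4      # search near quefrency w
        return int(np.argmax(ceps[lo:hi])) + lo - w

    rng = np.random.default_rng(2)
    signal = rng.normal(size=256)
    d = 5                                # true disparity in samples
    left = signal[64:128]                # 64-sample left window
    right = signal[64 - d:128 - d]       # same content shifted by d
    est = cepstral_disparity(left, right)
    ```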

  16. False Color Terrain Model of Phoenix Workspace

    NASA Image and Video Library

    2008-05-28

    This is a terrain model of the workspace of the Phoenix Robotic Arm, color-coded by depth, with a lander model added for context. The model was derived from stereo images taken by Phoenix's Surface Stereo Imager (SSI).

  17. Phoenix Checks out its Work Area

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site] Click on image for animation

    This animation shows a mosaic of images of the workspace reachable by the scoop on the robotic arm of NASA's Phoenix Mars Lander, along with some measurements of rock sizes.

    Phoenix was able to determine the size of the rocks based on three-dimensional views from stereoscopic images taken by the lander's 7-foot mast camera, called the Surface Stereo Imager. The stereo pair of images enables depth perception, much the way a pair of human eyes enables people to gauge the distance to nearby objects.
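    The depth measurement underlying such stereo sizing is plain triangulation: for a rectified pair, depth is focal length times baseline divided by disparity. A sketch with hypothetical numbers (not actual SSI parameters):

    ```python
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Triangulated depth for a rectified stereo pair: Z = f * B / d,
        with focal length f in pixels, eye separation B in metres, and
        disparity d in pixels."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    # Hypothetical numbers (not the real camera): f = 1000 px,
    # stereo eye separation B = 0.15 m, measured disparity 30 px
    z = depth_from_disparity(1000.0, 0.15, 30.0)
    ```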

    The rock measurements were made with a visualization tool known as Viz, developed at NASA's Ames Research Center. The shadow cast by the camera on the Martian surface appears somewhat disjointed because the camera took the images in the mosaic at different times of day.

    Scientists do not yet know the origin or composition of the flat, light-colored rocks on the surface in front of the lander.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  18. CMOS detectors: lessons learned during the STC stereo channel preflight calibration

    NASA Astrophysics Data System (ADS)

    Simioni, E.; De Sio, A.; Da Deppo, V.; Naletto, G.; Cremonese, G.

    2017-09-01

    The Stereo Camera (STC), mounted on board the BepiColombo spacecraft, will acquire the entire surface of Mercury in push-frame stereo mode. STC will provide the images for the global three-dimensional reconstruction of the surface of the innermost planet of the Solar System. The launch of BepiColombo is foreseen in 2018. STC has an innovative optical configuration, which provides good optical performance with a factor-of-two reduction in mass and volume with respect to the classical stereo camera approach. In this telescope, two optical paths, inclined at ±20° with respect to the nadir direction, are merged into a single off-axis path and focused on a single detector. The focal plane is equipped with a 2k x 2k hybrid Si-PIN detector, based on CMOS technology, combining low read-out noise, high radiation hardness, compactness, absence of parasitic light, snapshot image acquisition with short exposure times (less than 1 ms), and a small pixel size (10 μm). During the preflight calibration campaign of STC, some spurious detector effects were noticed. Analyzing the images taken during the calibration phase, two different signals affecting the background level were measured. These signals can reduce the usable detector dynamic range by up to a factor of four, and they are not due to dark current, stray light, or similar effects. In this work we describe the characteristics of these unwanted effects and the calibration procedures we developed to analyze them.

  19. Constraint-based stereo matching

    NASA Technical Reports Server (NTRS)

    Kuan, D. T.

    1987-01-01

    The major difficulty in stereo vision is the correspondence problem that requires matching features in two stereo images. Researchers describe a constraint-based stereo matching technique using local geometric constraints among edge segments to limit the search space and to resolve matching ambiguity. Edge segments are used as image features for stereo matching. Epipolar constraint and individual edge properties are used to determine possible initial matches between edge segments in a stereo image pair. Local edge geometric attributes such as continuity, junction structure, and edge neighborhood relations are used as constraints to guide the stereo matching process. The result is a locally consistent set of edge segment correspondences between stereo images. These locally consistent matches are used to generate higher-level hypotheses on extended edge segments and junctions to form more global contexts to achieve global consistency.
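    The first stage, epipolar-constrained candidate generation with edge-attribute filtering, can be sketched as follows (hypothetical edge tuples of (x, y, orientation); the thresholds are illustrative, not from the paper):

    ```python
    def initial_matches(left_edges, right_edges, y_tol=1.0, max_disp=40.0, ori_tol=0.2):
        """First-stage candidate generation: the epipolar constraint
        (nearly the same scanline), a bounded positive disparity, and
        similar edge orientation."""
        cands = []
        for i, (xl, yl, ol) in enumerate(left_edges):
            for j, (xr, yr, orr) in enumerate(right_edges):
                if abs(yl - yr) > y_tol:        # epipolar constraint
                    continue
                d = xl - xr
                if not (0 <= d <= max_disp):    # disparity bound
                    continue
                if abs(ol - orr) > ori_tol:     # edge-attribute similarity
                    continue
                cands.append((i, j, d))
        return cands

    left  = [(50.0, 10.0, 1.50), (80.0, 10.0, 0.10)]
    right = [(45.0, 10.2, 1.52), (79.0, 10.1, 1.40)]
    pairs = initial_matches(left, right)   # only the first pair survives
    ```

    The surviving candidates would then be pruned by the local geometric constraints (continuity, junction structure, neighborhood relations) described in the abstract.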

  20. Using digital photogrammetry to constrain the segmentation of Paleocene volcanic marker horizons within the Nuussuaq basin

    NASA Astrophysics Data System (ADS)

    Vest Sørensen, Erik; Pedersen, Asger Ken

    2017-04-01

    Digital photogrammetry is used to map important volcanic marker horizons within the Nuussuaq Basin, West Greenland. We use a combination of oblique stereo images acquired from a helicopter using handheld cameras and traditional aerial photographs. The oblique imagery consists of scanned stereo photographs acquired with analogue cameras in the 1990s and newer digital images acquired with high-resolution digital consumer cameras. The photogrammetric software packages SOCET SET and 3D Stereo Blend are used to control the seamless movement between stereo models at different scales and viewing angles, and the mapping is done stereoscopically using 3D monitors and human stereopsis. The approach allows us to map in three dimensions three characteristic marker horizons (the Tunoqqu, Kûgánguaq and Qordlortorssuaq Members) within the picritic Vaigat Formation. These members formed toward the end of the same volcanic episode and are believed to be closely related in time. Together they formed an approximately coherent sub-horizontal surface, the Tunoqqu Surface, which at the time of formation covered more than 3100 km2 on Disko and Nuussuaq. Our mapping shows that the Tunoqqu Surface is now segmented into areas of different elevation and structural trend as a result of later tectonic deformation. This is most notable on Nuussuaq, where the western part is elevated and in places highly faulted. In western Nuussuaq the surface has been uplifted and faulted so that it now forms an asymmetric anticline. The flanks of the anticline coincide with two N-S oriented pre-Tunoqqu extensional faults. The deformation of the Tunoqqu Surface could be explained by inversion of older extensional faults under an overall E-W directed compressive regime in the late Paleocene.

  1. Phoenix La Mancha Trench in 3-D

    NASA Image and Video Library

    2008-10-09

    This anaglyph was made from images taken by the Surface Stereo Imager on NASA's Phoenix Mars Lander on Oct. 7, 2008. The anaglyph highlights the depth of the trench, informally named La Mancha, and reveals the ice layer beneath the soil surface. 3D glasses are necessary to see the effect.

  2. An efficient photogrammetric stereo matching method for high-resolution images

    NASA Astrophysics Data System (ADS)

    Li, Yingsong; Zheng, Shunyi; Wang, Xiaonan; Ma, Hao

    2016-12-01

    Stereo matching of high-resolution images is a great challenge in photogrammetry. The main difficulty is the enormous processing workload, which involves substantial computing time and memory consumption. In recent years, the semi-global matching (SGM) method has been a promising approach for solving stereo problems on different data sets. However, the time complexity and memory demand of SGM are proportional to the scale of the images involved, which leads to very high consumption when dealing with large images. To address this, this paper presents an efficient hierarchical matching strategy based on the SGM algorithm using single-instruction-multiple-data (SIMD) instructions and structured parallelism on the central processing unit. The proposed method can significantly reduce the computational time and memory required for large-scale stereo matching. The three-dimensional (3D) surface is reconstructed by triangulating and fusing redundant reconstruction information from multi-view matching results. Finally, three high-resolution aerial data sets are used to evaluate our improvement, and precise airborne laser scanner data for one of the data sets is used to measure the accuracy of our reconstruction. Experimental results demonstrate that our method achieves remarkable time and memory savings while maintaining the density and precision of the derived 3D point cloud.
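    The core of SGM is a per-path cost recurrence; a minimal single-path sketch in NumPy (illustrative only; the paper's contribution is the hierarchical, SIMD-parallel organization around this recurrence, which is not reproduced here):

    ```python
    import numpy as np

    def sgm_aggregate_lr(cost, p1=1.0, p2=4.0):
        """One SGM aggregation path (left-to-right along a scanline).
        cost has shape (W, D): matching cost per pixel and disparity.
        L(p, d) = C(p, d) + min(L(p-1, d), L(p-1, d+/-1) + P1,
                                min_d' L(p-1, d') + P2) - min_d' L(p-1, d').
        A full SGM sums 8 or 16 such paths."""
        w, d = cost.shape
        agg = np.zeros_like(cost, dtype=float)
        agg[0] = cost[0]
        for x in range(1, w):
            prev = agg[x - 1]
            m = prev.min()
            up = np.concatenate(([np.inf], prev[:-1])) + p1   # from d-1
            dn = np.concatenate((prev[1:], [np.inf])) + p1    # from d+1
            agg[x] = cost[x] + np.minimum.reduce(
                [prev, up, dn, np.full(d, m + p2)]) - m
        return agg

    # Toy cost volume: true disparity 2 everywhere, one noisy pixel whose
    # raw minimum wrongly favours disparity 0
    W, D = 6, 5
    cost = np.full((W, D), 10.0)
    cost[:, 2] = 1.0
    cost[3] = [7.0, 10.0, 10.0, 10.0, 10.0]
    agg = sgm_aggregate_lr(cost)
    disp = agg.argmin(axis=1)   # aggregation recovers disparity 2 at pixel 3
    ```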

  3. Solar Power Grid Unfurled

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Shown here is one of the first images taken by NASA's Phoenix Mars Lander of one of the octagonal solar panels, which opened like two handheld, collapsible fans on either side of the spacecraft. Beyond this view is a small slice of the north polar terrain of Mars.

    The successfully deployed solar panels are critical to the success of the 90-day mission, as they are the spacecraft's only means of replenishing its power. Even before these images reached Earth, power readings from the spacecraft indicated to engineers that the solar panels were already at work recharging the spacecraft's batteries.

    Before deploying the Surface Stereo Imager to take these images, the lander waited about 15 minutes for the dust to settle.

    This image was taken by the spacecraft's Surface Stereo Imager on Sol, or Martian day, 0 (May 25, 2008).

    This image has been geometrically corrected.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  4. Applied algorithm in the liner inspection of solid rocket motors

    NASA Astrophysics Data System (ADS)

    Hoffmann, Luiz Felipe Simões; Bizarria, Francisco Carlos Parquet; Bizarria, José Walter Parquet

    2018-03-01

    In rocket motors, the bonding between the solid propellant and the thermal insulation is accomplished by a thin adhesive layer known as the liner. The liner application method involves a complex sequence of tasks, which includes, in its final stage, inspection of the surface integrity. Nowadays in Brazil, an expert carries out a thorough visual inspection to detect defects on the liner surface that may compromise the propellant interface bonding. This paper therefore proposes an algorithm that uses the photometric stereo technique and the K-nearest neighbor (KNN) classifier to assist the expert in the surface inspection. Photometric stereo allows recovery of surface information from the test images, while the KNN method enables classification of image pixels into two classes: non-defect and defect. Tests performed on a computer-vision-based prototype validate the algorithm. The positive results suggest that the algorithm is feasible and, when implemented in a real scenario, will be able to help the expert in detecting defective areas on the liner surface.
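    The photometric stereo step rests on the Lambertian model I = ρ L·n; with at least three known, non-coplanar lighting directions, the scaled normal follows from a least-squares solve. A minimal per-pixel sketch on synthetic data (the lights and values are invented, not the paper's inspection setup):

    ```python
    import numpy as np

    def photometric_stereo(intensities, light_dirs):
        """Classic Lambertian photometric stereo for one pixel: solve
        L g = I in least squares, where rows of L are the lighting
        directions; the normal is g/|g| and the albedo is |g|."""
        L = np.asarray(light_dirs, dtype=float)   # (k, 3) lighting matrix
        I = np.asarray(intensities, dtype=float)  # (k,) observed intensities
        g, *_ = np.linalg.lstsq(L, I, rcond=None)
        albedo = float(np.linalg.norm(g))
        return g / albedo, albedo

    # Synthetic check with three invented lights and a known normal/albedo
    n_true = np.array([0.3, 0.0, 1.0]); n_true /= np.linalg.norm(n_true)
    rho = 0.8
    lights = np.array([[0.0, 0.0, 1.0],
                       [0.7, 0.0, 0.714],
                       [0.0, 0.7, 0.714]])
    meas = rho * lights @ n_true              # ideal Lambertian intensities
    n_est, rho_est = photometric_stereo(meas, lights)
    ```

    Running the solve per pixel over the full image yields the normal map that the KNN classifier then labels as defect or non-defect.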

  5. Stereo using monocular cues within the tensor voting framework.

    PubMed

    Mordohai, Philippos; Medioni, Gérard

    2006-06-01

    We address the fundamental problem of matching in two static images. The remaining challenges are related to occlusion and lack of texture. Our approach addresses these difficulties within a perceptual organization framework, considering both binocular and monocular cues. Initially, matching candidates for all pixels are generated by a combination of matching techniques. The matching candidates are then embedded in disparity space, where perceptual organization takes place in 3D neighborhoods and, thus, does not suffer from problems associated with scanline or image neighborhoods. The assumption is that correct matches produce salient, coherent surfaces, while wrong ones do not. Matching candidates that are consistent with the surfaces are kept and grouped into smooth layers. Thus, we achieve surface segmentation based on geometric and not photometric properties. Surface overextensions, which are due to occlusion, can be corrected by removing matches whose projections are not consistent in color with their neighbors of the same surface in both images. Finally, the projections of the refined surfaces on both images are used to obtain disparity hypotheses for unmatched pixels. The final disparities are selected after a second tensor voting stage, during which information is propagated from more reliable pixels to less reliable ones. We present results on widely used benchmark stereo pairs.

  6. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to inaccurate imaging models and incomplete distortion elimination. The proposed calibration method compensates for system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen through reflection off a markerless flat mirror. An iterative algorithm is proposed to compensate for system distortion and to optimize the camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak-to-valley) measurement error for a flat mirror is reduced from 282 nm, obtained with the conventional calibration approach, to 69.7 nm with the proposed method.
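    Iterative distortion compensation in its simplest form can be illustrated on a plain radial lens model: the forward distortion is easy to evaluate, and its inverse is obtained by fixed-point iteration rather than an explicit inverse model (a generic sketch under an assumed two-coefficient model, not the holistic method of the paper):

    ```python
    def undistort_iterative(xd, yd, k1, k2, n_iter=20):
        """Invert the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4) by
        fixed-point iteration: repeatedly divide the distorted coordinates
        by the distortion factor evaluated at the current estimate."""
        xu, yu = xd, yd
        for _ in range(n_iter):
            r2 = xu * xu + yu * yu
            f = 1.0 + k1 * r2 + k2 * r2 * r2
            xu, yu = xd / f, yd / f
        return xu, yu

    # Round trip: distort a point with assumed coefficients, then recover it
    k1, k2 = -0.2, 0.05
    xu0, yu0 = 0.3, -0.4
    r2 = xu0 ** 2 + yu0 ** 2
    f = 1.0 + k1 * r2 + k2 * r2 ** 2
    xd, yd = xu0 * f, yu0 * f
    xu, yu = undistort_iterative(xd, yd, k1, k2)
    ```

    For mild distortion the iteration is a contraction and converges in a handful of steps.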

  7. Photometric stereo endoscopy.

    PubMed

    Parot, Vicente; Lim, Daryl; González, Germán; Traverso, Giovanni; Nishioka, Norman S; Vakoc, Benjamin J; Durr, Nicholas J

    2013-07-01

    While color video endoscopy has enabled wide-field examination of the gastrointestinal tract, it often misses or incorrectly classifies lesions. Many of these missed lesions exhibit characteristic three-dimensional surface topographies. An endoscopic system that adds topographical measurements to conventional color imagery could therefore increase lesion detection and improve classification accuracy. We introduce photometric stereo endoscopy (PSE), a technique which allows high spatial frequency components of surface topography to be acquired simultaneously with conventional two-dimensional color imagery. We implement this technique in an endoscopic form factor and demonstrate that it can acquire the topography of small features with complex geometries and heterogeneous optical properties. PSE imaging of ex vivo human gastrointestinal tissue shows that surface topography measurements enable differentiation of abnormal shapes from surrounding normal tissue. Together, these results confirm that the topographical measurements can be obtained with relatively simple hardware in an endoscopic form factor, and suggest the potential of PSE to improve lesion detection and classification in gastrointestinal imaging.
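    PSE builds on classical Lambertian photometric stereo, in which per-pixel surface normals and albedo are recovered by least squares from images taken under known light directions. A minimal sketch of that core computation (not the endoscopic implementation itself):

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classical Lambertian photometric stereo: I = albedo * (L @ n).
    images: (k, h, w) intensities, k >= 3; lights: (k, 3) unit light directions.
    Solves per pixel for g = albedo * n, then splits magnitude and direction.
    Returns (normals (h, w, 3), albedo (h, w))."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                          # (k, h*w)
    g, *_ = np.linalg.lstsq(lights, I, rcond=None)     # (3, h*w) least-squares solve
    albedo = np.linalg.norm(g, axis=0)
    n = g / np.maximum(albedo, 1e-12)                  # unit normals
    return n.T.reshape(h, w, 3), albedo.reshape(h, w)
```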

  8. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    NASA Astrophysics Data System (ADS)

    Dall'Asta, E.; Roncella, R.

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper focuses on the comparison of several stereo matching algorithms (local and global) that are very popular in both photogrammetry and computer vision. In particular, Semi-Global Matching (SGM), which performs pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using open-source software such as MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
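    The heart of SGM is its path-wise cost aggregation. A minimal sketch of the standard recurrence along a single (left-to-right) path follows; practical implementations sum this over 8 or 16 path directions, and the penalty values `P1` and `P2` here are illustrative.

```python
import numpy as np

def aggregate_lr(cost, P1=1.0, P2=8.0):
    """SGM cost aggregation along the left-to-right path.
    cost: (h, w, D) pixelwise matching cost volume. Implements
    L(p,d) = C(p,d) + min(L(p-1,d), L(p-1,d-1)+P1, L(p-1,d+1)+P1,
                          min_d' L(p-1,d') + P2) - min_d' L(p-1,d')."""
    h, w, D = cost.shape
    L = cost.astype(float).copy()
    for x in range(1, w):
        prev = L[:, x - 1, :]                    # (h, D)
        m = prev.min(axis=1, keepdims=True)      # min over all disparities
        # costs of the d-1 / d+1 neighbours, padded with inf at the ends
        shift_m = np.pad(prev, ((0, 0), (1, 0)), constant_values=np.inf)[:, :D] + P1
        shift_p = np.pad(prev, ((0, 0), (0, 1)), constant_values=np.inf)[:, 1:] + P1
        L[:, x, :] = cost[:, x, :] + np.minimum(
            np.minimum(prev, m + P2),
            np.minimum(shift_m, shift_p)) - m
    return L
```

The aggregation smooths over isolated matching errors: a single outlier pixel whose raw cost favours a wrong disparity is overruled by the path costs of its neighbours.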

  9. Enhancement of Stereo Imagery by Artificial Texture Projection Generated Using a LIDAR

    NASA Astrophysics Data System (ADS)

    Veitch-Michaelis, Joshua; Muller, Jan-Peter; Walton, David; Storey, Jonathan; Foster, Michael; Crutchley, Benjamin

    2016-06-01

    Passive stereo imaging is capable of producing dense 3D data, but image matching algorithms generally perform poorly on images with large regions of homogeneous texture due to ambiguous match costs. Stereo systems can be augmented with an additional light source that projects some form of unique texture onto surfaces in the scene. Methods include structured light, laser projection through diffractive optical elements, data projectors and laser speckle. Pattern projection using lasers has the advantage of producing images with a high signal-to-noise ratio. We have investigated the use of a scanning visible-beam LIDAR to simultaneously provide enhanced texture within the scene and additional opportunities for data fusion in unmatched regions. Using a LIDAR rather than a laser alone allows us to generate highly accurate ground-truth datasets by scanning the scene at high resolution, which is necessary for evaluating different pattern projection schemes. Results from LIDAR-generated random dots are presented and compared to other texture projection techniques. Finally, we investigate the use of image texture analysis to intelligently project texture where it is required while exploiting the texture available in the ambient-light image.

  10. Local Surface Reconstruction from MER images using Stereo Workstation

    NASA Astrophysics Data System (ADS)

    Shin, Dongjoe; Muller, Jan-Peter

    2010-05-01

    The authors present a semi-automatic workflow that reconstructs the 3D shape of the Martian surface from local stereo images delivered by PanCam or NavCam on systems such as the NASA Mars Exploration Rover (MER) mission and, in the future, the PanCam of the ESA-NASA ExoMars rover. The process is initiated with manually selected tiepoints on a stereo workstation, followed by tiepoint refinement, stereo matching using region growing, and Levenberg-Marquardt Algorithm (LMA)-based bundle adjustment. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphics hardware independence, the stereo application has been implemented using JPL's JADIS graphics library, which is written in Java, and the remaining processing blocks used in the reconstruction workflow have also been developed as a Java package to increase code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, it is often necessary to employ an optional validity check and/or quality-enhancing process. To meet this requirement, the workflow includes a tiepoint refinement process based on the Adaptive Least Squares Correlation (ALSC) matching algorithm, so that the initial tiepoints can be refined to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from accuracy, the other criterion for assessing the quality of reconstruction is its density (or completeness), which is not addressed by the refinement process. 
Thus, we re-implemented a stereo region growing process, which is a core matching algorithm within the UCL-HRSC reconstruction workflow. This algorithm performs reasonably even for close-range imagery, so long as the stereo pair does not have too large a baseline displacement. For post-processing, a Bundle Adjustment (BA) is used to optimise the initial calibration parameters, which bootstrap the reconstruction results. Amongst the many options for the non-linear optimisation, the LMA has been adopted for its stability, so that the BA searches for the best calibration parameters whilst iteratively minimising the re-projection errors of the initial reconstruction points. To evaluate the proposed method, its result is compared with the reconstruction from a disparity map provided by JPL using their operational processing system. Visual and quantitative comparisons will be presented as well as updated camera parameters. As part of future work, we will investigate ways to expedite the stereo region growing process and look into extending the use of the stereo workstation to orbital image processing. Such an interactive stereo workstation can also be used to digitize points and line features, and to assess the accuracy of stereo-processed results produced by other stereo matching algorithms available from within the consortium and elsewhere. It can also provide "ground truth", when suitably refined, for stereo matching algorithms, as well as visual cues as to why these matching algorithms sometimes fail, to help mitigate such failures in the future. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 218814 "PRoVisG".
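    The ALSC refinement step iteratively solves a least-squares patch alignment; a simpler stand-in that conveys the idea of refining a tiepoint to sub-pixel precision is a normalized cross-correlation search with parabolic interpolation of the correlation peak. A sketch under that assumption (names and parameters are illustrative, and only the horizontal coordinate is refined):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation score of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def refine_tiepoint(left, right, x, y, x0, win=5, search=3):
    """Refine the horizontal position of a tiepoint: around the initial guess
    x0 in `right`, score NCC against the patch at (x, y) in `left`, then fit
    a parabola through the best score and its neighbours for sub-pixel
    precision. Returns the refined column in `right`."""
    r = win // 2
    tpl = left[y - r:y + r + 1, x - r:x + r + 1]
    offs = list(range(-search, search + 1))
    scores = [ncc(tpl, right[y - r:y + r + 1, x0 + o - r:x0 + o + r + 1])
              for o in offs]
    i = int(np.argmax(scores))
    best = float(x0 + offs[i])
    if 0 < i < len(scores) - 1:          # parabolic interpolation of the peak
        s0, s1, s2 = scores[i - 1], scores[i], scores[i + 1]
        denom = s0 - 2 * s1 + s2
        if denom != 0:
            best += 0.5 * (s0 - s2) / denom
    return best
```

In the workflow described above, a candidate would instead be rejected when its peak correlation falls below a matching threshold.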

  11. Solar Power Grid

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Shown here is one of the first images taken by NASA's Phoenix Mars Lander of one of the octagonal solar panels, which opened like two handheld, collapsible fans on either side of the spacecraft. Beyond this view is a small slice of the north polar terrain of Mars.

    The successfully deployed solar panels are critical to the success of the 90-day mission, as they are the spacecraft's only means of replenishing its power. Even before these images reached Earth, power readings from the spacecraft indicated to engineers that the solar panels were already at work recharging the spacecraft's batteries.

    Before deploying the Surface Stereo Imager to take these images, the lander waited about 15 minutes for the dust to settle.

    This image was taken by the spacecraft's Surface Stereo Imager on Sol, or Martian day, 0 (May 25, 2008).

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  12. A search for Ganymede stereo images and 3D mapping opportunities

    NASA Astrophysics Data System (ADS)

    Zubarev, A.; Nadezhdina, I.; Brusnikin, E.; Giese, B.; Oberst, J.

    2017-10-01

    We used 126 Voyager-1 and -2 as well as 87 Galileo images of Ganymede and searched for stereo images suitable for digital 3D stereo analysis. Specifically, we considered image resolutions, stereo angles, and matching illumination conditions of the respective stereo pairs. Lists of regions and local areas with stereo coverage are compiled. We present anaglyphs, and for selected areas not previously discussed we constructed Digital Elevation Models and associated visualizations. The terrain characteristics in the models are in agreement with our previous notion of Ganymede morphology, represented by families of lineaments and craters of various sizes and degradation stages. The identified areas of stereo coverage may serve as important reference targets for the Ganymede Laser Altimeter (GALA) experiment on the future JUICE (Jupiter Icy Moons Explorer) mission.

  13. Precision Relative Positioning for Automated Aerial Refueling from a Stereo Imaging System

    DTIC Science & Technology

    2015-03-01

    Precision Relative Positioning for Automated Aerial Refueling from a Stereo Imaging System. Thesis by Kyle P. Werner, 2Lt, USAF (AFIT-ENG-MS-15-M-048), presented to the Faculty, Department of Electrical and Computer Engineering, Graduate School of... Approved for public release; distribution unlimited.

  14. Epipolar Rectification for CARTOSAT-1 Stereo Images Using SIFT and RANSAC

    NASA Astrophysics Data System (ADS)

    Akilan, A.; Sudheer Reddy, D.; Nagasubramanian, V.; Radhadevi, P. V.; Varadan, G.

    2014-11-01

    Cartosat-1 provides stereo images with a spatial resolution of 2.5 m and high geometric fidelity. The stereo camera on the spacecraft has look angles of +26 degrees and -5 degrees respectively, which yields effective along-track stereo. Any DSM generation algorithm can use the stereo images for accurate 3D reconstruction and measurement of the ground. Dense match points and pixel-wise matching are prerequisites in DSM generation to capture discontinuities and occlusions for accurate 3D modelling applications. Epipolar image matching reduces the computational effort from a two-dimensional area search to a one-dimensional search. Thus, epipolar rectification is preferred as a pre-processing step for accurate DSM generation. In this paper we explore a method based on SIFT and RANSAC for epipolar rectification of Cartosat-1 stereo images.
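    The SIFT matching and RANSAC loop are not reproduced here; at the core of epipolar rectification is the estimation of the fundamental matrix from point correspondences. A minimal sketch of the normalized 8-point algorithm, assuming clean (inlier) correspondences; in practice RANSAC would wrap this estimator to reject SIFT mismatches.

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: translate points to their centroid and scale the
    mean distance to sqrt(2). Returns homogeneous points and the transform T."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ T.T
    return ph, T

def eight_point(p1, p2):
    """Normalized 8-point estimate of the fundamental matrix F such that
    x2^T F x1 = 0, from N >= 8 correspondences p1, p2 of shape (N, 2)."""
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    A = np.column_stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
                         x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
                         x1[:, 0], x1[:, 1], np.ones(len(x1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    U, S, Vt = np.linalg.svd(F)                 # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                           # undo the normalization
    return F / np.linalg.norm(F)
```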

  15. Opportunity's Surroundings on Sol 1798 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11850 [figures removed for brevity, see original site]

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo 180-degree view of the rover's surroundings during the 1,798th Martian day, or sol, of Opportunity's surface mission (Feb. 13, 2009). North is on top.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 111 meters (364 feet) southward on the preceding sol. Tracks from that drive recede northward in this view. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  16. WASS: an open-source stereo processing pipeline for sea waves 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Bergamasco, Filippo; Benetazzo, Alvise; Torsello, Andrea; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro

    2017-04-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community. In fact, recent advances in both computer vision algorithms and CPU processing power now allow the study of spatio-temporal wave fields with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master in practice, so that the implementation of a 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the steps from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS, a completely Open-Source stereo processing pipeline for sea waves 3D reconstruction, available at http://www.dais.unive.it/wass/. Our tool completely automates the recovery of dense point clouds from stereo images by providing three main functionalities. First, WASS can automatically recover the extrinsic parameters of the stereo rig (up to scale), so that no delicate calibration has to be performed in the field. Second, WASS implements a fast 3D dense stereo reconstruction procedure so that an accurate 3D point cloud can be computed from each stereo pair. We rely on the well-consolidated OpenCV library for both the image stereo rectification and the disparity map recovery. Lastly, a set of 2D and 3D filtering techniques, applied both to the disparity map and to the produced point cloud, is implemented to remove the vast majority of erroneous points that naturally arise while analyzing the optically complex nature of the water surface (examples are sun glare, large white-capped areas, fog and water aerosol, etc.). 
Developed to be as fast as possible, WASS can process roughly four 5-MPixel stereo frames per minute (on a consumer i7 CPU) to produce a sequence of outlier-free point clouds with more than 3 million points each. Finally, it comes with an easy-to-use user interface and is designed to scale across multiple parallel CPUs.
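    One of the named steps, mean sea-plane estimation, amounts to a total-least-squares plane fit to the reconstructed point cloud. A minimal sketch of such a fit (not WASS's actual implementation):

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane through a 3D point cloud (N, 3).
    The plane passes through the centroid; its unit normal is the right
    singular vector of the centred points with the smallest singular value."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c, full_matrices=False)
    n = Vt[-1]
    return n / np.linalg.norm(n), c
```

In a wave-reconstruction setting, point elevations relative to the fitted mean plane give the sea-surface displacement field.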

  17. Quantitative surface topography assessment of directly compressed and roller compacted tablet cores using photometric stereo image analysis.

    PubMed

    Allesø, Morten; Holm, Per; Carstensen, Jens Michael; Holm, René

    2016-05-25

    Surface topography, in the context of surface smoothness/roughness, was investigated using an image analysis technique, MultiRay™, related to photometric stereo, on different tablet batches manufactured either by direct compression or roller compaction. In the present study, oblique illumination of the tablet (darkfield) was considered, and the area of cracks and pores in the surface was used as a measure of tablet surface topography; the higher the value, the rougher the surface. The investigations demonstrated a high precision of the proposed technique, which was able to rapidly (within milliseconds) and quantitatively measure the surface topography of the produced tablets. Compaction history, in the form of applied roll force and tablet punch pressure, was also reflected in the measured smoothness of the tablet surfaces. Generally, it was found that a higher degree of plastic deformation of the microcrystalline cellulose resulted in a smoother tablet surface. This altogether demonstrated that the technique provides the pharmaceutical developer with a reliable, quantitative response parameter for the visual appearance of solid dosage forms, which may be used for process and ultimately product optimization. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Synergistic surface current mapping by spaceborne stereo imaging and coastal HF radar

    NASA Astrophysics Data System (ADS)

    Matthews, John Philip; Yoshikawa, Yutaka

    2012-09-01

    Well validated optical and radar methods of surface current measurement at high spatial resolution (nominally <100 m) from space can greatly advance our ability to monitor earth's oceans, coastal zones, lakes and rivers. With interest growing in optical along-track stereo techniques for surface current and wave motion determinations, questions of how to interpret such data and how to relate them to measurements made by better validated techniques arise. Here we make the first systematic appraisal of surface currents derived from along-track stereo Sun glitter (ATSSG) imagery through comparisons with simultaneous synoptic flows observed by coastal HF radars working at frequencies of 13.9 and 24.5 MHz, which return averaged currents within surface layers of roughly 1 m and 2 m depth respectively. At our Tsushima Strait (Japan) test site, we found that these two techniques provided largely compatible surface current patterns, with the main difference apparent in current strength. Within the northwest (southern) comparison region, the magnitudes of the ATSSG current vectors derived for 13 August 2006 were on average 22% (40%) higher than the corresponding vectors for the 1-m (2-m) depth radar. These results reflect near-surface vertical current structure, differences in the flow components sensed by the two techniques and disparities in instrumental performance. The vertical profile constructed here from ATSSG, HF radar and ADCP data is the first to resolve downwind drift in the upper 2 m of the open ocean. The profile e-folding depth suggests Stokes drift from waves of 10-m wavelength visible in the images.

  19. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves

    NASA Astrophysics Data System (ADS)

    Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise

    2017-10-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master in practice, so that the implementation of a sea-waves 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an Open-Source stereo processing pipeline for sea waves 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques, applied both to the disparity map and to the produced point cloud, to remove the vast majority of erroneous points that naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step-by-step and demonstrated on real datasets acquired at sea.

  20. 3D shape recovery of smooth surfaces: dropping the fixed-viewpoint assumption.

    PubMed

    Moses, Yael; Shimshoni, Ilan

    2009-07-01

    We present a new method for recovering the 3D shape of a featureless smooth surface from three or more calibrated images illuminated by different light sources (three of them are independent). This method is unique in its ability to handle images taken from unconstrained perspective viewpoints and unconstrained illumination directions. The correspondence between such images is hard to compute and no other known method can handle this problem locally from a small number of images. Our method combines geometric and photometric information in order to recover dense correspondence between the images and accurately computes the 3D shape. Only a single pass starting at one point and local computation are used. This is in contrast to methods that use the occluding contours recovered from many images to initialize and constrain an optimization process. The output of our method can be used to initialize such processes. In the special case of fixed viewpoint, the proposed method becomes a new perspective photometric stereo algorithm. Nevertheless, the introduction of the multiview setup, self-occlusions, and regions close to the occluding boundaries are better handled, and the method is more robust to noise than photometric stereo. Experimental results are presented for simulated and real images.

  1. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    NASA Astrophysics Data System (ADS)

    Schenk, A.; Csatho, B. M.; Nagarajan, S.

    2010-12-01

    The polar regions play an important role in Earth’s climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, GPS and an inertial navigation system, as well as an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a rate of about one frame per second. While digital images and videos have been used for quite some time for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that only became available recently. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric, that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes, due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed. 
Geo-referencing of the images is performed by the Applanix navigation system. Our new method enables a 3D reconstruction of the ice sheet surface with high accuracy and unprecedented detail, as demonstrated by examples from the Antarctic Peninsula acquired by the IceBridge mission. Repeat digital imaging also provides data for determining surface elevation changes and velocities, which are critical parameters for ice sheet models. Although these methods work well, there are known problems with satellite images and traditional area-based matching, especially over rapidly changing outlet glaciers. To take full advantage of high-resolution, repeat stereo imaging, we have developed a new method. The processing starts with the generation of a DEM from geo-referenced stereo images of the first time epoch. The next step is concerned with extracting and matching interest points in object space. Since an interest point moves its spatial position between two time epochs, such points are only radiometrically conjugate but not geometrically. In fact, the geometric displacement of two identical points, together with the time difference, renders velocities. We computed the evolution of the velocity field and surface topography on the floating tongue of the Jakobshavn glacier from historical stereo aerial photographs to illustrate the approach.

  2. Prototype tactile feedback system for examination by skin touch.

    PubMed

    Lee, O; Lee, K; Oh, C; Kim, K; Kim, M

    2014-08-01

    Diagnosis of conditions such as psoriasis and atopic dermatitis, in the case of induration, involves palpating the affected area by hand and then selecting a rating score. However, the score is determined based on the tester's experience and standards, making it subjective. To provide tactile feedback on the skin, we developed a prototype tactile feedback system that simulates skin wrinkles with a PHANToM OMNI. To provide the user with tactile feedback on skin wrinkles, a visual and haptic Augmented Reality system was developed. First, a pair of stereo skin images obtained by a stereo camera generates a disparity map of skin wrinkles. Second, the generated disparity map is sent to an implemented tactile rendering algorithm that computes a reaction force according to the user's interaction with the skin image. We first obtained a stereo image of skin wrinkles from the in vivo stereo imaging system, which has a baseline of 50.8 μm, and obtained the disparity map with a graph cuts algorithm. The left image is displayed on the monitor to enable the user to recognize the location visually. The disparity map of the skin wrinkle image sends skin wrinkle information as a tactile response to the user through a haptic device. We successfully developed a tactile feedback system for virtual skin wrinkle simulation by means of a commercialized haptic device that provides the user with a single point of contact to feel the surface roughness of a virtual skin sample. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  3. The Europa Imaging System (EIS): High-Resolution, 3-D Insight into Europa's Geology, Ice Shell, and Potential for Current Activity

    NASA Astrophysics Data System (ADS)

    Turtle, E. P.; McEwen, A. S.; Collins, G. C.; Fletcher, L. N.; Hansen, C. J.; Hayes, A.; Hurford, T., Jr.; Kirk, R. L.; Barr, A.; Nimmo, F.; Patterson, G.; Quick, L. C.; Soderblom, J. M.; Thomas, N.

    2015-12-01

    The Europa Imaging System will transform our understanding of Europa through global decameter-scale coverage, three-dimensional maps, and unprecedented meter-scale imaging. EIS combines narrow-angle and wide-angle cameras (NAC and WAC) designed to address high-priority Europa science and reconnaissance goals. It will: (A) Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar; (B) Constrain formation processes of surface features and the potential for current activity by characterizing endogenic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure, and by searching for evidence of recent activity, including potential plumes; and (C) Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. The NAC provides very high-resolution, stereo reconnaissance, generating 2-km-wide swaths at 0.5-m pixel scale from 50-km altitude, and uses a gimbal to enable independent targeting. NAC observations also include: near-global (>95%) mapping of Europa at ≤50-m pixel scale (to date, only ~14% of Europa has been imaged at ≤500 m/pixel, with best pixel scale 6 m); regional and high-resolution stereo imaging at <1-m/pixel; and high-phase-angle observations for plume searches. The WAC is designed to acquire pushbroom stereo swaths along flyby ground-tracks, generating digital topographic models with 32-m spatial scale and 4-m vertical precision from 50-km altitude. These data support characterization of cross-track clutter for radar sounding. The WAC also performs pushbroom color imaging with 6 broadband filters (350-1050 nm) to map surface units and correlations with geologic features and topography. 
EIS will provide comprehensive data sets essential to fulfilling the goal of exploring Europa to investigate its habitability and perform collaborative science with other investigations, including cartographic and geologic maps, regional and high-resolution digital topography, GIS products, color and photometric data products, a geodetic control network tied to radar altimetry, and a database of plume-search observations.

  4. The leaf angle distribution of natural plant populations: assessing the canopy with a novel software tool.

    PubMed

    Müller-Linow, Mark; Pinto-Espinosa, Francisco; Scharr, Hanno; Rascher, Uwe

    2015-01-01

    Three-dimensional canopies form complex architectures with temporally and spatially changing leaf orientations. Variations in canopy structure are linked to canopy function, and they occur within the scope of genetic variability as well as in reaction to environmental factors like light, water and nutrient supply, and stress. An important key measure to characterize these structural properties is the leaf angle distribution, which in turn requires knowledge of the 3-dimensional single leaf surface. Despite a large number of 3-d sensors and methods, only a few systems are applicable for fast and routine measurements in plants and natural canopies. A suitable approach is stereo imaging, which combines depth and color information and allows for easy segmentation of green leaf material and the extraction of plant traits, such as leaf angle distribution. We developed a software package which provides tools for the quantification of leaf surface properties within natural canopies via 3-d reconstruction from stereo images. Our approach includes a semi-automatic selection process for single leaves and different modes of surface characterization via polygon smoothing or surface model fitting. Based on the resulting surface meshes, leaf angle statistics are computed on the whole-leaf level or from local derivations. We include a case study to demonstrate the functionality of our software: 48 images of small sugar beet populations (4 varieties) were analyzed on the basis of their leaf angle distributions in order to investigate seasonal, genotypic and fertilization effects. We could show that leaf angle distributions change during the course of the season, with all varieties having a comparable development. Additionally, different varieties had different leaf angle orientations, which could be separated by principal component analysis. In contrast, nitrogen treatment had no effect on leaf angles. 
We show that a stereo imaging setup together with the appropriate image processing tools is capable of retrieving the geometric leaf surface properties of plants and canopies. Our software package provides whole-leaf statistics but also a local estimation of leaf angles, which may have great potential to better understand and quantify structural canopy traits for guided breeding and optimized crop management.
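At the mesh level, the per-leaf angle statistics described above reduce to the angle between each surface triangle's normal and the vertical. A minimal sketch of that geometric step (illustrative only, not the authors' software):

```python
import numpy as np

def leaf_angle_deg(p0, p1, p2):
    """Inclination of a mesh triangle: the angle between its surface
    normal and the vertical (z) axis; 0 deg = horizontal leaf patch."""
    n = np.cross(np.subtract(p1, p0), np.subtract(p2, p0))
    n = n / np.linalg.norm(n)
    return float(np.degrees(np.arccos(abs(n[2]))))

# A horizontal patch and a vertical patch:
print(leaf_angle_deg([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # 0.0
print(leaf_angle_deg([0, 0, 0], [1, 0, 0], [0, 0, 1]))  # 90.0
```

Aggregating this quantity over all triangles of a segmented leaf mesh yields the leaf angle distribution used in the case study.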

  5. Three-dimensional digital mapping of the optic nerve head cupping in glaucoma

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Ramirez, Manuel; Morales, Jose

    1992-08-01

Visualization of the optic nerve head cupping is clinically achieved by stereoscopic viewing of a fundus image pair of the suspected eye. A novel algorithm for three-dimensional digital surface representation of the optic nerve head, using fusion of a stereo depth map with a linearly stretched intensity image of a stereo fundus image pair, is presented. Prior to depth map acquisition, a number of preprocessing tasks, including feature extraction, registration by cepstral analysis, and correction for intensity variations, are performed. The depth map is obtained by using a coarse-to-fine strategy for obtaining disparities between corresponding areas. The matching technique used to obtain the translational differences at every step relies on cepstral analysis, with a correlation-like scanning technique in the spatial domain for the finest details. The quantitative and precise representation of the optic nerve head surface topography following this algorithm is not computationally intensive and should provide more useful information than qualitative stereoscopic viewing of the fundus alone as one of the criteria for diagnosis of glaucoma.
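The translation estimation at the heart of the registration step can be illustrated with a generic FFT-based cross-correlation. This is a stand-in for the cepstral matching described above, not the authors' implementation:

```python
import numpy as np

def estimate_shift(a, b):
    """Integer translation of patch a relative to patch b, found as the
    peak of the FFT-computed circular cross-correlation."""
    r = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))
    dy, dx = np.unravel_index(np.argmax(np.abs(r)), r.shape)
    # Map wrapped (circular) shifts into a signed range.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(estimate_shift(shifted, img))  # (3, -5)
```

Applied coarse-to-fine on corresponding areas of the stereo pair, such shift estimates yield the disparities from which depth is computed.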

  6. Color View 'Dodo' and 'Baby Bear' Trenches

    NASA Technical Reports Server (NTRS)

    2008-01-01

    NASA's Phoenix Mars Lander's Surface Stereo Imager took this image on Sol 14 (June 8, 2008), the 14th Martian day after landing. It shows two trenches dug by Phoenix's Robotic Arm.

    Soil from the right trench, informally called 'Baby Bear,' was delivered to Phoenix's Thermal and Evolved-Gas Analyzer, or TEGA, on Sol 12 (June 6). The following several sols included repeated attempts to shake the screen over TEGA's oven number 4 to get fine soil particles through the screen and into the oven for analysis.

    The trench on the left is informally called 'Dodo' and was dug as a test.

    Each of the trenches is about 9 centimeters (3 inches) wide. This view is presented in approximately true color by combining separate exposures taken through different filters of the Surface Stereo Imager.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  7. Two Holes from Using Rasp in 'Snow White' (Stereo)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This view from the Surface Stereo Imager on NASA's Phoenix Mars Lander shows a portion of the trench informally named 'Snow White,' with two holes near the top of the image that were produced by the first test use of Phoenix's rasp to collect a sample of icy soil.

    The test was conducted on July 15, 2008, during the 50th Martian day, or sol, since Phoenix landed, and the image was taken later the same day. The two holes are about one centimeter (0.4 inch) apart. The image appears three-dimensional when viewed through blue-red glasses.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is led by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  8. Frost on Mars

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image shows bluish-white frost seen on the Martian surface near NASA's Phoenix Mars Lander. The image was taken by the lander's Surface Stereo Imager on the 131st Martian day, or sol, of the mission (Oct. 7, 2008). Frost is expected to continue to appear in images as fall, then winter approach Mars' northern plains.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  9. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most distinctive aspect of these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ mission data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations. (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images, such as long-baseline pairs or those from an articulating arm camera. (4) marscoordtrans: Translates mosaic coordinates from one form into another. (5) marsdispcompare: Checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other. (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right-eye image could be transformed to look like it was taken from the left eye via this program. (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface; this helps verify, or improve, the pointing of in situ cameras. (8) marsinvrange: Inverse of marsrange; given a range file, re-computes an XYZ file that closely matches the original. (9) marsproj: Projects an XYZ coordinate through the camera model, and reports the line/sample coordinates of the point in the image. (10) marsprojfid: Given the output of marsfidfinder, projects the XYZ locations and compares them to the found locations, creating a report showing the fiducial errors in each image. (11) marsrad: Radiometrically corrects an image. (12) marsrelabel: Updates coordinate system or camera model labels in an image. (13) marstiexyz: Given a stereo pair, allows the user to interactively pick a point in each image and reports the XYZ value corresponding to that pair of locations. (14) marsunmosaic: Extracts a single frame from a mosaic, which will be created such that it could have been an input to the original mosaic. Useful for creating simulated input frames using different camera models than the original mosaic used. (15) merinverter: Uses an inverse lookup table to convert 8-bit telemetered data to its 12-bit original form. Can be used in other missions despite the name.
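As an illustration of what a projection utility like marsproj does, here is a minimal CAHV-style pinhole projection. The camera vectors below are toy values chosen for the example, not a real mission calibration:

```python
import numpy as np

def cahv_project(p, C, A, H, V):
    """Project a 3-D point through a CAHV camera model, returning image
    (line, sample) coordinates: line = d.V / d.A, sample = d.H / d.A,
    where d is the vector from camera center C to the point."""
    d = np.asarray(p, dtype=float) - C
    denom = float(np.dot(d, A))
    return float(np.dot(d, V)) / denom, float(np.dot(d, H)) / denom

# Toy model: camera at the origin looking down +z, focal length 100 px,
# principal point at (line, sample) = (50, 50).
C = np.array([0.0, 0.0, 0.0])
A = np.array([0.0, 0.0, 1.0])
H = np.array([100.0, 0.0, 50.0])
V = np.array([0.0, 100.0, 50.0])
print(cahv_project([1.0, 2.0, 10.0], C, A, H, V))  # (70.0, 60.0)
```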

  10. A novel craniotomy simulation system for evaluation of stereo-pair reconstruction fidelity and tracking

    NASA Astrophysics Data System (ADS)

    Yang, Xiaochen; Clements, Logan W.; Conley, Rebekah H.; Thompson, Reid C.; Dawant, Benoit M.; Miga, Michael I.

    2016-03-01

Brain shift compensation using computer modeling strategies is an important research area in the field of image-guided neurosurgery (IGNS). One important source of sparse data available during surgery to drive these frameworks is deformation tracking of the visible cortical surface. Possible methods for measuring intra-operative cortical displacement include laser range scanners (LRS), which typically complicate the clinical workflow, and reconstruction of cortical surfaces from stereo pairs acquired with the operating microscope. In this work, we propose and demonstrate a craniotomy simulation device that produces realistic cortical displacements, designed to measure and validate the proposed intra-operative cortical shift measurement systems. The device permits 3D deformations of a mock cortical surface consisting of a membrane made of Dragon Skin® high-performance silicone rubber on which vascular patterns are drawn. We then use this device to validate our stereo-pair-based surface reconstruction system by comparing landmark positions and displacements measured with our system to those measured by a stylus tracked by a commercial optical system. Our results show a 1 mm average difference in localization error and a 1.2 mm average difference in displacement measurement. These results suggest that our stereo-pair technique is accurate enough for estimating intra-operative displacements in near real time without affecting the surgical workflow.

  11. Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.

    PubMed

    Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena

    2014-11-01

A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system is endowed with hardware- and software-synchronous image acquisition with timestamp embedding in the captured images, brain-surface-oriented feature detection, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed by tracking a customized object at different positions, orientations, and linear and angular speeds. The system is able to detect an immobile object's position and orientation with a maximum error of 0.5 mm and 1.6° over the whole depth of field, and to track a moving object at speeds up to 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domains. The amplitude of motion of the center of mass of the cloud of features was below 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis, related to the sympathovagal balance, breathing, and blood pressure at 0.03-0.05, 0.2, and 1 Hz, respectively. The stereo vision system presented is a precise and robust system for measuring brain shift and pulsatility, with an accuracy superior to other reported systems.
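The frequency-domain analysis can be sketched with a synthetic displacement trace. The 30 Hz frame rate and the component amplitudes below are assumed for illustration, not taken from the study:

```python
import numpy as np

fs = 30.0                      # assumed camera frame rate, Hz
t = np.arange(0, 60, 1 / fs)   # 60 s acquisition
# Synthetic center-of-mass displacement: breathing (0.2 Hz) + cardiac (1 Hz).
x = 0.5 * np.sin(2 * np.pi * 0.2 * t) + 0.2 * np.sin(2 * np.pi * 1.0 * t)

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
# The two strongest spectral bins should sit at the breathing and cardiac rates.
top = sorted(freqs[np.argsort(spec)[-2:]])
print([round(float(f), 3) for f in top])  # [0.2, 1.0]
```

On real traces the peaks are broader, but the same FFT peak-picking identifies the breathing and blood pressure components reported above.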

  12. Photometric stereo endoscopy

    PubMed Central

    Parot, Vicente; Lim, Daryl; González, Germán; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.

    2013-01-01

While color video endoscopy has enabled wide-field examination of the gastrointestinal tract, it often misses or incorrectly classifies lesions. Many of these missed lesions exhibit characteristic three-dimensional surface topographies. An endoscopic system that adds topographical measurements to conventional color imagery could therefore increase lesion detection and improve classification accuracy. We introduce photometric stereo endoscopy (PSE), a technique that allows high-spatial-frequency components of surface topography to be acquired simultaneously with conventional two-dimensional color imagery. We implement this technique in an endoscopic form factor and demonstrate that it can acquire the topography of small features with complex geometries and heterogeneous optical properties. PSE imaging of ex vivo human gastrointestinal tissue shows that surface topography measurements enable differentiation of abnormal shapes from surrounding normal tissue. Together, these results confirm that the topographical measurements can be obtained with relatively simple hardware in an endoscopic form factor, and suggest the potential of PSE to improve lesion detection and classification in gastrointestinal imaging. PMID:23864015
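PSE builds on classical photometric stereo: for a Lambertian pixel imaged under at least three known light directions (stacked as rows of L), solving L g = i for the measured intensities i yields the albedo-scaled normal g. A minimal single-pixel sketch, with illustrative light directions and albedo:

```python
import numpy as np

# Three known light directions as rows of L (unit vectors).
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L[1] /= np.linalg.norm(L[1])
L[2] /= np.linalg.norm(L[2])

true_n = np.array([0.0, 0.0, 1.0])   # a flat, upward-facing surface point
albedo = 0.8
i = albedo * (L @ true_n)            # simulated Lambertian intensities

g = np.linalg.solve(L, i)            # g = albedo * normal
albedo_est = float(np.linalg.norm(g))
normal_est = g / albedo_est
print(round(albedo_est, 3))          # 0.8
```

Integrating the recovered normal field then gives the high-spatial-frequency topography that PSE overlays on the color imagery.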

  13. Building Change Detection in Very High Resolution Satellite Stereo Image Time Series

    NASA Astrophysics Data System (ADS)

    Tian, J.; Qin, R.; Cerra, D.; Reinartz, P.

    2016-06-01

There is an increasing demand for robust methods for urban sprawl monitoring. The steadily increasing number of high resolution and multi-view sensors allows producing datasets with high temporal and spatial resolution; however, less effort has been dedicated to employing very high resolution (VHR) satellite image time series (SITS) to monitor changes in buildings with higher accuracy. In addition, these VHR data are often acquired from different sensors. The objective of this research is to propose a robust time-series data analysis method for VHR stereo imagery. Firstly, the spatial-temporal information of the stereo imagery and the Digital Surface Models (DSMs) generated from them are combined, and building probability maps (BPM) are calculated for all acquisition dates. In the second step, an object-based change analysis is performed based on the derivative features of the BPM sets. The change consistency between the object level and the pixel level is checked to remove any outlier pixels. Results are assessed on six pairs of VHR satellite images acquired within a time span of 7 years. The evaluation results demonstrate the efficiency of the proposed method.

  14. Digital stereo photogrammetry for grain-scale monitoring of fluvial surfaces: Error evaluation and workflow optimisation

    NASA Astrophysics Data System (ADS)

    Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy

    2015-03-01

Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition; however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This holds for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post-processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open-access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy. We show that a careful selection of the camera-to-object distance and baseline reduces errors in occluded areas and that realistic ground truths help to quantify those errors.

  15. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
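The least-squares correlation step can be sketched as a windowed sum-of-squared-differences (SSD) search along a scanline. This toy version works on a single row and ignores the pyramid and the Bayesian confidence estimation described above:

```python
import numpy as np

def ssd_disparity(left, right, x, win=5, max_d=10):
    """Integer disparity at column x of a rectified 1-D scanline pair,
    chosen to minimize the SSD over a window of half-width `win`."""
    lo, hi = x - win, x + win + 1
    ref = left[lo:hi]
    costs = [np.sum((ref - right[lo - d:hi - d]) ** 2)
             for d in range(max_d + 1)]
    return int(np.argmin(costs))

rng = np.random.default_rng(1)
right = rng.random(200)
left = np.roll(right, 4)          # left scanline shifted by a disparity of 4
print(ssd_disparity(left, right, 100))  # 4
```

In the full system this matching is run on bandpass-filtered (Laplacian pyramid) images, which makes the correlation insensitive to brightness offsets between the two cameras.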

  16. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

The system "Immersive Virtual Moon Scene" shows a virtual Moon-surface environment in an immersive setting. Utilizing stereo 360-degree imagery from the panoramic camera of the Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, the stereo 360-degree panorama, stitched from 112 images, is projected onto the inside surface of a sphere according to the panorama orientation coordinates and camera parameters to build the virtual scene. Because stars can be seen from the Moon at any time, we render the Sun, planets and stars on the sphere as the background, according to the time and the rover's location, based on the Hipparcos catalogue. Immersed in the stereo virtual environment created by this image-based rendering technique, the operator can zoom and pan to interact with the virtual Moon scene and mark interesting objects. The hardware of the immersive virtual Moon system is made up of four high-lumen projectors and a huge curved screen, 31 meters long and 5.5 meters high. This system, which takes all available panoramic camera data and uses it to create an immersive environment in which the operator can interact with the scene and mark interesting objects, contributed heavily to the establishment of science mission goals in the Chang'E-3 mission. After the Chang'E-3 mission, the lab housing this system will be open to the public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be shown to the public on the huge screen in the lab. Based on lunar exploration data, we will create more immersive virtual Moon scenes and animations to help the public understand more about the Moon in the future.

  17. Rock Moved by Mars Lander Arm, Stereo View

    NASA Technical Reports Server (NTRS)

    2008-01-01

The robotic arm on NASA's Phoenix Mars Lander slid a rock out of the way during the mission's 117th Martian day (Sept. 22, 2008) to gain access to soil that had been underneath the rock. The lander's Surface Stereo Imager took the two images for this stereo view later the same day, showing the rock, called 'Headless,' after the arm had pushed it about 40 centimeters (16 inches) from its previous location.

    'The rock ended up exactly where we intended it to,' said Matt Robinson of NASA's Jet Propulsion Laboratory, robotic arm flight software lead for the Phoenix team.

    The arm had enlarged the trench near Headless two days earlier in preparation for sliding the rock into the trench. The trench was dug to about 3 centimeters (1.2 inches) deep. The ground surface between the rock's prior position and the lip of the trench had a slope of about 3 degrees downward toward the trench. Headless is about the size and shape of a VHS videotape.

    The Phoenix science team sought to move the rock in order to study the soil and the depth to subsurface ice underneath where the rock had been.

The left-eye and right-eye images for this stereo view were taken at about 12:30 p.m. local solar time on Mars. The scene appears three-dimensional when seen through blue-red glasses. The view is to the north-northeast of the lander.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by JPL, Pasadena, Calif. Spacecraft development was by Lockheed Martin Space Systems, Denver.

  18. Co-registration of Laser Altimeter Tracks with Digital Terrain Models and Applications in Planetary Science

    NASA Technical Reports Server (NTRS)

    Glaeser, P.; Haase, I.; Oberst, J.; Neumann, G. A.

    2013-01-01

We have derived algorithms and techniques to precisely co-register laser altimeter profiles with gridded Digital Terrain Models (DTMs), typically derived from stereo images. The algorithm consists of an initial grid search followed by a least-squares matching and yields the translation parameters at the sub-pixel level needed to align the DTM and the laser profiles in 3D space. This software tool was primarily developed and tested for co-registration of laser profiles from the Lunar Orbiter Laser Altimeter (LOLA) with DTMs derived from Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) stereo images. Data sets can be co-registered with a positional accuracy between 0.13 m and several meters, depending on the pixel resolution and the number of laser shots; rough surfaces typically result in more accurate co-registrations. Residual heights of the data sets are as small as 0.18 m. The software can be used to identify instrument misalignment, orbit errors, pointing jitter, or problems associated with the reference frames being used. Also, assessments of DTM effective resolutions can be obtained. From the correct relative position of the two data sets, comparisons of surface morphology and roughness can be made at the laser-footprint or DTM-pixel level. The precise co-registration allows us to carry out joint analysis of the data sets and ultimately to derive merged high-quality data products. Examples of matching other planetary data sets, like LOLA with LRO Wide Angle Camera (WAC) DTMs or Mars Orbiter Laser Altimeter (MOLA) with stereo models from the High Resolution Stereo Camera (HRSC), as well as Mercury Laser Altimeter (MLA) with Mercury Dual Imaging System (MDIS), are shown to demonstrate the broad science applications of the software tool.
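The initial grid-search stage of such a co-registration can be sketched as follows; the sub-pixel least-squares refinement is omitted, and the DTM and profile offset are synthetic:

```python
import numpy as np

def coregister(dtm, pts, search=3):
    """Grid-search the (dx, dy) pixel translation minimizing the RMS
    height residual between laser points (x, y, z) and a gridded DTM.
    Returns (dx, dy, rms) for the best translation."""
    best = (0, 0, np.inf)
    xs = pts[:, 0].astype(int)
    ys = pts[:, 1].astype(int)
    zs = pts[:, 2]
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            rms = np.sqrt(np.mean((dtm[ys + dy, xs + dx] - zs) ** 2))
            if rms < best[2]:
                best = (dx, dy, rms)
    return best

# Synthetic DTM and a profile sampled from it with a (2, -1) offset.
yy, xx = np.mgrid[0:50, 0:50]
dtm = np.sin(xx / 5.0) + np.cos(yy / 7.0)
xs = np.arange(10, 40)
ys = np.arange(10, 40)
pts = np.stack([xs, ys, dtm[ys - 1, xs + 2]], axis=1)
print(coregister(dtm, pts)[:2])  # (2, -1)
```

In practice the search would be followed by a least-squares fit around the best grid cell to recover the sub-pixel translation reported in the abstract.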

  19. Snow White Trench After Scraping

    NASA Image and Video Library

    2008-07-24

This view from the Surface Stereo Imager on NASA's Phoenix Mars Lander shows the trench informally named 'Snow White' after a series of scrapings done in preparation for collecting a sample from a hard subsurface layer for analysis.

  20. Robust surface reconstruction by design-guided SEM photometric stereo

    NASA Astrophysics Data System (ADS)

    Miyamoto, Atsushi; Matsuse, Hiroki; Koutaki, Gou

    2017-04-01

We present a novel approach that addresses the blind reconstruction problem in scanning electron microscope (SEM) photometric stereo for complicated semiconductor patterns. In our previous work, we developed a bootstrapping de-shadowing and self-calibration (BDS) method, which automatically calibrates the parameter of the gradient measurement formulas and resolves shadowing errors for estimating an accurate three-dimensional (3D) shape and the underlying shadowless images. Experimental results on 3D surface reconstruction demonstrated the significance of the BDS method for simple shapes, such as an isolated line pattern. However, we found that complicated shapes, such as line-and-space (L&S) and multilayered patterns, produce deformed and inaccurate measurement results. This problem is due to brightness fluctuations in the SEM images, which are mainly caused by energy fluctuations of the primary electron beam, variations in the electronic expanse inside a specimen, and electrical charging of specimens. Although these difficulties are inherent to SEM photometric stereo, it is difficult to accurately model all the complicated physical phenomena of electronic behavior. We improved the robustness of the surface reconstruction to deal with these practical difficulties for complicated shapes. Here, design data are useful clues to the pattern layout and layer information of integrated semiconductors. We used the design data as a guide for the measured shape and incorporated into the objective function of the BDS method a geometrical constraint term that evaluates the difference between the measured and designed shapes. Because the true shape does not necessarily correspond to the designed one, we use an iterative scheme to develop proper guide patterns and a 3D surface, which yields a less distorted and more accurate 3D shape after convergence. Extensive experiments on real image data demonstrate the robustness and effectiveness of our method.

  1. Comparison of the depth of an optic nerve head obtained using stereo retinal images and HRT

    NASA Astrophysics Data System (ADS)

    Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Aoyama, Akira; Hara, Takeshi; Kakogawa, Masakatsu; Fujita, Hiroshi; Yamamoto, Tetsuya

    2007-03-01

The analysis of the optic nerve head (ONH) in the retinal fundus is important for the early detection of glaucoma. In this study, we investigate an automatic reconstruction method for producing the 3-D structure of the ONH from a stereo retinal image pair; the depth value of the ONH measured by using this method was compared with the measurement results determined from the Heidelberg Retina Tomograph (HRT). We propose a technique to obtain the depth value from the stereo image pair, which mainly consists of four steps: (1) cutout of the ONH region from the retinal images, (2) registration of the stereo pair, (3) disparity detection, and (4) depth calculation. In order to evaluate the accuracy of this technique, the shape of the depression of an eyeball phantom with a circular dent, reconstructed from a stereo image pair and used to model the ONH, was compared with physical measurements. The measurement results obtained with the eyeball phantom were approximately consistent with the physically measured shape. The depth of the ONH obtained using the stereo retinal images was in accordance with the results obtained using the HRT. These results indicate that stereo retinal images could be useful for assessing the depth of the ONH for the diagnosis of glaucoma.
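Step (4), the depth calculation, ultimately rests on stereo triangulation: for a rectified pair, depth is inversely proportional to disparity. The focal length and baseline below are illustrative values, not the parameters of the fundus camera used in the study:

```python
def depth_from_disparity(disparity_px, focal_px=1200.0, baseline_mm=4.0):
    """Depth (mm) from disparity (pixels) for a rectified stereo pair:
    Z = f * B / d, with focal length f in pixels and baseline B in mm."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

print(depth_from_disparity(60.0))  # 80.0 (mm)
```

The relative depth across the ONH region then follows from the per-pixel disparity variations detected in step (3).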

  2. The HRSC on Mars Express: Mert Davies' Involvement in a Novel Planetary Cartography Experiment

    NASA Astrophysics Data System (ADS)

    Oberst, J.; Waehlisch, M.; Giese, B.; Scholten, F.; Hoffmann, H.; Jaumann, R.; Neukum, G.

    2002-12-01

Mert Davies was a team member of the HRSC (High Resolution Stereo Camera) imaging experiment (PI: Gerhard Neukum) on ESA's Mars Express mission. This pushbroom camera is equipped with 9 forward- and backward-looking CCD lines, 5184 samples each, mounted in parallel, perpendicular to the spacecraft velocity vector. Flight image data with resolutions of up to 10 m/pix (from an altitude of 250 km) will be acquired line by line as the spacecraft moves. This acquisition strategy will result in 9 separate, almost completely overlapping image strips, each typically having more than 27,000 image lines. [HRSC is also equipped with a super-resolution channel for imaging of selected targets at up to 2.3 m/pixel.] The combined operation of the nadir and off-nadir CCD lines (+18.9°, 0°, -18.9°) gives HRSC a triple-stereo capability for precision mapping of surface topography and for modelling of spacecraft orbit and camera pointing errors. The goals of the camera are to obtain accurate control point networks, Digital Elevation Models (DEMs) in Mars-fixed coordinates, and color orthoimages at global (100% of the surface will be covered at resolutions better than 30 m/pixel) and local scales. With his long experience in all aspects of planetary geodesy and cartography, Mert Davies was involved in the preparations of this novel Mars imaging experiment, which included: (a) development of a ground data system for the analysis of triple-stereo images, (b) camera testing during airborne imaging campaigns, (c) re-analysis of the Mars control point network, and generation of global topographic orthoimage maps on the basis of MOC images and MOLA data, (d) definition of the quadrangle scheme for a new 1:200K topographic image map series, (e) simulation of synthetic HRSC imaging sequences and their photogrammetric analysis. Mars Express is scheduled for launch in May of 2003. We miss Mert very much!

  3. Stereo reconstruction from multiperspective panoramas.

    PubMed

    Li, Yin; Shum, Heung-Yeung; Tang, Chi-Keung; Szeliski, Richard

    2004-01-01

A new approach to computing a panoramic (360 degrees) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images, and thus the problems of conventional multibaseline stereo can be avoided. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to a first-order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximately horizontal epipolar geometry inherent in multiperspective panoramas. It comprises a novel and efficient 1D multibaseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that our algorithms are capable of producing comparably high-quality depth maps, which can be used for applications such as view interpolation.

  4. Accuracy Assessment of Digital Surface Models Based on WorldView-2 and ADS80 Stereo Remote Sensing Data

    PubMed Central

    Hobi, Martina L.; Ginzler, Christian

    2012-01-01

Digital surface models (DSMs) are widely used in forest science to model the forest canopy. Stereo pairs of very high resolution satellite and digital aerial images are relatively new, and their absolute accuracy for DSM generation is largely unknown. For an assessment of these input data, two DSMs based on a WorldView-2 stereo pair and an ADS80 DSM were generated with photogrammetric instruments. Rational polynomial coefficients (RPCs) define the orientation of the WorldView-2 satellite images and can be enhanced with ground control points (GCPs). Thus two WorldView-2 DSMs were distinguished: a WorldView-2 RPCs-only DSM and a WorldView-2 GCP-enhanced RPCs DSM. The accuracy of the three DSMs was estimated with GPS measurements, manual stereo-measurements, and airborne laser scanning (ALS) data. With GCP-enhanced RPCs the WorldView-2 image orientation could be optimised to a root mean square error (RMSE) of 0.56 m in planimetry and 0.32 m in height. This improvement in orientation allowed for a vertical median error of −0.24 m for the WorldView-2 GCP-enhanced RPCs DSM in flat terrain. Overall, the DSM based on ADS80 images showed the highest accuracy of the three models, with a median error of 0.08 m over bare ground. As the accuracy of a DSM varies with land cover, three classes were distinguished: herb and grass, forests, and artificial areas. The study suggested that the ADS80 DSM best models the actual surface height in all three land cover classes, with median errors <1.1 m. The WorldView-2 GCP-enhanced RPCs model achieved good accuracy, too, with median errors of −0.43 m for the herb and grass vegetation and −0.26 m for artificial areas. Forested areas emerged as the most difficult land cover type for height modelling; still, with median errors of −1.85 m for the WorldView-2 GCP-enhanced RPCs model and −1.12 m for the ADS80 model, the input data sets evaluated here are quite promising for forest canopy modelling. PMID:22778645

  5. Accuracy assessment of digital surface models based on WorldView-2 and ADS80 stereo remote sensing data.

    PubMed

    Hobi, Martina L; Ginzler, Christian

    2012-01-01

    Digital surface models (DSMs) are widely used in forest science to model the forest canopy. Stereo pairs of very high resolution satellite and digital aerial images are relatively new and their absolute accuracy for DSM generation is largely unknown. To assess these input data, two DSMs were generated with photogrammetric instruments: one based on a WorldView-2 stereo pair and one based on ADS80 aerial images. Rational polynomial coefficients (RPCs) define the orientation of the WorldView-2 satellite images, and this orientation can be enhanced with ground control points (GCPs). Two WorldView-2 DSMs were therefore distinguished: a WorldView-2 RPCs-only DSM and a WorldView-2 GCP-enhanced RPCs DSM. The accuracy of the three DSMs was estimated with GPS measurements, manual stereo-measurements, and airborne laser scanning (ALS) data. With GCP-enhanced RPCs the WorldView-2 image orientation could be optimised to a root mean square error (RMSE) of 0.56 m in planimetry and 0.32 m in height. This improvement in orientation allowed for a vertical median error of -0.24 m for the WorldView-2 GCP-enhanced RPCs DSM in flat terrain. Overall, the DSM based on ADS80 images showed the highest accuracy of the three models, with a median error of 0.08 m over bare ground. As the accuracy of a DSM varies with land cover, three classes were distinguished: herb and grass, forests, and artificial areas. The ADS80 DSM best modelled actual surface height in all three land cover classes, with median errors <1.1 m. The WorldView-2 GCP-enhanced RPCs model also achieved good accuracy, with median errors of -0.43 m for the herb and grass vegetation and -0.26 m for artificial areas. Forested areas emerged as the most difficult land cover type for height modelling; still, with median errors of -1.85 m for the WorldView-2 GCP-enhanced RPCs model and -1.12 m for the ADS80 model, the input data sets evaluated here are quite promising for forest canopy modelling.
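
    The error statistics reported above (median error as a robust bias measure, RMSE as overall accuracy) are straightforward to reproduce. A minimal sketch, assuming NumPy; the height values are purely illustrative and are not data from the study:

```python
import numpy as np

def dsm_error_stats(dsm_heights, reference_heights):
    """Vertical error statistics of a DSM against reference heights
    (e.g. GPS or airborne laser scanning points)."""
    errors = dsm_heights - reference_heights
    return {
        "median": float(np.median(errors)),          # robust bias measure
        "rmse": float(np.sqrt(np.mean(errors**2))),  # overall accuracy
    }

# Illustrative values only (metres)
dsm = np.array([432.10, 433.75, 431.40, 435.02])
ref = np.array([432.30, 433.90, 431.75, 435.10])
stats = dsm_error_stats(dsm, ref)
```

In practice the errors would also be stratified by land-cover class, as in the study, before computing the per-class medians.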

  6. Dust Devil in Spirit's View Ahead on Sol 1854 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11960 (figures removed for brevity; see original site).

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,854th Martian day, or sol, of Spirit's surface mission (March 21, 2009).

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 13.79 meters (45 feet) westward earlier on Sol 1854.

    West is at the center, where a dust devil is visible in the distance. North is on the right, where Husband Hill dominates the horizon; Spirit was on top of Husband Hill in September and October 2005. South is on the left, where lighter-toned rock lines the edge of the low plateau called 'Home Plate.'

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  7. THE THOMSON SURFACE. I. REALITY AND MYTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howard, T. A.; DeForest, C. E., E-mail: howard@boulder.swri.edu

    2012-06-20

    The solar corona and heliosphere are visible via sunlight that is Thomson-scattered off free electrons and detected by coronagraphs and heliospheric imagers. It is well known that these instruments are most responsive to material at the 'Thomson surface', the sphere with a diameter passing through both the observer and the Sun. It is less well known that in fact the Thomson scattering efficiency is minimized on the Thomson surface. Unpolarized heliospheric imagers such as STEREO/HI are thus approximately equally responsive to material over more than a 90° range of solar exit angles at each given position in the image plane. We call this range of angles the 'Thomson plateau'. We observe that heliospheric imagers are actually more sensitive to material far from the Thomson surface than close to it, at a fixed radius from the Sun. We review the theory of Thomson scattering as applied to heliospheric imaging, feature detection in the presence of background noise, geometry inference, and feature mass measurement. We show that feature detection is primarily limited by observing geometry and field of view, that the highest sensitivity for detection of density features is to objects close to the observer, that electron surface density inference is independent of geometry across the Thomson plateau, and that mass inference varies with observer distance in all geometries. We demonstrate the sensitivity results with a few examples of features detected by STEREO, far from the Thomson surface.
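
    The Thomson-surface geometry described above reduces to right-angle trigonometry: any point on a sphere whose diameter is the Sun-observer line subtends a right angle to the Sun and the observer, so a line of sight at elongation ε crosses that sphere at distance R·cos ε from the observer and heliocentric distance R·sin ε. A minimal sketch (illustrative, not code from the paper):

```python
import math

def thomson_surface_point(R_obs_au, elongation_deg):
    """For a line of sight at a given elongation from the Sun, return the
    distance along the line from the observer, and the heliocentric
    distance, of the point where it pierces the Thomson surface -- the
    sphere whose diameter is the Sun-observer line. At that point the
    scattering angle is 90 degrees."""
    eps = math.radians(elongation_deg)
    d_los = R_obs_au * math.cos(eps)   # observer -> Thomson surface
    r_sun = R_obs_au * math.sin(eps)   # Sun -> Thomson surface
    return d_los, r_sun

# Observer at 1 AU, line of sight at 30 degrees elongation
d, r = thomson_surface_point(1.0, 30.0)
```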

  8. Three-dimensional online surface reconstruction of augmented fluorescence lifetime maps using photometric stereo (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Unger, Jakob; Lagarto, Joao; Phipps, Jennifer; Ma, Dinglong; Bec, Julien; Sorger, Jonathan; Farwell, Gregory; Bold, Richard; Marcu, Laura

    2017-02-01

    Multi-Spectral Time-Resolved Fluorescence Spectroscopy (ms-TRFS) can provide label-free, real-time feedback on tissue composition and pathology during surgical procedures by resolving the fluorescence decay dynamics of the tissue. Recently, an ms-TRFS system has been developed in our group, allowing either point-spectroscopy fluorescence lifetime measurements or dynamic raster tissue scanning by merging a 450 nm aiming beam with the pulsed fluorescence excitation light in a single collection fiber. To facilitate an augmented real-time display of fluorescence decay parameters, the lifetime values are back-projected onto the white light video. The goal of this study is to develop real-time 3D surface reconstruction, aiming for a comprehensive visualization of the decay parameters and enhanced navigation for the surgeon. Using a stereo camera setup, we combine image feature matching and aiming beam stereo segmentation to establish a 3D surface model of the decay parameters. After camera calibration, texture-related features are extracted from both camera images and matched, providing a rough estimate of the surface. During raster scanning, this rough estimate is successively refined in real time by tracking the aiming beam positions with an advanced segmentation algorithm. The method is evaluated on excised breast tissue specimens, showing high precision and running in real time at approximately 20 frames per second. The proposed method shows promising potential for intraoperative navigation, e.g. tumor margin assessment. Furthermore, it provides the basis for registering the fluorescence lifetime maps to the tissue surface, adapting them to possible tissue deformations.

  9. A verification and errors analysis of the model for object positioning based on binocular stereo vision for airport surface surveillance

    NASA Astrophysics Data System (ADS)

    Wang, Huan-huan; Wang, Jian; Liu, Feng; Cao, Hai-juan; Wang, Xiang-jun

    2014-12-01

    A test environment is established to obtain experimental data for verifying the positioning model derived previously from the pinhole imaging model and the theory of binocular stereo vision measurement. The model requires that the optical axes of the two cameras meet at one point, which is defined as the origin of the world coordinate system, thus simplifying and optimizing the positioning model. The experimental data are processed, and tables and charts are given comparing object positions measured with DGPS (with a measurement accuracy of 10 centimeters) as the reference against those measured with the positioning model. Sources of error in the visual measurement model are analyzed, and the effects of errors in camera and system parameters on the accuracy of the positioning model are probed, based on error transfer and synthesis rules. It is concluded that the measurement accuracy of surface surveillance based on binocular stereo vision is better than that of surface movement radars, ADS-B (Automatic Dependent Surveillance-Broadcast) and MLAT (Multilateration).
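
    The triangulation underlying such positioning can be illustrated with the simpler rectified, parallel-axis stereo configuration (the paper's model instead has the two optical axes converging at the world origin); the function and the numbers below are illustrative assumptions, not the paper's parameters:

```python
def triangulate_parallel_stereo(xl, xr, y, focal_px, baseline_m):
    """Depth from a rectified, parallel-axis stereo pair. xl, xr are the
    horizontal image coordinates (pixels, relative to the principal
    point) of the same scene point in the left and right images; y is
    its vertical image coordinate."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    Z = focal_px * baseline_m / disparity  # depth along the optical axis
    X = xl * Z / focal_px                  # lateral position
    Y = y * Z / focal_px                   # vertical position
    return X, Y, Z

# Illustrative numbers: 1000 px focal length, 0.5 m baseline, 25 px disparity
X, Y, Z = triangulate_parallel_stereo(100.0, 75.0, 40.0, 1000.0, 0.5)
```

Note how depth error grows as disparity shrinks, which is why camera and system parameter errors dominate the accuracy analysis for distant objects.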

  10. The Role of Amodal Surface Completion in Stereoscopic Transparency

    PubMed Central

    Anderson, Barton L.; Schmid, Alexandra C.

    2012-01-01

    Previous work has shown that the visual system can decompose stereoscopic textures into percepts of inhomogeneous transparency. We investigate whether this form of layered image decomposition is shaped by constraints on amodal surface completion. We report a series of experiments demonstrating that stereoscopic depth differences are easier to discriminate when the stereo images generate a coherent percept of surface color than when the images require amodally integrating a series of color changes into a coherent surface. Our results provide further evidence for the intimate link between the segmentation processes that occur in conditions of transparency and occlusion, and the interpolation processes involved in the formation of amodally completed surfaces. PMID:23060829

  11. Relation Between the 3D-Geometry of the Coronal Wave and Associated CME During the 26 April 2008 Event

    NASA Technical Reports Server (NTRS)

    Temmer, M.; Veronig, A. M.; Gopalswamy, N.; Yashiro, S.

    2011-01-01

    We study the kinematical characteristics and 3D geometry of a large-scale coronal wave that occurred in association with the 26 April 2008 flare-CME event. The wave was observed with the EUVI instruments aboard both STEREO spacecraft (STEREO-A and STEREO-B) with a mean speed of approx 240 km/s. The wave is more pronounced in the eastern propagation direction, and is thus, better observable in STEREO-B images. From STEREO-B observations we derive two separate initiation centers for the wave, and their locations fit with the coronal dimming regions. Assuming a simple geometry of the wave we reconstruct its 3D nature from combined STEREO-A and STEREO-B observations. We find that the wave structure is asymmetric with an inclination toward East. The associated CME has a deprojected speed of approx 750 +/- 50 km/s, and it shows a non-radial outward motion toward the East with respect to the underlying source region location. Applying the forward fitting model developed by Thernisien, Howard, and Vourlidas we derive the CME flux rope position on the solar surface to be close to the dimming regions. We conclude that the expanding flanks of the CME most likely drive and shape the coronal wave.

  12. GeoComplexity and scale: surface processes and remote sensing of geosystems

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter

    2015-04-01

    Understanding the role of scaling in different planetary surface processes within our Solar System is one of the fundamental goals of planetary and solid earth scientific research. There has been a revolution in planetary surface observations over the past decade for the Earth, Mars and the Moon, especially in 3D imaging of surface shape (from the planetary scale down to resolutions of 75cm). I will examine three areas that I have been active in over the last 25 years, giving examples of newly processed global datasets ripe for scaling analysis: topography, BRDF/albedo and imaging. For understanding scaling in terrestrial land surface topography we now have global 30m digital elevation models (DEMs) from different types of sensors (InSAR and stereo-optical), along with laser altimeter data to provide global reference models (to better than 1m in cross-over areas) and airborne laser altimeter data over small areas at resolutions better than 1m and height accuracies better than 10-15cm. We also have an increasing number of sub-surface observations from long wavelength SAR in arid regions, which will allow us to look at the true surface rather than the one buried by sand. A major limitation of these DEMs remains that they represent an unknown observable surface, with C-band InSAR DEMs lying somewhere near the top of the canopy, X-band InSAR and stereo DEMs near the top of the canopy, and only P-band representing the true understorey surface. I will present some of the recent highlights of topography on Mars, including 3D modelling of surface shape from the ESA Mars Express HRSC (High Resolution Stereo Camera), see [1], [2], at 30-100m grid-spacing, co-registered to HRSC using a resolution cascade of 20m DTMs from NASA MRO stereo-CTX and 0.75m digital terrain models (DTMs, as there is no land cover on Mars) from MRO stereo-HiRISE [3]. Comparable DTMs now exist for the Moon from 100m up to 1m. 
I will show examples of these DEM/DTM datasets along with some simple analyses of their scaling properties. Global 1km, 8-daily terrestrial land surface BRDF/albedo maps exist for US sensors from MODIS and, by orbit, from MISR. More recently, the ESA GlobAlbedo project [4] has produced land surface datasets on the same spatio-temporal sampling using optimal estimation, with full uncertainty matrices associated with each and every 1km pixel. By exploiting these uncertainty estimates I show how upscaling can be performed, as well as how their scaling properties can be analysed. Recently, a very novel technique for the super-resolution restoration (SRR) of stacks of images has been developed at UCL [5]. The first examples shown will be of the entire MER-A Spirit rover traverse, where a stack of 25cm HiRISE images is used to generate a corridor of 5cm SRR imagery along the traverse, resolving previously unresolved features such as rocks (created as a consequence of meteoritic bombardment) and ridge and valley features. This SRR technique will allow us, for ≈400 areas on Mars (where 5 or more HiRISE images have been captured) and similar numbers on the Moon, to resolve sub-pixel features. Examples will be shown of how these SRR images can be employed to assist with a better understanding of surface geomorphology. Acknowledgements: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under PRoViDE grant agreement n° 312377 and the ESA GlobAlbedo project. Partial support is also provided from the STFC "MSSL Consolidated Grant" ST/K000977/1. References: [1] Gwinner, K., et al. (2010) Topography of Mars from global mapping by HRSC high-resolution digital terrain models and orthoimages: characteristics and performance. Earth and Planetary Science Letters 294, 506-519, doi:10.1016/j.epsl.2009.11.007; [2] Gwinner, K., Muller, J-P., et al. 
(2015) MarsExpress High Resolution Stereo Camera (HRSC) Multi-orbit Data Products: Methodology, Mapping Concepts and Performance for the first Quadrangle (MC-11E). Geophysical Research Abstracts, Vol. 17, EGU2015-13832; [3] Kim, J., & Muller, J. (2009). Multi-resolution topographic data extraction from Martian stereo imagery. Planetary and Space Science, 57, 2095-2112. doi:10.1016/j.pss.2009.09.024; [4] Muller, J.-P., et al. (2011), The ESA GlobAlbedo Project for mapping the Earth's land surface albedo for 15 Years from European Sensors., Geophysical Research Abstracts, Vol. 13, EGU2011-10969; [5] Tao, Y., Muller, J.-P. (2015) Supporting lander and rover operation: a novel super-resolution restoration technique. Geophysical Research Abstracts, Vol. 17, EGU2015-6925

  13. Variational stereo imaging of oceanic waves with statistical constraints.

    PubMed

    Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise

    2013-11-01

    An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.
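
    As a toy illustration of the variational scheme (not the authors' implementation), gradient descent can minimize a discrete cost combining a data term, a smoothness prior, and a weak statistical constraint; here the constraint simply pulls the surface variance toward a prescribed value, a crude 1D stand-in for the quasi-Gaussian wave statistics enforced in the paper:

```python
import numpy as np

def reconstruct_1d(observed, lam_smooth=1.0, lam_stat=0.1,
                   target_var=None, steps=500, lr=0.1):
    """Minimize ||z - obs||^2 + lam_smooth*||Dz||^2
    + lam_stat*(var(z) - target_var)^2 by gradient descent."""
    z = observed.astype(float).copy()
    if target_var is None:
        target_var = observed.var()
    for _ in range(steps):
        grad_data = z - observed
        # discrete Laplacian: gradient of the first-difference smoothness term
        lap = np.zeros_like(z)
        lap[1:-1] = 2.0 * z[1:-1] - z[:-2] - z[2:]
        # weak statistical constraint: d/dz of (var(z) - target_var)^2
        zc = z - z.mean()
        grad_stat = 4.0 * (z.var() - target_var) * zc / len(z)
        z -= lr * (grad_data + lam_smooth * lap + lam_stat * grad_stat)
    return z

# Illustrative input: a noisy sine standing in for a wave profile
rng = np.random.default_rng(0)
obs = np.sin(np.linspace(0.0, 3.0, 64)) + 0.2 * rng.standard_normal(64)
recon = reconstruct_1d(obs)
```

The paper solves the analogous 2D optimality equations with multigrid acceleration rather than plain gradient descent.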

  14. The Longitudinal Properties of a Solar Energetic Particle Event Investigated Using Modern Solar Imaging

    NASA Technical Reports Server (NTRS)

    Rouillard, A. P.; Sheeley, N.R. Jr.; Tylka, A.; Vourlidas, A.; Ng, C. K.; Rakowski, C.; Cohen, C. M. S.; Mewaldt, R. A.; Mason, G. M.; Reames, D.; et al.

    2012-01-01

    We use combined high-cadence, high-resolution, and multi-point imaging by the Solar-Terrestrial Relations Observatory (STEREO) and the Solar and Heliospheric Observatory to investigate the hour-long eruption of a fast and wide coronal mass ejection (CME) on 2011 March 21 when the twin STEREO spacecraft were located beyond the solar limbs. We analyze the relation between the eruption of the CME, the evolution of an Extreme Ultraviolet (EUV) wave, and the onset of a solar energetic particle (SEP) event measured in situ by the STEREO and near-Earth orbiting spacecraft. Combined ultraviolet and white-light images of the lower corona reveal that in an initial CME lateral "expansion phase," the EUV disturbance tracks the laterally expanding flanks of the CME, both moving parallel to the solar surface with speeds of approx 450 km/s. When the lateral expansion of the ejecta ceases, the EUV disturbance carries on propagating parallel to the solar surface but devolves rapidly into a less coherent structure. Multi-point tracking of the CME leading edge and the effects of the launched compression waves (e.g., pushed streamers) give anti-sunward speeds that initially exceed 900 km/s at all measured position angles. We combine our analysis of ultraviolet and white-light images with a comprehensive study of the velocity dispersion of energetic particles measured in situ by particle detectors located at STEREO-A (STA) and first Lagrange point (L1), to demonstrate that the delayed solar particle release times at STA and L1 are consistent with the time required (30-40 minutes) for the CME to perturb the corona over a wide range of longitudes. This study finds an association between the longitudinal extent of the perturbed corona (in EUV and white light) and the longitudinal extent of the SEP event in the heliosphere.

  15. Stereo and IMU-Assisted Visual Odometry for Small Robots

    NASA Technical Reports Server (NTRS)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e. 320 × 240), or 8 fps at VGA (Video Graphics Array, 640 × 480) resolutions, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
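
    The first function, computing a disparity map by correlating image patches along the same scanline, can be sketched as a naive sum-of-absolute-differences block matcher (illustrative only; the system described uses a heavily optimized fixed-point DSP implementation):

```python
import numpy as np

def disparity_map(left, right, max_disp=16, win=2):
    """Naive block-matching stereo: for each left-image pixel, find the
    horizontal shift (0..max_disp) minimizing the sum of absolute
    differences (SAD) over a (2*win+1)^2 window in the right image."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y-win:y+win+1, x-win:x+win+1]
            best_sad, best_d = np.inf, 0
            for d in range(max_disp + 1):
                cand = right[y-win:y+win+1, x-d-win:x-d+win+1]
                sad = np.abs(patch - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the left image is the right image shifted 5 px
rng = np.random.default_rng(1)
right = rng.random((32, 48))
left = np.roll(right, 5, axis=1)
d = disparity_map(left, right, max_disp=8)  # interior recovers disparity 5
```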

  16. Characterization of Stereo Vision Performance for Roving at the Lunar Poles

    NASA Technical Reports Server (NTRS)

    Wong, Uland; Nefian, Ara; Edwards, Larry; Furlong, Michael; Bouyssounouse, Xavier; To, Vinh; Deans, Matthew; Cannon, Howard; Fong, Terry

    2016-01-01

    Surface rover operations at the polar regions of airless bodies, particularly the Moon, are of special interest to future NASA science missions such as Resource Prospector (RP). Polar optical conditions present challenges to conventional imaging techniques, with repercussions for driving, safeguarding and science. High dynamic range, long cast shadows, opposition and white-out conditions are all significant factors in appearance. RP is currently undertaking an effort to characterize stereo vision performance in polar conditions through physical laboratory experimentation with regolith simulants, obstacle distributions and oblique lighting.

  17. The Europa Imaging System (EIS): Investigating Europa's geology, ice shell, and current activity

    NASA Astrophysics Data System (ADS)

    Turtle, Elizabeth; Thomas, Nicolas; Fletcher, Leigh; Hayes, Alexander; Ernst, Carolyn; Collins, Geoffrey; Hansen, Candice; Kirk, Randolph L.; Nimmo, Francis; McEwen, Alfred; Hurford, Terry; Barr Mlinar, Amy; Quick, Lynnae; Patterson, Wes; Soderblom, Jason

    2016-07-01

    NASA's Europa Mission, planned for launch in 2022, will perform more than 40 flybys of Europa with altitudes at closest approach as low as 25 km. The instrument payload includes the Europa Imaging System (EIS), a camera suite designed to transform our understanding of Europa through global decameter-scale coverage, topographic and color mapping, and unprecedented sub-meter-scale imaging. EIS combines narrow-angle and wide-angle cameras to address these science goals: • Constrain the formation processes of surface features by characterizing endogenic geologic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure and potential near-surface water. • Search for evidence of recent or current activity, including potential plumes. • Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar. • Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. EIS Narrow-angle Camera (NAC): The NAC, with a 2.3° x 1.2° field of view (FOV) and a 10-μrad instantaneous FOV (IFOV), achieves 0.5-m pixel scale over a 2-km-wide swath from 50-km altitude. A 2-axis gimbal enables independent targeting, allowing very high-resolution stereo imaging to generate digital topographic models (DTMs) with 4-m spatial scale and 0.5-m vertical precision over the 2-km swath from 50-km altitude. The gimbal also makes near-global (>95%) mapping of Europa possible at ≤50-m pixel scale, as well as regional stereo imaging. The NAC will also perform high-phase-angle observations to search for potential plumes. EIS Wide-angle Camera (WAC): The WAC has a 48° x 24° FOV, with a 218-μrad IFOV, and is designed to acquire pushbroom stereo swaths along flyby ground-tracks. 
From an altitude of 50 km, the WAC achieves 11-m pixel scale over a 44-km-wide swath, generating DTMs with 32-m spatial scale and 4-m vertical precision. These data also support characterization of surface clutter for interpretation of radar deep and shallow sounding modes. Detectors: The cameras have identical rapid-readout, radiation-hard 4k x 2k CMOS detectors and can image in both pushbroom and framing modes. Color observations are acquired by pushbroom imaging using six broadband filters (~300-1050 nm), allowing mapping of surface units for correlation with geologic structures, topography, and compositional units from other instruments.
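
    The quoted pixel scales follow directly from the small-angle relation (ground sample distance = altitude × per-pixel angle). A quick check, assuming nadir viewing:

```python
def pixel_scale(altitude_m, ifov_rad):
    """Ground sample distance for nadir viewing: altitude times the
    instantaneous field of view (small-angle approximation)."""
    return altitude_m * ifov_rad

nac = pixel_scale(50_000, 10e-6)   # NAC: 0.5 m per pixel
wac = pixel_scale(50_000, 218e-6)  # WAC: 10.9 m per pixel (quoted as ~11 m)
```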

  18. Image quality prediction: an aid to the Viking Lander imaging investigation on Mars.

    PubMed

    Huck, F O; Wall, S D

    1976-07-01

    Two Viking spacecraft scheduled to land on Mars in the summer of 1976 will return multispectral panoramas of the Martian surface with resolutions 4 orders of magnitude higher than have been previously obtained and stereo views with resolutions approaching that of the human eye. Mission constraints and uncertainties require a carefully planned imaging investigation that is supported by a computer model of camera response and surface features to aid in diagnosing camera performance, in establishing a preflight imaging strategy, and in rapidly revising this strategy if pictures returned from Mars reveal unfavorable or unanticipated conditions.

  19. Recovering stereo vision by squashing virtual bugs in a virtual reality environment.

    PubMed

    Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M

    2016-06-19

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity: the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task (a 'bug squashing' game) in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training, most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Author(s).

  20. Recovering stereo vision by squashing virtual bugs in a virtual reality environment

    PubMed Central

    Vedamurthy, Indu; Knill, David C.; Huang, Samuel J.; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne

    2016-01-01

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity—the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task—a ‘bug squashing’ game—in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269607

  1. Novel 3D imaging techniques for improved understanding of planetary surface geomorphology.

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter

    2015-04-01

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the past decade for Mars and the Moon, especially in 3D imaging of surface shape (down to resolutions of 75cm), subsequent correction of orbital imagery for terrain relief, and co-registration of lander and rover robotic images. We present some of the recent highlights, including 3D modelling of surface shape from the ESA Mars Express HRSC (High Resolution Stereo Camera), see [1], [2], at 30-100m grid-spacing, co-registered to HRSC using a resolution cascade of 20m DTMs from NASA MRO stereo-CTX and 0.75m DTMs from MRO stereo-HiRISE [3]. This has opened our eyes to the formation mechanisms of megaflooding events, such as the formation of Iani Vallis and the upstream blocky terrain, to crater lakes and receding valley cuts [4]. A comparable set of products is now available for the Moon from LROC-WA at 100m [5] and LROC-NA at 1m [6]. Recently, a very novel technique for the super-resolution restoration (SRR) of stacks of images has been developed at UCL [7]. The first examples shown will be of the entire MER-A Spirit rover traverse, where a stack of 25cm HiRISE images is used to generate a corridor of 5cm SRR imagery along the traverse, resolving previously unresolved features such as rocks (created as a consequence of meteoritic bombardment) and ridge and valley features. This SRR technique will allow us, for ˜400 areas on Mars (where 5 or more HiRISE images have been captured) and similar numbers on the Moon, to resolve sub-pixel features. Examples will be shown of how these SRR images can be employed to assist with a better understanding of surface geomorphology. Acknowledgements: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under PRoViDE grant agreement n° 312377. 
Partial support is also provided from the STFC 'MSSL Consolidated Grant' ST/K000977/1. References: [1] Gwinner, K., et al. (2010) Topography of Mars from global mapping by HRSC high-resolution digital terrain models and orthoimages: characteristics and performance. Earth and Planetary Science Letters 294, 506-519, doi:10.1016/j.epsl.2009.11.007; [2] Gwinner, K., et al. (2015) MarsExpress High Resolution Stereo Camera (HRSC) Multi-orbit Data Products: Methodology, Mapping Concepts and Performance for the first Quadrangle (MC-11E). Geophysical Research Abstracts, Vol. 17, EGU2015-13832; [3] Kim, J., & Muller, J. (2009). Multi-resolution topographic data extraction from Martian stereo imagery. Planetary and Space Science, 57, 2095-2112. doi:10.1016/j.pss.2009.09.024; [4] Warner, N. H., Gupta, S., Kim, J.-R., Muller, J.-P., Le Corre, L., Morley, J., et al. (2011). Constraints on the origin and evolution of Iani Chaos, Mars. Journal of Geophysical Research, 116(E6), E06003. doi:10.1029/2010JE003787; [5] Fok, H. S., Shum, C. K., Yi, Y., Araki, H., Ping, J., Williams, J. G., et al. (2011). Accuracy assessment of lunar topography models. Earth Planets Space, 63, 15-23. doi:10.5047/eps.2010.08.005; [6] Haase, I., Oberst, J., Scholten, F., Wählisch, M., Gläser, P., Karachevtseva, I., & Robinson, M. S. (2012). Mapping the Apollo 17 landing site area based on Lunar Reconnaissance Orbiter Camera images and Apollo surface photography. Journal of Geophysical Research, 117, E00H20. doi:10.1029/2011JE003908; [7] Tao, Y., Muller, J.-P. (2015) Supporting lander and rover operation: a novel super-resolution restoration technique. Geophysical Research Abstracts, Vol. 17, EGU2015-6925

  2. Global Patch Matching

    NASA Astrophysics Data System (ADS)

    Huang, X.; Hu, K.; Ling, X.; Zhang, Y.; Lu, Z.; Zhou, G.

    2017-09-01

    This paper introduces a novel global patch matching method that focuses on removing fronto-parallel bias and obtaining continuous smooth surfaces, under the assumption that the scenes covered by the stereo pairs are piecewise continuous. First, the simple linear iterative clustering (SLIC) method is used to segment the base image into a series of patches. Then, a global energy function, consisting of a data term and a smoothness term, is built on the patches: the data term is the second-order Taylor expansion of the correlation coefficients, and the smoothness term combines connectivity and coplanarity constraints. We rewrite the global energy function as a quadratic matrix function and use least-squares methods to obtain the optimal solution. Experiments on the Adirondack and Motorcycle stereo pairs of the Middlebury benchmark show that the proposed method removes fronto-parallel bias effectively and produces continuous smooth surfaces.
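The key step in this abstract, minimising a quadratic (data + smoothness) energy by least squares, can be illustrated in a few lines. This is a hedged sketch, not the authors' code: the matrices below are toy stand-ins for the real Taylor-expansion data term and the connectivity/coplanarity smoothness term.

```python
import numpy as np

# Quadratic energy  E(x) = ||A_d x - b_d||^2 + lam * ||A_s x||^2 ,
# where x stacks per-patch surface parameters. The data rows stand in for
# the second-order Taylor expansion of the correlation coefficients; the
# smoothness rows penalise disagreement between neighbouring patches.
rng = np.random.default_rng(0)
n_patches, n_data = 5, 20
A_d = rng.standard_normal((n_data, n_patches))   # toy data-term design matrix
b_d = rng.standard_normal(n_data)                # toy data-term targets

# Smoothness: each row penalises the difference between adjacent patches.
A_s = np.zeros((n_patches - 1, n_patches))
for i in range(n_patches - 1):
    A_s[i, i], A_s[i, i + 1] = 1.0, -1.0
lam = 0.5

# Stack both terms into one linear least-squares problem and solve it.
A = np.vstack([A_d, np.sqrt(lam) * A_s])
b = np.concatenate([b_d, np.zeros(n_patches - 1)])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x.shape)  # one parameter per patch
```

The point of rewriting the energy in matrix form is exactly this: once both terms are stacked into a single design matrix, one `lstsq` call gives the global optimum.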

  3. Operation and performance of the mars exploration rover imaging system on the martian surface

    USGS Publications Warehouse

    Maki, J.N.; Litwin, T.; Schwochert, M.; Herkenhoff, K.

    2005-01-01

    The Imaging System on the Mars Exploration Rovers has successfully operated on the surface of Mars for over one Earth year. The acquisition of hundreds of panoramas and tens of thousands of stereo pairs has enabled the rovers to explore Mars at a level of detail unprecedented in the history of space exploration. In addition to providing scientific value, the images also play a key role in the daily tactical operation of the rovers. The mobile nature of the MER surface mission requires extensive use of the imaging system for traverse planning, rover localization, remote sensing instrument targeting, and robotic arm placement. Each of these activity types requires a different set of data compression rates, surface coverage, and image acquisition strategies. An overview of the surface imaging activities is provided, along with a summary of the image data acquired to date. © 2005 IEEE.

  4. Fusion of Laser Altimetry Data with Dems Derived from Stereo Imaging Systems

    NASA Astrophysics Data System (ADS)

    Schenk, T.; Csatho, B. M.; Duncan, K.

    2016-06-01

    During the last two decades, surface elevation data have been gathered over the Greenland Ice Sheet (GrIS) from a variety of different sensors, including spaceborne and airborne laser altimetry, such as NASA's Ice, Cloud, and land Elevation Satellite (ICESat), the Airborne Topographic Mapper (ATM) and the Laser Vegetation Imaging Sensor (LVIS), as well as from stereo satellite imaging systems, most notably the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and WorldView. The spatio-temporal resolution, accuracy, and spatial coverage of these data differ widely. For example, laser altimetry systems are much more accurate than DEMs derived by correlation from imaging systems; on the other hand, DEMs usually have superior spatial resolution and extended spatial coverage. We present in this paper an overview of the SERAC (Surface Elevation Reconstruction And Change detection) system, designed to cope with this data complexity and the computation of elevation change histories. SERAC simultaneously determines the ice sheet surface shape and the time series of elevation changes for surface patches whose size depends on the ruggedness of the surface and the point distribution of the sensors involved. By incorporating different sensors, SERAC is a true fusion system that generates the best plausible result (a time series of elevation changes), a result that is better than the sum of its individual parts. We follow this with an example for Helheim Glacier, involving ICESat, ATM and LVIS laser altimetry data together with ASTER DEMs.

  5. Multi-temporal database of High Resolution Stereo Camera (HRSC) images - Alpha version

    NASA Astrophysics Data System (ADS)

    Erkeling, G.; Luesebrink, D.; Hiesinger, H.; Reiss, D.; Jaumann, R.

    2014-04-01

    Image data transmitted to Earth by Martian spacecraft since the 1970s, for example by Mariner and Viking, Mars Global Surveyor (MGS), Mars Express (MEx) and the Mars Reconnaissance Orbiter (MRO), have shown that the surface of Mars has changed dramatically and is continually changing [e.g., 1-8]. The changes are attributed to a large variety of atmospheric, geological and morphological processes, including eolian processes [9,10], mass wasting [11], changes of the polar caps [12] and impact cratering [13]. In addition, comparisons between Mariner, Viking and Mars Global Surveyor images suggest that more than one third of the Martian surface has brightened or darkened by at least 10% [6]. Albedo changes can affect the global heat balance and the circulation of winds, which can result in further surface changes [14,15]. The High Resolution Stereo Camera (HRSC) [16,17] on board Mars Express (MEx) covers large areas at high resolution and is therefore well suited to detect the frequency, extent and origin of Martian surface changes. Since 2003, HRSC has acquired high-resolution images of the Martian surface and contributed to Martian research, with a focus on surface morphology, geology and mineralogy, the role of liquid water on the surface and in the atmosphere, volcanism, and the proposed climate change throughout Martian history, and has significantly improved our understanding of the evolution of Mars [18-21]. The HRSC data are available at ESA's Planetary Science Archive (PSA) as well as through the NASA Planetary Data System (PDS). Both data platforms are frequently used by the scientific community and provide additional software and environments to further generate map-projected and geometrically calibrated HRSC data. 
However, while previews of the images are available, there is no way to quickly and conveniently see the spatial and temporal availability of HRSC images in a specific region, which is important for detecting the surface changes that occurred between two or more images.

  6. New opportunities in planetary geomorphology: an assessment of the capabilities of the Colour and Stereo Surface Imaging System (CaSSIS) on The Exomars Trace Gas Orbiter through Image Simulation.

    NASA Astrophysics Data System (ADS)

    Tornabene, Livio Leonardo; Seelos, Frank; Pommerol, Antoine; Thomas, Nick; Caudill, Christy; Conway, Susan J.

    2017-04-01

    The Colour and Stereo Surface Imaging System (CaSSIS) is a full-colour visible to near-infrared (VNIR) bi-directional push-frame stereo camera onboard the ExoMars 2016 Trace Gas Orbiter (TGO). For more details on ExoMars TGO and its payload, please see [4]; for the CaSSIS instrument, see [1]; and for the first CaSSIS stereo images acquired during Mars Capture Orbit (MCO) and preliminary 3D reconstructions from them, see [5]. CaSSIS will provide full-colour, stereo and repeat imaging spanning different times of day and covering all seasons. Such images will be used to address the following objectives: 1) characterizing possible surface/subsurface sources for methane and other trace gases; 2) investigating dynamic surface processes that may contribute to atmospheric gases; and 3) certifying and characterizing candidate landing site safety and hazards (e.g., rocks, slopes, etc.). Here we present a summary, and some highlights, based on the creation and analysis of simulated CaSSIS image cubes [see 2, 3]. We generated simulated images that are spatially (4.6 m/px) and spectrally (4-band) consistent with CaSSIS from existing Mars Reconnaissance Orbiter (MRO) datasets. Simulated CaSSIS colours were generated from hyperspectral VNIR (S-detector) data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) after the methods of [6], and were then combined with spatially oversampled and resampled 32-bit calibrated I/F images from the Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) [2, 3]. For details of the simulation process and the various products produced, please see [2, 3]. Our simulations show that such colour coverage will be particularly valuable for facilitating and enhancing seasonal-process and change-detection studies. 
For example, a simulated image of Gasa crater demonstrates how additional colour context would facilitate gully change detections that can be subtle and difficult to detect in single-band images, or that are missed by the HiRISE colour swath. Another result from our colour analysis is the excellent separation of ferrous- and ferric-bearing surface materials provided by band-ratio colour composite images utilizing the two NIR bands of CaSSIS (3RED, 4NIR). These images will be particularly useful for associating CaSSIS colour units with spectral units defined by orbiting spectrometers (e.g., CRISM), thereby extending spectral mapping to CaSSIS spatial scales. This will be particularly beneficial for landing sites where it is difficult to achieve continuous colour coverage with HiRISE. Our analysis shows that dune movement can be detected at the scale of CaSSIS, given a long enough baseline. Other results include resolving: 1) larger individual or sets of Recurring Slope Lineae (RSL); 2) small impacts (including ice excavators); and 3) surface changes associated with landers/rovers (NOTE: landers/rovers and their tracks are not resolvable). References: [1] Thomas N. et al. (2016), submitted to SSR. [2] Tornabene L. et al. (2017), submitted to SSR. [3] Tornabene L. et al. (2016) LPSC 47, Abstract #2695. [4] Vago J. et al. (2015) SSR, 49, 518-528. [5] Cremonese G. et al. (2017) LPSC 48. [6] Seelos F. et al. (2011) AGU Fall, vol. 23, Abstract #1714. [7] Delamere A. et al. (2010), Icarus, 205, 38-52. Acknowledgements: The authors wish to thank the spacecraft and instrument engineering teams for the successful completion of the instrument. CaSSIS is a project of the University of Bern and funded through the Swiss Space Office via ESA's PRODEX programme. The instrument hardware development was also supported by the Italian Space Agency (ASI) (ASI-INAF agreement no. I/018/12/0), INAF/Astronomical Observatory of Padova, and the Space Research Center (CBK) in Warsaw. 
Support from SGF (Budapest), the University of Arizona Lunar and Planetary Laboratory, and NASA are also gratefully acknowledged. The lead author also acknowledges personal Canadian-based support from the Canadian Space Agency (CSA), and the NSERC DG programme.

  7. GF-7 Imaging Simulation and Dsm Accuracy Estimate

    NASA Astrophysics Data System (ADS)

    Yue, Q.; Tang, X.; Gao, X.

    2017-05-01

    GF-7 is a two-line-array stereo imaging satellite for surveying and mapping, to be launched in 2018. Its resolution is about 0.8 m at the subastral point, corresponding to a swath width of 20 km, and the viewing angles of its forward and backward cameras are 5 and 26 degrees. This paper proposes an imaging simulation method for GF-7 stereo images. WorldView-2 stereo images were used as the basic data for the simulation. That is, instead of using a DSM and DOM as basic data (an "ortho-to-stereo" method), we used a "stereo-to-stereo" method, which better reflects the differences in geometry and radiometry at different looking angles. Its shortcoming is that geometric error is introduced by two factors: the difference in looking angles between the basic and simulated images, and inaccurate or missing ground reference data. We generated a DSM from the WorldView-2 stereo images. This WorldView-2 DSM was used both as the reference DSM to estimate the accuracy of the DSM generated from the simulated GF-7 stereo images, and as "ground truth" to establish the relationship between WorldView-2 image points and simulated image points. Static MTF was simulated on the instantaneous focal-plane "image" by filtering. SNR was simulated in the electronic sense: the digital value of a WorldView-2 image point was converted to radiance and used as the radiance seen by the simulated GF-7 camera. This radiance was converted to an electron count n according to the physical parameters of the GF-7 camera, and a noise electron count n1 was drawn as a random number between -√n and √n. The overall electron count obtained by the TDI CCD was summed and converted to the digital value of the simulated GF-7 image. Sinusoidal curves with different amplitudes, frequencies and initial phases were used as attitude curves. Geometric installation errors of the CCD tiles were also simulated, considering rotation and translation factors. 
An accuracy estimate was then made for the DSM generated from the simulated images.
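The SNR simulation chain described in this abstract (DN → radiance → electron count n, noise drawn uniformly from [-√n, √n], back to DN) can be sketched as follows. The gain values here are invented placeholders; the real conversion uses GF-7's physical camera parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_dn(dn_wv2, rad_gain=0.05, electrons_per_rad=2000.0, dn_gain=0.01):
    """Convert WorldView-2 DN -> radiance -> electron count n, add a noise
    electron count drawn uniformly from [-sqrt(n), +sqrt(n)], and convert
    back to a simulated GF-7 DN. All gains are illustrative assumptions."""
    radiance = rad_gain * dn_wv2                  # DN -> radiance (toy gain)
    n = radiance * electrons_per_rad              # radiance -> electrons
    noise = rng.uniform(-np.sqrt(n), np.sqrt(n))  # n1 in [-sqrt(n), sqrt(n)]
    return dn_gain * (n + noise)                  # electrons -> simulated DN

dn = simulate_dn(np.full(1000, 500.0))
print(dn.mean())  # scatters around the noise-free value of 500
```

Because the noise amplitude grows as √n, the relative noise shrinks as 1/√n, which is the usual shot-noise-like behaviour the simulation is trying to capture.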

  8. Sedna Planitia (Right Member of a Synthetic Stereo Pair)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This perspective view of Venus, generated by computer from Magellan data and color-coded with emissivity, shows part of the lowland plains in Sedna Planitia. Circular depressions with associated fracture patterns, called 'coronae', are apparently unique to the lowlands of Venus, and tend to occur in linear clusters along the planet's major tectonic belts, as seen in this image. Coronae differ greatly in size and detailed morphology: the central depression may or may not lie below the surrounding plains, and may or may not be surrounded by a raised rim or a moat outside the rim. Coronae are thought to be caused by localized 'hot spot' magmatic activity in Venus' subsurface. Intrusion of magma into the crust first pushes up the surface, after which cooling and contraction create the central depression and generate a pattern of concentric fractures. In some cases, lava may be extruded onto the surface, as seen here as bright flows in the foreground. This image is the right member of a synthetic stereo pair; the other image is PIA00313. To view the region in stereo, download the two images, arrange them side by side on the screen or in hardcopy, and view this image with the right eye and the other with the left. For best viewing, use a stereoscope or size the images so that their width is close to the interpupillary distance, about 6.6 cm (2.6 inches). Magellan MIDR quadrangle* containing this image: C1-45N011. Image resolution (m): 225. Size of region shown (E-W x N-S, in km): 1900 x 120 at front edge. Range of emissivities from violet to red: 0.82 -- 0.88. Vertical exaggeration: 20. Azimuth of viewpoint (deg clockwise from East): 13. Elevation of viewpoint (km): 300. *Quadrangle name indicates approximate center latitude (N=north, S=south) and center longitude (East).

  9. Sedna Planitia (Left Member of a Synthetic Stereo Pair)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This perspective view of Venus, generated by computer from Magellan data and color-coded with emissivity, shows part of the lowland plains in Sedna Planitia. Circular depressions with associated fracture patterns, called 'coronae', are apparently unique to the lowlands of Venus, and tend to occur in linear clusters along the planet's major tectonic belts, as seen in this image. Coronae differ greatly in size and detailed morphology: the central depression may or may not lie below the surrounding plains, and may or may not be surrounded by a raised rim or a moat outside the rim. Coronae are thought to be caused by localized 'hot spot' magmatic activity in Venus' subsurface. Intrusion of magma into the crust first pushes up the surface, after which cooling and contraction create the central depression and generate a pattern of concentric fractures. In some cases, lava may be extruded onto the surface, as seen here as bright flows in the foreground. This image is the left member of a synthetic stereo pair; the other image is PIA00314. To view the region in stereo, download the two images, arrange them side by side on the screen or in hardcopy, and view this image with the left eye and the other with the right. For best viewing, use a stereoscope or size the images so that their width is close to the interpupillary distance, about 6.6 cm (2.6 inches). Magellan MIDR quadrangle* containing this image: C1-45N011. Image resolution (m): 225. Size of region shown (E-W x N-S, in km): 1900 x 120 at front edge. Range of emissivities from violet to red: 0.82 -- 0.88. Vertical exaggeration: 20. Azimuth of viewpoint (deg clockwise from East): 13. Elevation of viewpoint (km): 300. *Quadrangle name indicates approximate center latitude (N=north, S=south) and center longitude (East).

  10. FPGA Implementation of Stereo Disparity with High Throughput for Mobility Applications

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Morfopolous, Arin; Matthies, Larry; Goldberg, Steven

    2011-01-01

    High speed stereo vision can allow unmanned robotic systems to navigate safely in unstructured terrain, but the computational cost can exceed the capacity of typical embedded CPUs. In this paper, we describe an end-to-end stereo computation co-processing system optimized for fast throughput that has been implemented on a single Virtex 4 LX160 FPGA. This system is capable of operating on images from a 1024 x 768 3CCD (true RGB) camera pair at 15 Hz. Data enters the FPGA directly from the cameras via Camera Link and is rectified, pre-filtered and converted into a disparity image entirely within the FPGA, incurring no CPU load. Once complete, a rectified image and the final disparity image are read out over the PCI bus, for a bandwidth cost of 68 MB/sec. Within the FPGA there are four distinct algorithms: Camera Link capture, bilinear rectification, bilateral subtraction pre-filtering and Sum of Absolute Differences (SAD) disparity. Each module is described in brief along with the data flow and control logic for the system. The system was successfully fielded on Carnegie Mellon University's National Robotics Engineering Center (NREC) Crusher system during extensive field trials in 2007 and 2008, and is being implemented for other surface mobility systems at JPL.
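The SAD disparity stage named in this abstract is simple enough to sketch in software. This is a minimal numpy re-implementation of winner-take-all block matching, not the FPGA pipeline; image sizes, the disparity range and the window radius are toy values.

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, r=2):
    """Winner-take-all disparity minimising the Sum of Absolute
    Differences over a (2r+1) x (2r+1) window."""
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        diff = np.full((h, w), np.inf)        # columns < d have no match
        diff[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
        # aggregate per-pixel differences over the window by shift-and-add
        agg = np.zeros((h, w))
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                agg += np.roll(np.roll(diff, dy, axis=0), dx, axis=1)
        cost[d] = agg
    # NOTE: image borders are unreliable in this sketch (np.roll wraps)
    return np.argmin(cost, axis=0)

# synthetic pair: the right image is the left image shifted by 4 pixels
rng = np.random.default_rng(0)
left = rng.uniform(size=(40, 60))
right = np.empty_like(left)
right[:, :56] = left[:, 4:]
right[:, 56:] = rng.uniform(size=(40, 4))
disp = sad_disparity(left, right)
print(disp[20, 30])  # recovers the true disparity of 4 in the interior
```

The FPGA version computes the same cost volume, but streams it through pipelined adders so that every pixel's SAD for all disparities is produced at line rate rather than by these nested Python loops.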

  11. Dsm Based Orientation of Large Stereo Satellite Image Blocks

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.; Reinartz, P.

    2012-07-01

    High resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCP are automatically derived from lower resolution reference datasets (Landsat ETM+ GeoCover and the SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using an RPC correction based on DSM alignment, resulting in improved geolocation of both the DSM and orthoimages. A scene-based method and a bundle block adjustment based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 CARTOSAT-1 stereo pairs are available. Both methods are tested against independent ground truth; checks indicate a lateral error of 10 meters.
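The affine RPC correction mentioned in this abstract amounts to fitting a 2D affine transform between RPC-predicted image coordinates and the coordinates of automatically matched GCPs. A hedged sketch with synthetic values (the true bias matrix below is invented for illustration):

```python
import numpy as np

def fit_affine(pred, obs):
    """Least-squares fit of a 2x3 affine matrix A so that
    obs ~ A @ [x, y, 1] for each predicted point (x, y)."""
    X = np.hstack([pred, np.ones((len(pred), 1))])   # (n, 3) design matrix
    A, *_ = np.linalg.lstsq(X, obs, rcond=None)      # (3, 2)
    return A.T                                       # (2, 3)

rng = np.random.default_rng(2)
pred = rng.uniform(0, 12000, size=(50, 2))           # RPC-projected pixels
true_A = np.array([[1.0002, 0.0001, 8.0],            # simulated sensor bias
                   [-0.0001, 0.9998, -5.0]])
obs = pred @ true_A[:, :2].T + true_A[:, 2]          # "matched GCP" pixels

A = fit_affine(pred, obs)
corrected = pred @ A[:, :2].T + A[:, 2]
print(np.abs(corrected - obs).max())  # residual after affine correction
```

In the real system the observations come from image matching against the reference datasets (or from DSM alignment), and the fit is done per scene or jointly in a bundle block adjustment; the least-squares core is the same.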

  12. Clinical applications of modern imaging technology: stereo image formation and location of brain cancer

    NASA Astrophysics Data System (ADS)

    Wang, Dezong; Wang, Jinxiang

    1994-05-01

    It is very important to locate the tumor of a patient who has cancer in the brain. If the doctor only has X-CT or MRI pictures, he does not know the size, shape, and location of the tumor, or the relation between the tumor and other organs. This paper presents the formation of stereo images of the cancer on the basis of color coding and color 3D reconstruction. Stereo images of the tumor, the brain, and the encephalic truncus (brainstem) are formed. The stereo image of the cancer can be rotated about the X, Y, and Z axes to show its shape from different directions. To show the location of the tumor, stereo images of the tumor and brainstem are provided at different angles. Cross-section pictures are also offered to indicate the relation of the brain, tumor, and brainstem on cross sections. The calculation of areas, volumes, and the distance between the cancer and the side of the brain is also described.

  13. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task. PMID:28629139
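The abstract does not give the imaging model in closed form, so the following is a generic single-scattering sketch, not the authors' exact model (which handles non-uniform backscatter from an active source near the cameras). In the standard formulation, the observed intensity is I = J·t + B, where J is the scene radiance, t the transmission, and B the backscatter.

```python
import numpy as np

def descatter(I, B, t):
    """Invert the single-scattering model I = J*t + B for the scene
    radiance J, given an estimated backscatter map B and transmission t.
    (Illustrative only; the paper's model is more elaborate.)"""
    return np.clip((I - B) / np.maximum(t, 1e-3), 0.0, 1.0)

# toy example: constant transmission and a flat backscatter field
h, w = 4, 4
J = np.full((h, w), 0.6)     # true scene radiance
B = np.full((h, w), 0.25)    # backscatter (non-uniform in practice)
t = 0.5                      # transmission exp(-beta * d)
I = J * t + B                # forward model: what the camera sees
print(np.allclose(descatter(I, B, t), J))  # True: J is recovered
```

The practical difficulty, and the paper's contribution, is estimating the non-uniform B for a light source close to the cameras; once B is known, the inversion above is trivial, and the descattered images feed directly into standard stereo matching.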

  14. Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system

    NASA Astrophysics Data System (ADS)

    Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng

    2009-02-01

    This paper describes a novel embedded system capable of estimating the 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow H10x8.5) and (2) implementation on an embedded system. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs complex computational tasks, while the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors that rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images that view a common chessboard calibration pattern at a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational-cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs, and surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings and (3) stereo reconstruction results for several free-form objects.
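The zoom-calibration interpolation described here can be sketched in one call: calibrate intrinsics at a few discrete zoom positions, then interpolate for any setting in between. All numbers below are invented (the paper does not list its calibration table), and real systems may interpolate distortion coefficients and principal point the same way.

```python
import numpy as np

# Hypothetical calibration table: focal length (in pixels) measured from
# chessboard calibration at five pre-defined zoom motor positions.
zoom_positions = np.array([0, 250, 500, 750, 1000])            # motor counts
focal_px = np.array([800.0, 1400.0, 2300.0, 3600.0, 5200.0])   # calibrated f

def focal_at(zoom):
    """Linearly interpolate the focal length at an arbitrary zoom count."""
    return np.interp(zoom, zoom_positions, focal_px)

print(focal_at(375))  # halfway between the 250 and 500 calibrations
```

Linear interpolation is the simplest choice; if the focal length varies strongly nonlinearly with motor count, a spline or per-parameter polynomial fit over the same table would be the natural refinement.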

  15. Massive stereo-based DTM production for Mars on cloud computers

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Xiong, Si-Ting; Putri, A. R. D.; Walter, S. H. G.; Veitch-Michaelis, J.; Yershov, V.

    2018-05-01

    Digital Terrain Model (DTM) creation is essential to improving our understanding of the formation processes of the Martian surface. Although there have been previous demonstrations of open-source or commercial planetary 3D reconstruction software, planetary scientists are still struggling to create good quality DTMs that meet their science needs, especially when there is a requirement to produce a large number of high quality DTMs using "free" software. In this paper, we describe a new open source system that overcomes many of these obstacles, demonstrating results in the context of issues found from experience with several planetary DTM pipelines. We introduce a new fully automated multi-resolution DTM processing chain for NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) stereo processing, called the Co-registration Ames Stereo Pipeline (ASP) Gotcha Optimised (CASP-GO), based on the open source NASA ASP. CASP-GO employs tie-point based multi-resolution image co-registration, and Gotcha sub-pixel refinement and densification. The CASP-GO pipeline is used to produce planet-wide CTX and HiRISE DTMs that guarantee global geo-referencing compliance with respect to High Resolution Stereo Camera (HRSC) imaging, and thence to the Mars Orbiter Laser Altimeter (MOLA), providing refined stereo matching completeness and accuracy. All software and good quality products introduced in this paper are being made open-source to the planetary science community through collaboration with NASA Ames, the United States Geological Survey (USGS) and the Jet Propulsion Laboratory (JPL) Advanced Multi-Mission Operations System (AMMOS) Planetary Data System (PDS) Pipeline Service (APPS-PDS4), and are browseable and visualisable through the iMars web-based Geographic Information System (webGIS).

  16. Double biprism arrays design using for stereo-photography of mobile phone camera

    NASA Astrophysics Data System (ADS)

    Sun, Wen-Shing; Chu, Pu-Yi; Chao, Yu-Hao; Pan, Jui-Wen; Tien, Chuen-Lin

    2016-11-01

    Generally, a mobile phone uses a single camera to capture images, so it is hard to obtain a stereo image pair. Adding a biprism array in front of the camera allows the image pair to be captured easily; users can thus take stereo photographs anywhere with their mobile phone, and simply remove the array when a normal image is wanted. However, biprism arrays introduce chromatic aberration. We therefore design a double biprism array to reduce the chromatic aberration.

  17. Digital Elevation Models of Patterned Ground in the Canadian Arctic and Implications for the Study of Mars

    NASA Astrophysics Data System (ADS)

    Knightly, P.; Murakami, Y.; Clarke, J.; Sizemore, H.; Siegler, M.; Rupert, S.; Chevrier, V.

    2017-12-01

    Patterned ground forms in periglacial zones through both expansion and contraction of permafrost by freeze-thaw cycles and sub-freezing temperature changes, and has been observed on both Earth and Mars, from orbit and, at the Phoenix and Viking 2 landing sites, from the surface. The Phoenix mission to Mars studied patterned ground in the vicinity of the spacecraft, including the excavation of a trench revealing water-ice permafrost beneath the surface. A study of patterned ground at the Haughton impact structure on Devon Island used stereo-pair imaging and three-dimensional photographic models to catalog the type and occurrence of patterned ground in the study area. This image catalog was then used to provide new insight into photographic observations gathered by Phoenix. Stereo-pair imagery has been a valuable geoscience tool for decades, and it is an ideal tool for comparative planetary geology studies. Stereo-pair images captured on Devon Island were turned into digital elevation models (DEMs), and comparisons were noted between the permafrost and patterned ground environments of Earth and Mars, including variations in grain sorting, active layer thickness, and ice table depth. Recent advances in 360° cameras also enabled the creation of detailed, immersive site models of patterned ground at selected sites in Haughton crater on Devon Island. The information from this ground truth study will enable the development and refinement of existing models to better evaluate patterned ground on Mars and predict its evolution.

  18. Wild 2 Features

    NASA Image and Video Library

    2004-06-17

    These images taken by NASA's Stardust spacecraft highlight the diverse features that make up the surface of comet Wild 2, showing a variety of small pinnacles and mesas seen on the limb of the comet and the location of a 2-kilometer (1.2-mile) series of aligned scarps, or cliffs, that are best seen in the stereo images. http://photojournal.jpl.nasa.gov/catalog/PIA06284

  19. Model-based conifer crown surface reconstruction from multi-ocular high-resolution aerial imagery

    NASA Astrophysics Data System (ADS)

    Sheng, Yongwei

    2000-12-01

    Tree crown parameters such as width, height, shape and crown closure are desirable in forestry and ecological studies, but they are time-consuming and labor-intensive to measure in the field. The stereoscopic capability of high-resolution aerial imagery provides a way to reconstruct crown surfaces. Existing photogrammetric algorithms designed to map terrain surfaces, however, cannot adequately extract crown surfaces, especially for steep conifer crowns. Considering crown surface reconstruction in the broader context of tree characterization from aerial images, we develop a rigorous perspective tree image formation model to bridge image-based tree extraction and crown surface reconstruction, and an integrated model-based approach to conifer crown surface reconstruction. Based on the fact that most conifer crowns are in a solid geometric form, conifer crowns are modeled as a generalized hemi-ellipsoid. Both automatic and semi-automatic approaches to optimal tree model development from multi-ocular images are investigated. The semi-automatic 3D tree interpreter developed in this thesis is able to efficiently extract reliable tree parameters and tree models in complicated tree stands. This thesis starts with a sophisticated stereo matching algorithm, and incorporates tree models to guide stereo matching. The following critical problems are addressed in the model-based surface reconstruction process: (1) the problem of surface model composition from tree models, (2) the occlusion problem in disparity prediction from tree models, (3) the problem of integrating the predicted disparities into image matching, (4) the tree model edge effect reduction on the disparity map, (5) the occlusion problem in orthophoto production, and (6) the foreshortening problem in image matching, which is very serious for conifer crown surfaces. Solutions to the above problems are necessary for successful crown surface reconstruction. 
The model-based approach was applied to recover the canopy surface of a dense redwood stand using tri-ocular high-resolution images scanned from 1:2,400 aerial photographs. The results demonstrate the approach's ability to reconstruct complicated stands. The model-based approach proposed in this thesis is potentially applicable to other surface-recovery problems with a priori knowledge about objects.
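The generalized hemi-ellipsoid crown model mentioned above can be written as (r/a)^p + (z/b)^p = 1, giving the crown radius as a function of height above the crown base. The exact parameterisation used in the thesis may differ; this sketch assumes a maximum crown radius a, crown depth b, and shape exponent p (p = 2 is an ordinary hemi-ellipsoid; larger p gives a more cylindrical crown), with illustrative values.

```python
import numpy as np

def crown_radius(z, a=2.0, b=6.0, p=2.0):
    """Radius of a generalized hemi-ellipsoid crown at height z above the
    crown base (0 <= z <= b): (r/a)^p + (z/b)^p = 1."""
    z = np.clip(z, 0.0, b)
    return a * (1.0 - (z / b) ** p) ** (1.0 / p)

print(crown_radius(0.0))  # widest at the crown base: a = 2.0
print(crown_radius(6.0))  # zero radius at the apex
```

In the model-based pipeline this surface, positioned by the extracted tree parameters, is what predicts disparities and occlusions to guide stereo matching on the steep conifer crowns.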

  20. Real-Time Visualization Tool Integrating STEREO, ACE, SOHO and the SDO

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Marchant, W.

    2011-12-01

    The STEREO/IMPACT team has developed a new web-based visualization tool for near real-time data from the STEREO instruments, ACE and SOHO as well as relevant models of solar activity. This site integrates images, solar energetic particle, solar wind plasma and magnetic field measurements in an intuitive way using near real-time products from NOAA and other sources to give an overview of recent space weather events. This site enhances the browse tools already available at UC Berkeley, UCLA and Caltech which allow users to visualize similar data from the start of the STEREO mission. Our new near real-time tool utilizes publicly available real-time data products from a number of missions and instruments, including SOHO LASCO C2 images from the SOHO team's NASA site, SDO AIA images from the SDO team's NASA site, STEREO IMPACT SEP data plots and ACE EPAM data plots from the NOAA Space Weather Prediction Center and STEREO spacecraft positions from the STEREO Science Center.

  1. Stereoscopy and the Human Visual System

    PubMed Central

    Banks, Martin S.; Read, Jenny C. A.; Allison, Robert S.; Watt, Simon J.

    2012-01-01

    Stereoscopic displays have become important for many applications, including operation of remote devices, medical imaging, surgery, scientific visualization, and computer-assisted design. But the most significant and exciting development is the incorporation of stereo technology into entertainment: specifically, cinema, television, and video games. In these applications for stereo, three-dimensional (3D) imagery should create a faithful impression of the 3D structure of the scene being portrayed. In addition, the viewer should be comfortable and not leave the experience with eye fatigue or a headache. Finally, the presentation of the stereo images should not create temporal artifacts like flicker or motion judder. This paper reviews current research on stereo human vision and how it informs us about how best to create and present stereo 3D imagery. The paper is divided into four parts: (1) getting the geometry right, (2) depth cue interactions in stereo 3D media, (3) focusing and fixating on stereo images, and (4) how temporal presentation protocols affect flicker, motion artifacts, and depth distortion. PMID:23144596

  2. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romps, David; Oktem, Rusen

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized, stereo-calibrated time series of images that can be used for 3D cloud-mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory, covering the region from northeastern, northwestern, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain a 3D reconstruction by triangulation, and the 3D reconstructions from the ring of three stereo pairs can be combined to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.

  3. Left Limb of North Pole of the Sun, March 20, 2007 (Anaglyph)

    NASA Technical Reports Server (NTRS)

    2007-01-01

    [figure removed for brevity, see original site] [figure removed for brevity, see original site] Figure 1: Left eye view of a stereo pair Click on the image for full resolution TIFF Figure 2: Right eye view of a stereo pair Click on the image for full resolution TIFF Figure 1: This image was taken by the SECCHI Extreme UltraViolet Imager (EUVI) mounted on the STEREO-B spacecraft. STEREO-B is located behind the Earth, and follows the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual left eye in space. Figure 2: This image was taken by the SECCHI Extreme UltraViolet Imager (EUVI) mounted on the STEREO-A spacecraft. STEREO-A is located ahead of the Earth, and leads the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual right eye in space.

    NASA's Solar TErrestrial RElations Observatory (STEREO) satellites have provided the first three-dimensional images of the Sun. For the first time, scientists will be able to see structures in the Sun's atmosphere in three dimensions. The new view will greatly aid scientists' ability to understand solar physics and thereby improve space weather forecasting.

    This image is a composite of left and right eye color image pairs taken by the SECCHI Extreme UltraViolet Imager (EUVI) mounted on the STEREO-B and STEREO-A spacecraft. STEREO-B is located behind the Earth, and follows the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual left eye in space. STEREO-A is located ahead of the Earth, and leads the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual right eye in space.

    The EUVI imager is sensitive to wavelengths of light in the extreme ultraviolet portion of the spectrum. EUVI bands at wavelengths of 304, 171 and 195 Angstroms have been mapped to the red, blue and green visible portions of the spectrum and processed to emphasize the three-dimensional structure of the solar material.

    STEREO, a two-year mission launched in October 2006, will provide a unique and revolutionary view of the Sun-Earth system. The two nearly identical observatories -- one ahead of Earth in its orbit, the other trailing behind -- will trace the flow of energy and matter from the Sun to Earth. They will reveal the 3D structure of coronal mass ejections, violent eruptions of matter from the Sun that can disrupt satellites and power grids, and help us understand why they happen. STEREO will become a key addition to the fleet of space weather detection satellites by providing more accurate alerts for the arrival time of Earth-directed solar ejections with its unique side-viewing perspective.

    STEREO is the third mission in NASA's Solar Terrestrial Probes program within NASA's Science Mission Directorate, Washington. The Goddard Science and Exploration Directorate manages the mission, instruments, and science center. The Johns Hopkins University Applied Physics Laboratory, Laurel, Md., designed and built the spacecraft and is responsible for mission operations. The imaging and particle detecting instruments were designed and built by scientific institutions in the U.S., UK, France, Germany, Belgium, Netherlands, and Switzerland. JPL is a division of the California Institute of Technology in Pasadena.

  4. Combined DEM Extraction Method from StereoSAR and InSAR

    NASA Astrophysics Data System (ADS)

    Zhao, Z.; Zhang, J. X.; Duan, M. Y.; Huang, G. M.; Yang, S. C.

    2015-06-01

    A pair of SAR images acquired from different positions can be used to generate a digital elevation model (DEM). Two techniques exploit this characteristic: stereo SAR and interferometric SAR (InSAR). Both recover the third dimension (topography) and, at the same time, identify the absolute position (geolocation) of the pixels in the imaged area, thus allowing the generation of DEMs. In this paper, a combined StereoSAR and InSAR adjustment model is constructed that unifies DEM extraction from InSAR and StereoSAR in the same coordinate system and thereby improves the three-dimensional positioning accuracy of the target. Assume there are four images 1, 2, 3 and 4: the SAR image pair 1,2 meets the conditions required for InSAR processing, while the pair 3,4 forms a stereo image pair. The phase model is based on the rigorous InSAR imaging geometric model. The master image 1 and the slave image 2 are used in InSAR processing, but the slave image 2 is used only when establishing the model: its pixels are related to the corresponding pixels of the master image 1 through the image coregistration coefficients, from which the corresponding phase is calculated, so the slave image is not required in the construction of the phase model. In the Range-Doppler (RD) model, the range equation and the Doppler equation are functions of the target geolocation, and in the phase equation the phase is likewise a function of the target geolocation. We exploit the combined adjustment model to solve for the target geolocation, reducing the problem to solving for three unknowns from seven equations. The model was tested for DEM extraction on spaceborne InSAR and StereoSAR data and compared with the InSAR and StereoSAR methods respectively. The results showed that the model delivered better performance on the experimental imagery and can be used for DEM extraction applications.
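    The core numerical step the abstract describes — solving for three unknown target coordinates from seven observation equations in a least-squares adjustment — can be sketched with a generic Gauss-Newton solver. The toy residuals below are simple range equations to seven known positions, not the paper's actual RD and phase equations; the station layout and numbers are illustrative assumptions.

    ```python
    import numpy as np

    def gauss_newton(residual, x0, iters=20):
        """Least-squares solution of an overdetermined nonlinear system
        via Gauss-Newton with a forward-difference Jacobian."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            r = residual(x)
            J = np.empty((len(r), len(x)))
            eps = 1e-6
            for j in range(len(x)):
                dx = np.zeros_like(x)
                dx[j] = eps
                J[:, j] = (residual(x + dx) - r) / eps
            step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # normal-equation step
            x = x + step
            if np.linalg.norm(step) < 1e-10:
                break
        return x

    # Toy adjustment: 7 range observations, 3 unknown target coordinates.
    stations = np.array([[0, 0, 500], [800, 0, 510], [0, 800, 505],
                         [800, 800, 495], [400, 400, 520],
                         [200, 600, 500], [600, 200, 515]], dtype=float)
    p_true = np.array([350.0, 420.0, 12.0])
    ranges = np.linalg.norm(stations - p_true, axis=1)  # simulated measurements

    res = lambda p: np.linalg.norm(stations - p, axis=1) - ranges
    p_est = gauss_newton(res, x0=[100.0, 100.0, 0.0])
    print(np.round(p_est, 3))  # recovers the target position
    ```

    With more equations than unknowns, the redundancy is what lets the combined adjustment improve geolocation accuracy over either technique alone.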

  5. Opportunity's Surroundings on Sol 1687 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11739 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11739

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, 360-degree view of the rover's surroundings on the 1,687th Martian day, or sol, of its surface mission (Oct. 22, 2008). The view appears three-dimensional when viewed through red-blue glasses.

    Opportunity had driven 133 meters (436 feet) that sol, crossing sand ripples up to about 10 centimeters (4 inches) tall. The tracks visible in the foreground are in the east-northeast direction.

    Opportunity's position on Sol 1687 was about 300 meters southwest of Victoria Crater. The rover was beginning a long trek toward a much larger crater, Endeavour, about 12 kilometers (7 miles) to the southeast.

    This panorama combines right-eye and left-eye views presented as cylindrical-perspective projections with geometric seam correction.

  6. Test Image of Earth Rocks by Mars Camera Stereo

    NASA Image and Video Library

    2010-11-16

    This stereo view of terrestrial rocks combines two images taken by a testing twin of the Mars Hand Lens Imager (MAHLI) camera on NASA's Mars Science Laboratory. 3D glasses are necessary to view this image.

  7. Stereo sequence transmission via conventional transmission channel

    NASA Astrophysics Data System (ADS)

    Lee, Ho-Keun; Kim, Chul-Hwan; Han, Kyu-Phil; Ha, Yeong-Ho

    2003-05-01

    This paper proposes a new stereo-sequence transmission technique that uses digital watermarking for compatibility with conventional 2D digital TV. Stereo sequences are generally compressed and transmitted by exploiting the temporal-spatial redundancy between the stereo images, but users with conventional digital TVs cannot watch the transmitted 3D sequences because the various 3D compression methods differ. To solve this problem, we exploit the information-hiding capability of digital watermarking and conceal the information of the second stereo image in the three channels of the reference image. The main goal of the presented technique is to let people with a conventional DTV watch stereo movies at the same time. This goal is reached by considering the response of human eyes to color information and by using digital watermarking. To hide the right images in the left images effectively, bit changes in the three color channels and disparity estimation according to the estimated disparity values are performed. The proposed method assigns the displacement information of the right image to each YCbCr channel in the DCT domain; the LSB of each YCbCr channel is changed according to the bits of the disparity information. The performance of the presented method is confirmed by several computer experiments.
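    The embedding primitive the abstract relies on — overwriting the least significant bit of each channel sample with a payload bit — can be sketched in a few lines. This is a simplified spatial-domain illustration, not the paper's YCbCr/DCT-domain scheme; the array shapes are assumptions.

    ```python
    import numpy as np

    def embed_lsb(carrier, payload_bits):
        """Overwrite the LSB of every sample of an (H, W, 3) uint8 carrier
        with the corresponding payload bit (0 or 1)."""
        return (carrier & 0xFE) | payload_bits.astype(np.uint8)

    def extract_lsb(stego):
        """Recover the hidden bit-plane from the stego image."""
        return stego & 0x01

    rng = np.random.default_rng(0)
    carrier = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
    bits = rng.integers(0, 2, size=(4, 4, 3), dtype=np.uint8)  # e.g. disparity bits

    stego = embed_lsb(carrier, bits)
    recovered = extract_lsb(stego)
    assert np.array_equal(recovered, bits)
    # Each sample changes by at most one grey level, so a 2D viewer
    # sees an essentially unaltered reference image:
    assert np.max(np.abs(stego.astype(int) - carrier.astype(int))) <= 1
    ```

    A 3D-capable receiver extracts the hidden bit-plane to rebuild the second view; a conventional DTV simply displays the (visually unchanged) reference image.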

  8. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    NASA Astrophysics Data System (ADS)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for a quadruped robot's autonomous navigation system while walking over rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. From the stereo-matched edge pixel pairs, 3D coordinates are estimated according to the binocular stereo-vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs 3D scenes quickly and efficiently.
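    The final step — turning a matched pixel pair into a 3D coordinate via the binocular imaging model — reduces, for a rectified pair, to the standard pinhole relation Z = f·B/d. The sketch below uses illustrative focal length and baseline values, not parameters from the paper.

    ```python
    def depth_from_disparity(d_px, focal_px, baseline_m):
        """Depth (metres) of a matched pixel pair in a rectified stereo rig:
        Z = f * B / d, with f in pixels, baseline B in metres, disparity d
        in pixels."""
        if d_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / d_px

    # A 64-pixel disparity with an 800-pixel focal length and 0.12 m baseline:
    z = depth_from_disparity(64, focal_px=800, baseline_m=0.12)
    print(round(z, 3))  # 1.5 (metres)
    ```

    The full X and Y coordinates then follow from back-projection, e.g. X = (u - cx)·Z/f, so each matched edge pixel pair yields one 3D terrain point.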

  9. Utility of Digital Stereo Images for Optic Disc Evaluation

    PubMed Central

    Ying, Gui-shuang; Pearson, Denise J.; Bansal, Mayank; Puri, Manika; Miller, Eydie; Alexander, Judith; Piltz-Seymour, Jody; Nyberg, William; Maguire, Maureen G.; Eledath, Jayan; Sawhney, Harpreet

    2010-01-01

    Purpose. To assess the suitability of digital stereo images for optic disc evaluations in glaucoma. Methods. Stereo color optic disc images in both digital and 35-mm slide film formats were acquired contemporaneously from 29 subjects with various cup-to-disc ratios (range, 0.26–0.76; median, 0.475). Using a grading scale designed to assess image quality, the ease of visualizing optic disc features important for glaucoma diagnosis, and the comparative diameters of the optic disc cup, experienced observers separately compared the primary digital stereo images to each subject's 35-mm slides, to scanned images of the same 35-mm slides, and to grayscale conversions of the digital images. Statistical analysis accounted for multiple gradings and comparisons and also assessed image formats under monoscopic viewing. Results. Overall, the quality of primary digital color images was judged superior to that of 35-mm slides (P < 0.001), including improved stereo (P < 0.001), but the primary digital color images were mostly equivalent to the scanned digitized images of the same slides. Color seemingly added little to grayscale optic disc images, except that peripapillary atrophy was best seen in color (P < 0.0001); both the nerve fiber layer (P < 0.0001) and the paths of blood vessels on the optic disc (P < 0.0001) were best seen in grayscale. The preference for digital over film images was maintained under monoscopic viewing conditions. Conclusions. Digital stereo optic disc images are useful for evaluating the optic disc in glaucoma and allow the application of advanced image processing applications. Grayscale images, by providing luminance distinct from color, may be informative for assessing certain features. PMID:20505199

  10. Stereo Refractive Imaging of Breaking Free-Surface Waves in the Surf Zone

    NASA Astrophysics Data System (ADS)

    Mandel, Tracy; Weitzman, Joel; Koseff, Jeffrey; Environmental Fluid Mechanics Laboratory Team

    2014-11-01

    Ocean waves drive the evolution of coastlines across the globe. Wave breaking suspends sediments, while wave run-up, run-down, and the undertow transport this sediment across the shore. Complex bathymetric features and natural biotic communities can influence all of these dynamics, and provide protection against erosion and flooding. However, our knowledge of the exact mechanisms by which this occurs, and how they can be modeled and parameterized, is limited. We have conducted a series of controlled laboratory experiments with the goal of elucidating these details. These have focused on quantifying the spatially-varying characteristics of breaking waves and developing more accurate techniques for measuring and predicting wave setup, setdown, and run-up. Using dynamic refraction stereo imaging, data on free-surface slope and height can be obtained over an entire plane. Wave evolution is thus obtained with high spatial precision. These surface features are compared with measures of instantaneous turbulence and mean currents within the water column. We then use this newly-developed ability to resolve three-dimensional surface features over a canopy of seagrass mimics, in order to validate theoretical formulations of wave-vegetation interactions in the surf zone.

  11. Color-encoded distance for interactive focus positioning in laser microsurgery

    NASA Astrophysics Data System (ADS)

    Schoob, Andreas; Kundrat, Dennis; Lekon, Stefan; Kahrs, Lüder A.; Ortmaier, Tobias

    2016-08-01

    This paper presents a real-time method for interactive focus positioning in laser microsurgery. Registration of stereo vision and a surgical laser is performed in order to combine surgical scene and laser workspace information. In particular, stereo image data is processed to three-dimensionally reconstruct observed tissue surface as well as to compute and to highlight its intersection with the laser focal range. Regarding the surgical live view, three augmented reality concepts are presented providing visual feedback during manual focus positioning. A user study is performed and results are discussed with respect to accuracy and task completion time. Especially when using color-encoded distance superimposed to the live view, target positioning with sub-millimeter accuracy can be achieved in a few seconds. Finally, transfer to an intraoperative scenario with endoscopic human in vivo and cadaver images is discussed demonstrating the applicability of the image overlay in laser microsurgery.

  12. An Approach to 3d Digital Modeling of Surfaces with Poor Texture by Range Imaging Techniques. `SHAPE from Stereo' VS. `SHAPE from Silhouette' in Digitizing Jorge Oteiza's Sculptures

    NASA Astrophysics Data System (ADS)

    García Fernández, J.; Álvaro Tordesillas, A.; Barba, S.

    2015-02-01

    Despite the eminent development of digital range imaging techniques, difficulties persist in the virtualization of objects with poor radiometric information; in other words, objects consisting of homogeneous colours (totally white, black, etc.), repetitive patterns, translucence, or materials with specular reflection. This is the case for much of Jorge Oteiza's work, particularly the sculpture collection of the Museo Fundación Jorge Oteiza (Navarra, Spain). The present study intends to analyse and assess the performance of two digital 3D-modeling methods based on imaging techniques, Shape from Silhouette and Shape from Stereo, when applied to cultural heritage in the singular cases determined by the radiometric characteristics mentioned above. The text also proposes the definition of a documentation workflow and presents the results of its application to the collection of sculptures created by Oteiza.

  13. Animation of Panorama of Phoenix Landing Area Looking Southeast

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site] Click on image for animation

    This is an animation of panoramic images taken by NASA's Phoenix Mars Lander's Surface Stereo Imager on Sol 15 (June 9, 2008), the 15th Martian day after landing. The panorama looks to the southeast and shows rocks casting shadows and polygons on the surface; toward the horizon, Phoenix's backshell gleams in the distance.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  14. The Information Available to a Moving Observer on Shape with Unknown, Isotropic BRDFs.

    PubMed

    Chandraker, Manmohan

    2016-07-01

    Psychophysical studies show motion cues inform about shape even with unknown reflectance. Recent works in computer vision have considered shape recovery for an object of unknown BRDF using light source or object motions. This paper proposes a theory that addresses the remaining problem of determining shape from the (small or differential) motion of the camera, for unknown isotropic BRDFs. Our theory derives a differential stereo relation that relates camera motion to surface depth, which generalizes traditional Lambertian assumptions. Under orthographic projection, we show differential stereo may not determine shape for general BRDFs, but suffices to yield an invariant for several restricted (still unknown) BRDFs exhibited by common materials. For the perspective case, we show that differential stereo yields the surface depth for unknown isotropic BRDF and unknown directional lighting, while additional constraints are obtained with restrictions on the BRDF or lighting. The limits imposed by our theory are intrinsic to the shape recovery problem and independent of choice of reconstruction method. We also illustrate trends shared by theories on shape from differential motion of light source, object or camera, to relate the hardness of surface reconstruction to the complexity of imaging setup.

  15. Research on the feature set construction method for spherical stereo vision

    NASA Astrophysics Data System (ADS)

    Zhu, Junchao; Wan, Li; Röning, Juha; Feng, Weijia

    2015-01-01

    Spherical stereo vision is a stereo vision system built with fish-eye lenses, for which the stereo algorithms must conform to the spherical model. Epipolar geometry is the theory that describes the relationship between the two imaging planes of the cameras in a stereo vision system based on the perspective projection model. However, an epipolar line in an uncorrected fish-eye image is not a line but an arc intersecting at the poles: the polar curve. In this paper, the theory of nonlinear epipolar geometry is explored and a method of nonlinear epipolar rectification is proposed to eliminate the vertical parallax between two fish-eye images. Maximally Stable Extremal Regions (MSER) uses grayscale as the independent variable and takes the local extrema of the area variation as the detection result. The literature demonstrates that MSER depends only on the gray variations of the image, not on local structural characteristics or image resolution. Here, MSER is combined with the nonlinear epipolar rectification method proposed in this paper: the intersection of the rectified epipolar curve and the corresponding MSER region is taken as the feature set for spherical stereo vision. Experiments show that this study achieved the expected results.

  16. Virtual-stereo fringe reflection technique for specular free-form surface testing

    NASA Astrophysics Data System (ADS)

    Ma, Suodong; Li, Bo

    2016-11-01

    Due to their excellent ability to improve the performance of optical systems, free-form optics have attracted extensive interest in many fields, e.g. the optical design of astronomical telescopes, laser beam expanders, spectral imagers, etc. However, compared with traditional simple optics, testing this kind of optic is usually more complex and difficult, which has long been a major barrier to the manufacture and application of these optics. Fortunately, owing to the rapid development of electronic devices and computer vision technology, the fringe reflection technique (FRT), with its advantages of simple system structure, high measurement accuracy and large dynamic range, is becoming a powerful tool for specular free-form surface testing. To obtain absolute surface shape distributions of test objects, two or more cameras are often required in the conventional FRT, which makes the system structure more complex and the measurement cost much higher; high-precision synchronization between the cameras is also a troublesome issue. To overcome these drawbacks, a virtual-stereo FRT for specular free-form surface testing is put forward in this paper. It achieves absolute profiles with the help of only a single biprism and one camera, avoiding the problems of stereo FRT based on binocular or multi-ocular cameras. Preliminary experimental results demonstrate the feasibility of the proposed technique.

  17. Pluto's Surface in Detail

    NASA Image and Video Library

    2017-07-14

    On July 14, 2015, NASA's New Horizons spacecraft made its historic flight through the Pluto system. This detailed, high-quality global mosaic of Pluto was assembled from nearly all of the highest-resolution images obtained by the Long-Range Reconnaissance Imager (LORRI) and the Multispectral Visible Imaging Camera (MVIC) on New Horizons. The mosaic is the most detailed and comprehensive global view yet of Pluto's surface using New Horizons data. It includes topography data of the hemisphere visible to New Horizons during the spacecraft's closest approach. The topography is derived from digital stereo-image mapping tools that measure the parallax -- or the difference in the apparent relative positions -- of features on the surface obtained at different viewing angles during the encounter. Scientists use these parallax displacements of high and low terrain to estimate landform heights. The global mosaic has been overlain with transparent, colorized topography data wherever stereo data is available on the surface. Terrain south of about 30°S was in darkness leading up to and during the flyby, and is therefore shown in black. Examples of large-scale topographic features on Pluto include the vast expanse of very flat, low-elevation nitrogen ice plains of Sputnik Planitia ("P") -- note that all feature names in the Pluto system are informal -- and, on the eastern edge of the encounter hemisphere, the aligned, high-elevation ridges of Tartarus Dorsa ("T") that host the enigmatic bladed terrain, mountains, possible cryovolcanos, canyons, craters and more. https://photojournal.jpl.nasa.gov/catalog/PIA21861
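    The parallax-to-height idea described above can be illustrated with the simplest geometric relation: a feature of height h above the datum, viewed from two directions separated by a convergence angle θ, appears displaced on the ground by roughly h·tan θ, so h ≈ Δp / tan θ. This toy formula and its numbers are illustrative assumptions, not the New Horizons stereo pipeline.

    ```python
    import math

    def height_from_parallax(dp_m, convergence_deg):
        """Estimate terrain height from the apparent ground displacement
        dp_m (metres) of a feature between two views whose lines of sight
        differ by convergence_deg degrees: h = dp / tan(theta)."""
        return dp_m / math.tan(math.radians(convergence_deg))

    # A 500 m apparent shift seen with a 20-degree stereo convergence angle:
    h = height_from_parallax(dp_m=500.0, convergence_deg=20.0)
    print(round(h))  # roughly 1.4 km of relief
    ```

    Larger convergence angles make a given relief produce a larger, easier-to-measure displacement, which is why encounter images taken at well-separated viewing angles are the ones used for topography.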

  18. The Sun in STEREO

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Parallax gives depth to life. Simultaneous viewing from slightly different vantage points makes binocular humans superior to monocular cyclopes, and fixes us in the third dimension of the Universe. We've been stunned by 3-d images of Venus and Mars (along with more familiar views of earth). Now astronomers plan to give us the best view of all, 3-d images of the dynamic Sun. That's one of the prime goals of NASA's Solar Terrestrial Relations Observatories, also known as STEREO. STEREO is a pair of spacecraft observatories, one placed in orbit in front of earth, and one to be placed in an earth-trailing orbit. Simultaneous observations of the Sun with the two STEREO spacecraft will provide extraordinary 3-d views of all types of solar activity, especially the dramatic events called coronal mass ejections which send high energy particles from the outer solar atmosphere hurtling towards earth. The image above is the first image of the Sun from the two STEREO spacecraft, an extreme ultraviolet shot of the Sun's million-degree corona, taken by the Extreme Ultraviolet Imager on the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) instrument package. STEREO's first 3-d solar images should be available in April if all goes well. Put on your red and blue glasses!

  19. Transparent volume imaging

    NASA Astrophysics Data System (ADS)

    Wixson, Steve E.

    1990-07-01

    Transparent volume imaging began with the stereo x-ray in 1895 and ended for most investigators when radiation safety concerns eliminated the second view. Today, similar images can be generated by computer without safety hazards, providing improved perception and new means of image quantification. A volumetric workstation is under development based on an operational prototype. The workstation consists of multiple symbolic and numeric processors; a binocular stereo color display generator with large image memory and liquid crystal shutter; voice input and output; a 3D pointer that uses projection lenses so that structures in 3-space can be touched directly; 3D hard copy using vectograph and lenticular printing; and presentation facilities using stereo 35mm slide and stereo video tape projection. Volumetric software includes a volume window manager, Mayo Clinic's Analyze program and our Digital Stereo Microscope (DSM) algorithms. The DSM uses stereo x-ray-like projections, rapidly oscillating motion and focal depth cues so that detail can be studied in the spatial context of the entire data set. Focal depth cues are generated with a lens-and-aperture algorithm that produces a plane of sharp focus; multiple stereo pairs, each with a different plane of sharp focus, are generated and stored in the large memory for interactive selection using a physical or symbolic depth selector. More recent work is studying nonlinear focusing. Psychophysical studies are underway to understand how people perceive images on a volumetric display and how accurately three-dimensional structures can be quantitated from these displays.

  20. Experience With Bayesian Image Based Surface Modeling

    NASA Technical Reports Server (NTRS)

    Stutz, John C.

    2005-01-01

    Bayesian surface modeling from images requires modeling both the surface and the image generation process, in order to optimize the models by comparing actual and generated images. Thus it differs greatly, both conceptually and in computational difficulty, from conventional stereo surface recovery techniques. But it offers the possibility of using any number of images, taken under quite different conditions, and by different instruments that provide independent and often complementary information, to generate a single surface model that fuses all available information. I describe an implemented system, with a brief introduction to the underlying mathematical models and the compromises made for computational efficiency. I describe successes and failures achieved on actual imagery, where we went wrong and what we did right, and how our approach could be improved. Lastly I discuss how the same approach can be extended to distinct types of instruments, to achieve true sensor fusion.

  1. Field Geology/Processes

    NASA Technical Reports Server (NTRS)

    Allen, Carlton; Jakes, Petr; Jaumann, Ralf; Marshall, John; Moses, Stewart; Ryder, Graham; Saunders, Stephen; Singer, Robert

    1996-01-01

    The field geology/process group examined the basic operations of a terrestrial field geologist and the manner in which these operations could be transferred to a planetary lander. Four basic requirements for robotic field geology were determined: geologic content; surface vision; mobility; and manipulation. Geologic content requires a combination of orbital and descent imaging. Surface vision requirements include range, resolution, stereo, and multispectral imaging. The minimum mobility for useful field geology depends on the scale of orbital imagery. Manipulation requirements include exposing unweathered surfaces, screening samples, and bringing samples in contact with analytical instruments. To support these requirements, several advanced capabilities for future development are recommended. Capabilities include near-infrared reflectance spectroscopy, hyper-spectral imaging, multispectral microscopy, artificial intelligence in support of imaging, x ray diffraction, x ray fluorescence, and rock chipping.

  2. Acquisition of stereo panoramas for display in VR environments

    NASA Astrophysics Data System (ADS)

    Ainsworth, Richard A.; Sandin, Daniel J.; Schulze, Jurgen P.; Prudhomme, Andrew; DeFanti, Thomas A.; Srinivasan, Madhusudhanan

    2011-03-01

    Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.

  3. JAVA Stereo Display Toolkit

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2008-01-01

    This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It supports both anaglyph and special stereo hardware through the same API (application-program interface), and can simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that simply accomplishes the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, a 3D cursor, or overlays, all of which can be built using this toolkit.
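    The anaglyph color-stereo simulation described above (red band from the left image, green/blue bands from the right) is language-agnostic; the following NumPy fragment is a minimal sketch with made-up 1×1 "images", not the toolkit's actual Java/OpenGL rendering path:

```python
import numpy as np

def color_anaglyph(left_rgb, right_rgb):
    """Simulated color stereo in anaglyph mode: red band from the left
    image combined with the green/blue bands of the right image."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]     # red channel from the left eye
    out[..., 1:] = right_rgb[..., 1:]  # green/blue channels from the right eye
    return out

# made-up 1x1 'images' (RGB byte triples)
left = np.array([[[200, 10, 20]]], dtype=np.uint8)
right = np.array([[[30, 120, 130]]], dtype=np.uint8)
print(color_anaglyph(left, right).tolist())  # [[[200, 120, 130]]]
```

    Viewed through red/blue glasses, each eye then receives the band(s) taken from its own image.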

  4. Opportunity's Surroundings on Sol 1818 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11846 (figures removed for brevity; see original site).

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,818th Martian day, or sol, of Opportunity's surface mission (March 5, 2009). South is at the center; north at both ends.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 80.3 meters (263 feet) southward earlier on that sol. Tracks from the drive recede northward in this view.

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  5. Phoenix Lander on Mars with Surrounding Terrain, Vertical Projection

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This view is a vertical projection that combines more than 500 exposures taken by the Surface Stereo Imager camera on NASA's Mars Phoenix Lander and projects them as if looking down from above.

    The black circle on the spacecraft is where the camera itself is mounted on the lander, out of view in images taken by the camera. North is toward the top of the image. The height of the lander's meteorology mast, extending toward the southwest, appears exaggerated because that mast is taller than the camera mast.

    This view in approximately true color covers an area about 30 meters by 30 meters (about 100 feet by 100 feet). The landing site is at 68.22 degrees north latitude, 234.25 degrees east longitude on Mars.

    The ground surface around the lander has polygonal patterning similar to patterns in permafrost areas on Earth.

    This view comprises more than 100 different Surface Stereo Imager pointings, with images taken through three different filters at each pointing. The images were taken throughout the period from the 13th Martian day, or sol, after landing to the 47th sol (June 5 through July 12, 2008). The lander's Robotic Arm is cut off in this mosaic view because component images were taken when the arm was out of the frame.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  6. Offshore remote sensing of the ocean by stereo vision systems

    NASA Astrophysics Data System (ADS)

    Gallego, Guillermo; Shih, Ping-Chang; Benetazzo, Alvise; Yezzi, Anthony; Fedele, Francesco

    2014-05-01

    In recent years, remote sensing imaging systems for the measurement of oceanic sea states have attracted renewed attention. Imaging technology is economical and non-invasive, and it enables a better understanding of the space-time dynamics of ocean waves over an area rather than at the selected point locations of previous monitoring methods (buoys, wave gauges, etc.). We present recent progress in the space-time measurement of ocean waves using stereo vision systems on offshore platforms, focusing on sea states with wavelengths in the range of 0.01 m to 1 m. Both traditional disparity-based systems and modern elevation-based ones are presented in a variational optimization framework: the main idea is to pose the stereoscopic reconstruction problem of the ocean surface in a variational setting and to design an energy functional whose minimizer is the desired temporal sequence of wave heights. The functional combines photometric observations as well as spatial and temporal smoothness priors. Disparity methods estimate the disparity between images as an intermediate step toward retrieving the depth of the waves with respect to the cameras, whereas elevation methods estimate the ocean surface displacements directly in 3-D space. Both techniques are used to measure ocean waves from real data collected at offshore platforms in the Black Sea (Crimean Peninsula, Ukraine) and the Northern Adriatic Sea (Venice coast, Italy). The statistical and spectral properties of the resulting observed waves are then analyzed. We show the advantages and disadvantages of the presented stereo vision systems and discuss future lines of research to improve their performance on critical issues such as the robustness of the camera calibration against undesired variations of the camera parameters, or the time it takes to retrieve ocean wave measurements from the stereo videos, which are very large datasets that must be processed efficiently to be of practical use. Multiresolution and short-time approaches would improve the efficiency and scalability of the techniques so that wave displacements can be obtained in feasible times.
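    The disparity-based route from matched pixels to wave depth rests on the standard pinhole stereo relation Z = fB/d. A minimal sketch (the focal length and baseline below are assumed values, not the authors' calibrated rig):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation Z = f * B / d: focal length f in pixels,
    baseline B in metres, disparity d in pixels, depth Z in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# assumed camera: 1200 px focal length, 2.5 m baseline between the two cameras
print(depth_from_disparity(60.0, 1200.0, 2.5))  # 50.0
```

    Elevation-based methods skip this intermediate disparity map and optimize the surface height field directly.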

  7. Stereo matching image processing by synthesized color and the characteristic area by the synthesized color

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Mutoh, Eiichiro; Kumagai, Hideo

    2014-09-01

    We have developed a stereo matching image processing method based on synthesized color, and on the corresponding areas sharing that synthesized color, for object ranging and image recognition. Typical images from a pair of stereo imagers may disagree with each other owing to size changes, displaced positions, appearance changes and deformation of characteristic areas. We constructed the synthesized color, and the corresponding color areas with the same synthesized color, to make the stereo matching distinct. The construction proceeds in three steps. The first step makes a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal frequency-density distribution, which fixes the threshold level of the binarization; we used the Daubechies wavelet transformation for the differentiation in this study. The second step derives the synthesized color by averaging the color brightness between binary edge points, alternately in the horizontal and vertical directions; the averaging is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step extracts areas of the same synthesized color by collecting the pixels of that color and grouping them by 4-directional connectivity. The matching areas for stereo matching are determined from the synthesized color areas, and the matching point is the center of gravity of each area. The parallax between a pair of images is then easily derived from these centers of gravity. An experiment on a toy soccer ball showed that stereo matching by the synthesized color technique is simple and effective.
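    The final parallax computation from the centers of gravity of matched areas can be sketched as follows; the toy binary masks below stand in for the extracted synthesized-color areas:

```python
import numpy as np

def centroid(mask):
    """Center of gravity (row, col) of a binary region mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def parallax(mask_left, mask_right):
    """Horizontal parallax between matched color areas:
    difference of the centroids' column coordinates."""
    return centroid(mask_left)[1] - centroid(mask_right)[1]

# toy 5x5 masks: the same 2x2 region, shifted right by 2 pixels in the left image
left = np.zeros((5, 5), bool); left[1:3, 3:5] = True
right = np.zeros((5, 5), bool); right[1:3, 1:3] = True
print(parallax(left, right))  # 2.0
```

    Because the centroid averages over the whole area, the parallax estimate is robust to small boundary deformations of the matched regions.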

  8. The PRo3D View Planner - interactive simulation of Mars rover camera views to optimise capturing parameters

    NASA Astrophysics Data System (ADS)

    Traxler, Christoph; Ortner, Thomas; Hesina, Gerd; Barnes, Robert; Gupta, Sanjeev; Paar, Gerhard

    2017-04-01

    High resolution Digital Terrain Models (DTMs) and Digital Outcrop Models (DOMs) are highly useful for geological analysis and mission planning in planetary rover missions. PRo3D, developed as part of the EU-FP7 PRoViDE project, is a 3D viewer in which orbital DTMs and DOMs derived from rover stereo imagery can be rendered in a virtual environment for exploration and analysis. It allows fluent navigation over planetary surface models and provides a variety of measurement and annotation tools to complete an extensive geological interpretation. A key aspect of image collection during planetary rover missions is determining the optimal viewing positions of rover instruments from different positions ('wide baseline stereo'). For the collection of high-quality panoramas and stereo imagery, the visibility of regions of interest from those positions, and the amount of common features shared by each stereo pair or image bundle, are crucial. The creation of a highly accurate and reliable 3D surface of the planetary terrain, in the form of an Ordered Point Cloud (OPC), with a low rate of error and a minimum of artefacts, is greatly enhanced by using images that share many features and sufficient overlap for wide baseline stereo or target selection. To support users in selecting adequate viewpoints, an interactive View Planner was integrated into PRo3D. Users choose from a set of different rovers and their respective instruments; PRo3D supports, for instance, the PanCam instrument of ESA's ExoMars 2020 rover mission and the Mastcam-Z camera of NASA's Mars2020 mission. The View Planner uses a DTM obtained from orbiter imagery, which can also be complemented with rover-derived DOMs as the mission progresses. The selected rover is placed onto a position on the terrain - interactively or using the current rover pose as known from the mission. 
    The rover's base polygon and its local coordinate axes, and the chosen instrument's up and forward vectors, are visualised. The parameters of the instrument's pan and tilt unit (PTU) can be altered via the user interface, or alternatively calculated by selecting a target point on the visualised DTM. In the 3D view, the region of the planetary surface visible under these settings and the camera field of view is highlighted with a red border, representing the instrument's footprint. The camera view is simulated and rendered in a separate window, and PTU parameters can be adjusted interactively, allowing viewpoints, directions, and the expected image to be visualised in real time so that users can fine-tune these settings. In this way, ideal viewpoints and PTU settings for various rover models and instruments can be defined efficiently, resulting in optimal imagery of the regions of interest.

  9. Deep 'Stone Soup' Trenching by Phoenix (Stereo)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Digging by NASA's Phoenix Mars Lander on Aug. 23, 2008, during the 88th sol (Martian day) since landing, reached a depth about three times greater than in any trench Phoenix had previously excavated. The deep trench, informally called 'Stone Soup', is at the borderline between two of the polygon-shaped hummocks that characterize the arctic plain where Phoenix landed.

    Stone Soup is in the center foreground of this stereo view, which appears three dimensional when seen through red-blue glasses. The view combines left-eye and right-eye images taken by the lander's Surface Stereo Imager on Sol 88 after the day's digging. The trench is about 25 centimeters (10 inches) wide and about 18 centimeters (7 inches) deep.

    When digging trenches near polygon centers, Phoenix has hit a layer of icy soil, as hard as concrete, about 5 centimeters or 2 inches beneath the ground surface. In the Stone Soup trench at a polygon margin, the digging has not yet hit an icy layer like that.

    Stone Soup is toward the left, or west, end of the robotic arm's work area on the north side of the lander.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  10. Dense GPU-enhanced surface reconstruction from stereo endoscopic images for intraoperative registration.

    PubMed

    Rohl, Sebastian; Bodenstedt, Sebastian; Suwelack, Stefan; Dillmann, Rudiger; Speidel, Stefanie; Kenngott, Hannes; Muller-Stich, Beat P

    2012-03-01

    In laparoscopic surgery, soft tissue deformations substantially change the surgical site, thus impeding the use of preoperative planning during intraoperative navigation. Extracting depth information from endoscopic images and building a surface model of the surgical field of view is one way to represent this constantly deforming environment. The information can then be used for intraoperative registration. Stereo reconstruction is a typical problem within computer vision. However, most of the available methods do not fulfill the specific requirements of a minimally invasive setting, such as the need for real-time performance, view-dependent specular reflections, and large curved areas with partly homogeneous or periodic textures and occlusions. In this paper, the authors present an approach toward intraoperative surface reconstruction based on stereo endoscopic images. The authors describe their answer to this problem through correspondence analysis, disparity correction and refinement, 3D reconstruction, point cloud smoothing and meshing. Real-time performance is achieved by implementing the algorithms on the GPU. The authors also present a new hybrid CPU-GPU algorithm that unifies the advantages of the CPU and the GPU versions. In a comprehensive evaluation using in vivo data, in silico data from the literature, and virtual data from a newly developed simulation environment, the CPU, GPU, and hybrid CPU-GPU versions of the surface reconstruction are compared to a CPU and a GPU algorithm from the literature. The recommended approach toward intraoperative surface reconstruction can be conducted in real time depending on the image resolution (20 fps for the GPU and 14 fps for the hybrid CPU-GPU version at a resolution of 640 × 480). It is robust to homogeneous regions without texture, large image changes, noise and errors from camera calibration, and it reconstructs the surface down to sub-millimeter accuracy. 
    In all the experiments within the simulation environment, the mean distance to ground-truth data is between 0.05 and 0.6 mm for the hybrid CPU-GPU version, which clearly outperforms its CPU and GPU counterparts (mean distance reductions of 26% and 45%, respectively). The recommended approach for surface reconstruction is fast, robust, and accurate. It can represent changes in the intraoperative environment and can be used to adapt a preoperative model within the surgical site by registration of these two models.
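    The correspondence-analysis step can be illustrated with a brute-force SAD block matcher on rectified images. This didactic CPU sketch is far simpler (and far slower) than the paper's real-time GPU implementation:

```python
import numpy as np

def block_match_disparity(left, right, block=3, max_disp=8):
    """Brute-force SAD block matching along scanlines of rectified images.
    For each left-image patch, search right-image patches at disparities
    0..max_disp and keep the one with the smallest sum of absolute differences."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = None, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# synthetic rectified pair: a horizontal intensity ramp shifted by 2 pixels
left_img = np.tile(np.arange(16.0), (8, 1))
right_img = left_img + 2.0
print(block_match_disparity(left_img, right_img)[4, 8])  # 2
```

    The paper's pipeline adds disparity correction, refinement and smoothing on top of such a raw correspondence search.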

  11. Effects of Orbit and Pointing Geometry of a Spaceborne Formation for Monostatic-Bistatic Radargrammetry on Terrain Elevation Measurement Accuracy

    PubMed Central

    Renga, Alfredo; Moccia, Antonio

    2009-01-01

    During the last decade a methodology for the reconstruction of surface relief from Synthetic Aperture Radar (SAR) measurements - SAR interferometry - has become a standard. Techniques developed earlier, such as stereo-radargrammetry, have been tried from space only in very limited geometries and time series, and hence branded as less accurate. However, novel formation-flying configurations achievable by modern spacecraft allow SAR missions to produce pairs of monostatic-bistatic images gathered simultaneously, with programmed looking angles. Hence it is possible to achieve large antenna separations, adequate for exploiting the stereoscopic effect to the utmost, and to make time decorrelation negligible, a strong limiting factor for repeat-pass stereo-radargrammetric techniques. This paper reports on the design of a monostatic-bistatic mission, in terms of orbit and pointing geometry, taking into account present-generation SAR and technology for accurate relative navigation. The performance of different methods for monostatic-bistatic stereo-radargrammetry is then evaluated, showing that the local surface relief can be determined with metric accuracy over a wide range of Earth latitudes. PMID:22389594
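    The stereoscopic effect exploited here follows the classical radargrammetric parallax relation between height and the difference of incidence angles. The sketch below uses the textbook same-side/opposite-side formulas, not the authors' monostatic-bistatic formulation; the incidence angles are illustrative:

```python
import math

def radargrammetric_height(parallax_m, inc1_deg, inc2_deg, same_side=True):
    """Terrain height from the measured ground-range parallax of a stereo
    SAR pair (classical radargrammetric relations):
      same-side:      p = h * (cot i1 - cot i2)
      opposite-side:  p = h * (cot i1 + cot i2)"""
    c1 = 1.0 / math.tan(math.radians(inc1_deg))
    c2 = 1.0 / math.tan(math.radians(inc2_deg))
    denom = (c1 - c2) if same_side else (c1 + c2)
    return parallax_m / denom

# round trip: the parallax produced by a 100 m feature at 25/45 deg incidence
c1 = 1.0 / math.tan(math.radians(25.0))
c2 = 1.0 / math.tan(math.radians(45.0))
print(round(radargrammetric_height(100.0 * (c1 - c2), 25.0, 45.0), 6))  # 100.0
```

    Larger angular separation increases the denominator and hence the height sensitivity per metre of parallax, which is why large antenna separations matter.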

  12. Change detection on LOD 2 building models with very high resolution spaceborne stereo imagery

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun

    2014-10-01

    Due to the fast development of the urban environment, the need for efficient maintenance and updating of 3D building models is ever increasing. Change detection is an essential step to spot changed areas for data (map/3D model) updating and urban monitoring. Traditional methods based on 2D images are no longer suitable for change detection at building scale, owing to the increased spectral variability of building roofs and the larger perspective distortion of very high resolution (VHR) imagery. Change detection in 3D is increasingly being investigated using airborne laser scanning data or matched Digital Surface Models (DSM), but few studies have addressed change detection on 3D city models with VHR images, which is more informative but also more complicated. This is because the 3D models are abstracted geometric representations of the urban reality, while the VHR images record everything. In this paper, a novel method is proposed to detect changes directly on LOD (Level of Detail) 2 building models with VHR spaceborne stereo images from a different date, with particular focus on addressing the special characteristics of the 3D models. In the first step, the 3D building models are projected onto a raster grid, encoded with building objects, terrain objects, and planar faces; the DSM is extracted from the stereo imagery by hierarchical semi-global matching (SGM). In the second step, a multi-channel change indicator is extracted between the 3D models and the stereo images, considering the inherent geometric consistency (IGC), height difference, and texture similarity for each planar face. Each channel of the indicator is then clustered with a Self-Organizing Map (SOM), and "change", "non-change" and "uncertain change" statuses are labeled through a voting strategy. The "uncertain changes" are then resolved with a Markov Random Field (MRF) analysis considering the geometric relationships between faces. 
    In the third step, buildings are extracted by combining the multispectral images and the DSM using morphological operators, and new buildings are determined by excluding the verified unchanged buildings from the second step. Both a synthetic experiment with Worldview-2 stereo imagery and a real experiment with IKONOS stereo imagery are carried out to demonstrate the effectiveness of the proposed method. It is shown that the proposed method can be applied as an effective way to monitor building changes and to update 3D models from one epoch to the next.
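    The per-face voting over the clustered channel labels can be sketched as a simple majority vote; this is a simplified stand-in for the paper's SOM clustering and voting strategy, with the labels and threshold chosen for illustration:

```python
from collections import Counter

def vote_status(channel_labels, min_agree=2):
    """Fuse per-channel change labels for one planar face by majority vote;
    agreement below the threshold yields the 'uncertain change' status,
    which would then be resolved by the MRF analysis."""
    label, n = Counter(channel_labels).most_common(1)[0]
    return label if n >= min_agree else "uncertain change"

print(vote_status(["change", "change", "non-change"]))            # change
print(vote_status(["change", "non-change", "uncertain change"]))  # uncertain change
```

    Keeping an explicit "uncertain" class defers ambiguous faces to the geometric MRF stage instead of forcing a hard decision per channel.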

  13. Generation of High Resolution Global DSM from ALOS PRISM

    NASA Astrophysics Data System (ADS)

    Takaku, J.; Tadono, T.; Tsutsui, K.

    2014-04-01

    Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM), one of the onboard sensors carried on the Advanced Land Observing Satellite (ALOS), was designed to generate worldwide topographic data through optical stereoscopic observation. The sensor consists of three independent panchromatic radiometers viewing forward, nadir, and backward at 2.5 m ground resolution, producing a triplet stereoscopic image along its track. The sensor observed a huge amount of stereo imagery all over the world during the mission life of the satellite from 2006 through 2011. We have semi-automatically processed Digital Surface Model (DSM) data from the image archives in some limited areas. The height accuracy of the dataset was estimated at less than 5 m (rms) from evaluation against ground control points (GCPs) or reference DSMs derived from Light Detection and Ranging (LiDAR). We then decided to process global DSM datasets from all available archives of PRISM stereo images by the end of March 2016. This paper briefly reports on the latest processing algorithms for the global DSM datasets as well as preliminary results on some test sites. The accuracies and error characteristics of the datasets are analyzed and discussed over various test sites by comparison with existing global datasets such as Ice, Cloud, and land Elevation Satellite (ICESat) data and Shuttle Radar Topography Mission (SRTM) data, as well as the GCPs and the reference airborne LiDAR/DSM data.
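    The quoted height accuracy ("less than 5 m (rms)") is the root-mean-square difference between DSM heights and reference elevations. A minimal sketch with made-up checkpoint values:

```python
import numpy as np

def rms_error(dsm_heights, reference_heights):
    """Root-mean-square height error of a DSM against reference elevations
    (e.g. GCPs or airborne LiDAR)."""
    diff = np.asarray(dsm_heights, float) - np.asarray(reference_heights, float)
    return float(np.sqrt(np.mean(diff ** 2)))

# three checkpoints with errors of +1, -2 and +2 m -> rms = sqrt(3) m
print(round(rms_error([101.0, 98.0, 102.0], [100.0, 100.0, 100.0]), 3))  # 1.732
```

    Unlike the mean error, the rms figure penalizes large outliers quadratically, which is why it is the standard DSM accuracy metric.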

  14. HRSC: High resolution stereo camera

    USGS Publications Warehouse

    Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W.; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.

    2009-01-01

    The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and high-level data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.

  15. Stereographic observations from geosynchronous satellites - An important new tool for the atmospheric sciences

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.

    1981-01-01

    Observations of cloud geometry using scan-synchronized stereo geostationary satellites, with images of approximately 0.5 km horizontal spatial resolution and temporal resolution of up to 3 min, are presented. The stereo technique does not require a cloud with known emissivity to be in equilibrium with an atmosphere with a known vertical temperature profile. It is shown that absolute accuracies of about 0.5 km are possible. Qualitative and quantitative representations of atmospheric dynamics were shown by remapping, display, and stereo image analysis on an interactive computer/imaging system. Applications of stereo observations include: (1) cloud-top height contours of severe thunderstorms and hurricanes, (2) cloud top and base height estimates for cloud-wind height assignment, (3) cloud growth measurements for severe-thunderstorm overshooting towers, (4) atmospheric temperature from stereo heights and infrared cloud-top temperatures, and (5) cloud emissivity estimation. Recommendations are given for future improvements in stereo observations, including a third GOES satellite, operational scan synchronization of all GOES satellites, and higher-resolution sensors.

  16. Image matching for digital close-range stereo photogrammetry based on constraints of Delaunay triangulated network and epipolar-line

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.

    2006-10-01

    In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry because changes of gray-scale or texture are not obvious in close-range stereo images. The main shortcoming of traditional matching methods is that the geometric information of matching points is not fully used, which leads to wrong matching results in regions with poor texture. To fully use the geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper, considering the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm makes three improvements. Firstly, shape factor, fuzzy mathematics and gray-scale projection are introduced into the design of a synthetic matching measure. Secondly, the topological connection relations of matching points in the Delaunay triangulated network and the epipolar line are used to decide the matching order and to narrow the search scope for the conjugate point of each matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least-squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental results show that the algorithm has higher matching speed and accuracy than a pyramid image matching algorithm based on gray-scale correlation.
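    The subpixel-level refinement stage can be illustrated with the standard three-point parabolic interpolation of matching scores, a common stand-in for the constrained least-squares matching used in the paper (the score values below are made up):

```python
def subpixel_peak(c_left, c_peak, c_right):
    """Parabolic interpolation of three matching scores around the best
    integer match; returns a subpixel offset in (-0.5, 0.5) to add to the
    integer match position along the epipolar line."""
    denom = c_left - 2.0 * c_peak + c_right
    return 0.0 if denom == 0.0 else 0.5 * (c_left - c_right) / denom

# correlation scores at offsets d-1, d, d+1: the true peak lies toward d+1
print(subpixel_peak(0.80, 0.95, 0.90))  # ~0.25
```

    Fitting a parabola to the discrete score profile recovers the continuous maximum, which is the same idea the least-squares adjustment realizes with a full geometric model.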

  17. Stanford automatic photogrammetry research

    NASA Technical Reports Server (NTRS)

    Quam, L. H.; Hannah, M. J.

    1974-01-01

    A feasibility study on the problem of computer automated aerial/orbital photogrammetry is documented. The techniques investigated were based on correlation matching of small areas in digitized pairs of stereo images taken from high altitude or planetary orbit, with the objective of deriving a 3-dimensional model for the surface of a planet.
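    Correlation matching of small areas, as investigated in that study, typically uses normalized cross-correlation, which is invariant to brightness offset and contrast gain between the two images. A minimal sketch:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches, the
    classic similarity measure for area-based stereo matching."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

patch = np.array([[1.0, 2.0], [3.0, 4.0]])
print(ncc(patch, patch + 10.0))  # 1.0  (invariant to brightness offset)
print(ncc(patch, 2.0 * patch))   # 1.0  (invariant to contrast gain)
```

    Scanning such a measure over candidate positions in the second image and keeping the maximum yields the conjugate point, from which elevation follows by triangulation.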

  18. Solar Eclipse, STEREO Style

    NASA Technical Reports Server (NTRS)

    2007-01-01

    There was a transit of the Moon across the face of the Sun, but it could not be seen from Earth. This sight was visible only from the STEREO-B spacecraft in its orbit about the Sun, trailing behind the Earth. NASA's STEREO mission consists of two spacecraft launched in October 2006 to study solar storms. The transit started at 1:56 am EST and continued for 12 hours until 1:57 pm EST. STEREO-B is currently about 1 million miles from the Earth, 4.4 times farther away from the Moon than we are on Earth. As a result, the Moon appeared 4.4 times smaller than what we are used to. This is still, however, much larger than, say, the planet Venus appeared when it transited the Sun as seen from Earth in 2004. This alignment of STEREO-B and the Moon is not just due to luck: it was arranged with a small tweak to STEREO-B's orbit last December. The transit is quite useful to STEREO scientists for measuring the focus and the amount of scattered light in the STEREO imagers and for determining the pointing of the STEREO coronagraphs. The Sun as it appears in these images, and in each frame of the movie, is a composite of nearly simultaneous images in four different wavelengths of extreme ultraviolet light that were separated into color channels and then recombined with some level of transparency for each.

  19. Characterization of ASTER GDEM Elevation Data over Vegetated Area Compared with Lidar Data

    NASA Technical Reports Server (NTRS)

    Ni, Wenjian; Sun, Guoqing; Ranson, Kenneth J.

    2013-01-01

    Current research based on aerial or spaceborne stereo images with very high resolution (less than 1 meter) has demonstrated that it is possible to derive vegetation height from stereo images. The second version of the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is a state-of-the-art global elevation dataset developed from stereo images. However, the resolution of ASTER stereo images (15 meters) is much coarser than that of aerial stereo images, and the ASTER GDEM is a product compiled from stereo images acquired over 10 years; forest disturbance as well as forest growth is inevitable over such a time span. In this study, the features of ASTER GDEM over vegetated areas under both flat and mountainous conditions were investigated by comparison with lidar data. The factors considered as possibly affecting the extraction of vegetation canopy height include (1) co-registration of the DEMs; (2) spatial resolution of the digital elevation models (DEMs); (3) spatial vegetation structure; and (4) terrain slope. The results show that accurate co-registration between the ASTER GDEM and the National Elevation Dataset (NED) is necessary over mountainous areas. The correlation between ASTER GDEM minus NED and vegetation canopy height is improved from 0.328 to 0.43 by degrading the resolution from 1 arc-second to 5 arc-seconds, and further improved to 0.6 if only homogeneous vegetated areas are considered.
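    Degrading the DEM resolution from 1 arc-second to 5 arc-seconds amounts to block-averaging the elevation grid before correlating the height difference with canopy height (the correlation itself can then be computed with np.corrcoef). A minimal sketch of the block-averaging step, using a tiny synthetic grid:

```python
import numpy as np

def degrade(dem, factor):
    """Degrade a DEM grid by block-averaging factor x factor windows
    (e.g. resampling 1 arc-second cells toward 5 arc-second cells)."""
    h, w = dem.shape
    dem = dem[:h - h % factor, :w - w % factor]  # trim to a multiple of factor
    return dem.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

dem = np.arange(16.0).reshape(4, 4)
print(degrade(dem, 2).tolist())  # [[2.5, 4.5], [10.5, 12.5]]
```

    Averaging suppresses per-pixel matching noise in the GDEM, which is one reason the reported correlation rises at the coarser resolution.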

  20. Integrated Georeferencing of Stereo Image Sequences Captured with a Stereovision Mobile Mapping System - Approaches and Practical Results

    NASA Astrophysics Data System (ADS)

    Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.

    2012-07-01

    Stereovision-based mobile mapping systems enable the efficient capture of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment, which builds the foundation for a wide range of 3D mapping applications and image-based geo web services. Georeferenced stereo images are ideally suited for the 3D mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks and image-based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations, in our case of the imaging sensors, normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed which allow the INS/GNSS-based trajectory to be validated and updated with independently estimated positions during prolonged GNSS signal outages, increasing the georeferencing accuracy to the level required by the project.

  1. Sampling Martian Soil (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site] [figure removed for brevity, see original site] Figure 1, Figure 2

    Scientists were using the Moessbauer spectrometer on NASA's Mars Exploration Rover Spirit when something unexpected happened. The instrument's contact ring had been placed onto the ground as a reference point for placement of another instrument, the alpha particle X-ray spectrometer, for analyzing the soil. After Spirit removed the Moessbauer from the target, the rover's microscopic imager revealed a gap in the imprint left behind in the soil. The gap, about a centimeter wide (less than half an inch), is visible on the left side of this stereo view. Scientists concluded that a small chunk of soil probably adhered to the contact ring on the front surface of the Moessbauer. Before anyone saw that soil may have adhered to the Moessbauer, that instrument was placed to analyze martian dust collected by a magnet on the rover. The team plans to take images to see if any soil is still attached to the Moessbauer. Spirit took these images on the rover's 240th martian day, or sol (Sept. 4, 2004).

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  2. MISR Stereo Imaging Distinguishes Smoke from Cloud

    NASA Technical Reports Server (NTRS)

    2000-01-01

    These views of western Alaska were acquired by MISR on June 25, 2000 during Terra orbit 2775. The images cover an area of about 150 kilometers x 225 kilometers, and have been oriented with north to the left. The left image is from the vertical-viewing (nadir) camera, whereas the right image is a stereo 'anaglyph' that combines data from the forward-viewing 45-degree and 60-degree cameras. This image appears three-dimensional when viewed through red/blue glasses with the red filter over the left eye. It may help to darken the room lights when viewing the image on a computer screen.

    The Yukon River is seen wending its way from upper left to lower right. A forest fire in the Kaiyuh Mountains produced the long smoke plume that originates below and to the right of image center. In the nadir view, the high cirrus clouds at the top of the image and the smoke plume are similar in appearance, and the lack of vertical information makes them hard to differentiate. Viewing the righthand image with stereo glasses, on the other hand, demonstrates that the scene consists of several vertically-stratified layers, including the surface terrain, the smoke, some scattered cumulus clouds, and streaks of high, thin cirrus. This added dimensionality is one of the ways MISR data helps scientists identify and classify various components of terrestrial scenes.
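    The red/blue anaglyph described above can be sketched in a few lines: one camera's view feeds the red channel and the other's the blue (and green) channels, so red-filter-over-left-eye glasses deliver a different view to each eye. A minimal illustration, assuming two co-registered 8-bit grayscale views (toy arrays, not MISR data):

    ```python
    import numpy as np

    def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
        """Combine two grayscale views (H x W, uint8) into a red/blue anaglyph:
        the left view drives the red channel, the right view the green and
        blue channels."""
        if left.shape != right.shape:
            raise ValueError("views must share the same shape")
        rgb = np.stack([left, right, right], axis=-1)  # R from left, G/B from right
        return rgb.astype(np.uint8)

    # toy example with two constant "views"
    left = np.full((4, 4), 200, dtype=np.uint8)
    right = np.full((4, 4), 50, dtype=np.uint8)
    ana = make_anaglyph(left, right)
    ```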

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  3. Cortical surface shift estimation using stereovision and optical flow motion tracking via projection image registration

    PubMed Central

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2014-01-01

    Stereovision is an important intraoperative imaging technique that captures the exposed parenchymal surface noninvasively during open cranial surgery. Estimating cortical surface shift efficiently and accurately is critical to compensate for brain deformation in the operating room (OR). In this study, we present an automatic and robust registration technique based on optical flow (OF) motion tracking to compensate for cortical surface displacement throughout surgery. Stereo images of the cortical surface were acquired at multiple time points after dural opening to reconstruct three-dimensional (3D) texture intensity-encoded cortical surfaces. A local coordinate system was established with its z-axis parallel to the average surface normal direction of the reconstructed cortical surface immediately after dural opening in order to produce two-dimensional (2D) projection images. A dense displacement field between the two projection images was determined directly from OF motion tracking without the need for feature identification or tracking. The starting and end points of the displacement vectors on the two cortical surfaces were then obtained following spatial mapping inversion to produce the full 3D displacement of the exposed cortical surface. We evaluated the technique with images obtained from digital phantoms and 18 surgical cases: 10 involved independent measurements of feature locations acquired with a tracked stylus for accuracy comparisons, and in 4 of the remaining 8, stereo images were acquired at three or more time points during surgery to illustrate utility throughout a procedure. Results from the digital phantom images were very accurate (0.05 pixels). In the 10 surgical cases with independently digitized point locations, the average agreement between feature coordinates derived from the cortical surface reconstructions was 1.7–2.1 mm relative to those determined with the tracked stylus probe. 
The agreement in feature displacement tracking was also comparable to tracked probe data (the difference in displacement magnitude was <1 mm on average). The average magnitude of cortical surface displacement was 7.9 ± 5.7 mm (range 0.3–24.4 mm) across all patient cases, with a displacement component along gravity of 5.2 ± 6.0 mm relative to lateral movement of 2.4 ± 1.6 mm. Thus, our technique appears to be sufficiently accurate and computationally efficient (typically ~15 s) for applications in the OR. PMID:25077845
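    The paper's dense optical-flow tracking operates on 2D projection images. As an illustrative stand-in (not the authors' OF implementation), a single global displacement between two projection images can be recovered by FFT phase correlation, the degenerate case of a dense field that is constant everywhere:

    ```python
    import numpy as np

    def estimate_shift(img0: np.ndarray, img1: np.ndarray):
        """Estimate the integer (dy, dx) translation between two 2D images by
        phase correlation: normalize the cross-power spectrum and locate the
        correlation peak. Assumes img1 is a cyclically shifted copy of img0."""
        F0, F1 = np.fft.fft2(img0), np.fft.fft2(img1)
        cross = F1 * np.conj(F0)
        cross /= np.abs(cross) + 1e-12          # keep only phase information
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # wrap peak coordinates into the signed shift range
        if dy > img0.shape[0] // 2:
            dy -= img0.shape[0]
        if dx > img0.shape[1] // 2:
            dx -= img0.shape[1]
        return int(dy), int(dx)

    # demo: a random "projection image" shifted by (3, -2)
    rng = np.random.default_rng(0)
    img0 = rng.random((32, 32))
    img1 = np.roll(img0, (3, -2), axis=(0, 1))
    shift = estimate_shift(img0, img1)
    ```

    A true dense field, as used in the study, would instead assign a displacement vector per pixel.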

  4. Tri-stereo Pleiades images-derived digital surface models for tectonic geomorphology studies

    NASA Astrophysics Data System (ADS)

    Ferry, Matthieu; Le Roux-Mallouf, Romain; Ritz, Jean-François; Berthet, Théo; Peyret, Michel; Vernant, Philippe; Maréchal, Anaïs; Cattin, Rodolphe; Mazzotti, Stéphane; Poujol, Antoine

    2014-05-01

    Very high resolution digital elevation models are a key component of modern quantitative geomorphology. In parallel to high-precision but time-consuming kinematic GPS and/or total station surveys and dense coverage but expensive LiDAR campaigns, we explore the usability of affordable, flexible, wide coverage digital surface models (DSMs) derived from Pleiades tri-stereo optical images. We present two different approaches to extract DSM from a triplet of images. The first relies on the photogrammetric extraction of 3 DSMs from the 3 possible stereo couples and subsequent merge based on the best correlation score. The second takes advantage of simultaneous correlation over the 3 images to derive a point cloud. We further extract DSM from panchromatic 0.5 m resolution images and multispectral 2 m resolution images to test for correlation and noise and determine optimal correlation window size and achievable resolution. Georeferencing is also assessed by comparing raw coordinates derived from Pleiades Rational Polynomial Coefficients to ground control points. Primary images appear to be referenced within ~15 m over flat areas where parallax is minimal while derived DSMs and associated orthorectified images show a much improved referencing within ~5 m of GCPs. In order to assess the adequacy of Pleiades DSMs for tectonic geomorphology, we present examples from case studies along the Trougout normal fault (Morocco), the Hovd strike-slip fault (Mongolia), the Denali strike-slip fault (USA and Canada) and the Main Frontal Thrust (Bhutan). In addition to proposing a variety of tectonic contexts, these examples cover a wide range of climatic conditions (semi-arid, arctic and tropical), vegetation covers (bare earth, sparse Mediterranean, homogeneous arctic pine, varied tropical forest), lithological natures and related erosion rates. 
The derived DSMs are shown to be capable of characterizing geomorphic markers of active deformation, such as marine and alluvial terraces, stream gullies, alluvial fans and fluvio-glacial deposits, in terms of vertical offsets (from DSMs) and horizontal offsets (from orthorectified optical images). Values extracted from Pleiades DSMs compare well to field measurements in terms of relief and slope, which suggests the effort and resources necessary for field topography could be significantly reduced, especially in poorly accessible areas.

  5. Application of a Two Camera Video Imaging System to Three-Dimensional Vortex Tracking in the 80- by 120-Foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.; Bennett, Mark S.

    1993-01-01

    A description is presented of two enhancements for a two-camera video imaging system that increase the accuracy and efficiency of the system when applied to determining the three-dimensional locations of points along a continuous line. These enhancements increase the utility of the system when extracting quantitative data from surface and off-body flow visualizations. The first enhancement utilizes epipolar geometry to resolve the stereo "correspondence" problem: the problem of unambiguously determining corresponding points in the stereo images of objects that do not have visible reference points. The second enhancement is a method to automatically identify and trace the core of a vortex in a digital image, accomplished by means of an adaptive template-matching algorithm. The system was used to determine the trajectory of a vortex generated by the Leading-Edge eXtension (LEX) of a full-scale F/A-18 aircraft tested in the NASA Ames 80- by 120-Foot Wind Tunnel. The system accuracy for resolving the vortex trajectories is estimated to be +/-2 inches over a distance of 60 feet. Stereo images of some of the vortex trajectories are presented. The system was also used to determine the point where the LEX vortex "bursts". The vortex burst point locations are compared with those measured in small-scale tests and in flight and found to be in good agreement.
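    The epipolar constraint behind the first enhancement says that a point x in one image and its match x' in the other satisfy x'ᵀFx = 0, so candidate matches need only be searched along the epipolar line l' = Fx rather than over the whole image. A sketch using the standard fundamental matrix for an ideal rectified stereo pair (an assumed geometry, not the wind-tunnel calibration):

    ```python
    import numpy as np

    # Fundamental matrix of an ideal rectified stereo pair: the constraint
    # x2^T F x1 = 0 reduces to "same image row" (y2 == y1).
    F = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.0, -1.0],
                  [0.0, 1.0, 0.0]])

    def epipolar_line(F: np.ndarray, x1) -> np.ndarray:
        """Epipolar line l2 = F @ x1 in image 2, as homogeneous coefficients
        (a, b, c) of the line a*x + b*y + c = 0."""
        return F @ np.asarray(x1, dtype=float)

    def epipolar_residual(F, x1, x2) -> float:
        """x2^T F x1; zero exactly when x2 lies on x1's epipolar line."""
        return float(np.asarray(x2, dtype=float) @ F @ np.asarray(x1, dtype=float))

    x1 = np.array([5.0, 2.0, 1.0])       # homogeneous pixel in image 1
    match = np.array([9.0, 2.0, 1.0])    # same row: a valid candidate
    off_row = np.array([9.0, 3.0, 1.0])  # different row: ruled out
    l2 = epipolar_line(F, x1)
    ```

    For the real camera pair, F would come from calibration, and the search for a corresponding point on a continuous line reduces to intersecting that line with l2.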

  6. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques

    NASA Astrophysics Data System (ADS)

    Ivanov, Anton; Oberst, Jürgen; Yershov, Vladimir; Muller, Jan-Peter; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay different epochs back to the mid-1970s, examine time-varying changes (such as the recent discovery of boulder movement), track inter-year seasonal changes and look for occurrences of fresh craters. Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004, the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25 m nadir images) with 87% coverage, of which more than 65% is useful for stereo mapping. NASA began imaging the surface of Mars initially from flybys in the 1960s, and reached image resolution finer than 100 m with the Viking Orbiters in the late 1970s. The most recent orbiter, NASA's MRO, has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈20 cm) and ≈5% from CTX (≈6 m) in stereo. Within the iMars project (http://i-Mars.eu), a fully automated large-scale processing (“Big Data”) solution is being developed to generate the best possible multi-resolution DTM of Mars. In addition, HRSC OrthoRectified Images (ORI) will be used as a georeference basis so that all higher resolution ORIs will be co-registered to the HRSC DTM products (50-100 m grid) generated at DLR, along with DTMs from CTX (6-20 m grid) and HiRISE (1-3 m grid), on a large-scale Linux cluster based at MSSL. The HRSC products will be employed to provide a geographic reference for all current, future and historical NASA products using automated co-registration based on feature points, and initial results will be shown here. 
In 2015, many of the NASA and ESA orbital images will be co-registered and the updated georeferencing information employed to generate a time series of terrain relief with corrected ORIs back to 1977. Web-GIS using OGC protocols will be employed to allow visual exploration of changes to the surface. Data mining processing chains are being developed to search for changes in the Martian surface from 1971-2015, and the output of this data mining will be compared against the results of citizen scientists’ measurements in a specialized Zooniverse implementation. The final co-registered data sets will be distributed through both European and US channels in a manner to be decided towards the end of the project. Acknowledgements: The research leading to these results has received funding from the European Union’s Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement no. 607379.

  7. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    NASA Astrophysics Data System (ADS)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and a wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  8. Spirit's View Beside 'Home Plate' on Sol 1823 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11971 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11971

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,823rd Martian day, or sol, of Spirit's surface mission (Feb. 17, 2009).

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The center of the view is toward the south-southwest.

    The rover had driven 7 meters (23 feet) eastward earlier on Sol 1823, part of maneuvering to get Spirit into a favorable position for climbing onto the low plateau called 'Home Plate.' However, after two driving attempts with negligible progress during the following three sols, the rover team changed its strategy for getting to destinations south of Home Plate. The team decided to drive Spirit at least partway around Home Plate, instead of ascending the northern edge and taking a shorter route across the top of the plateau.

    Layered rocks forming part of the northern edge of Home Plate can be seen near the center of the image. Rover wheel tracks are visible at the lower edge.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  9. New Record Five-Wheel Drive, Spirit's Sol 1856 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11962 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11962

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,856th Martian day, or sol, of Spirit's surface mission (March 23, 2009). The center of the view is toward the west-southwest.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 25.82 meters (84.7 feet) west-northwestward earlier on Sol 1856. This is the longest drive on Mars so far by a rover using only five wheels. Spirit lost the use of its right-front wheel in March 2006. Before Sol 1856, the farthest Spirit had covered in a single sol's five-wheel drive was 24.83 meters (81.5 feet), on Sol 1363 (Nov. 3, 2007).

    The Sol 1856 drive made progress on a route planned for taking Spirit around the western side of the low plateau called 'Home Plate.' A portion of the northwestern edge of Home Plate is prominent in the left quarter of this image, toward the south.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  10. Time for a Change; Spirit's View on Sol 1843 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11973 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11973

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, full-circle view of the rover's surroundings during the 1,843rd Martian day, or sol, of Spirit's surface mission (March 10, 2009). South is in the middle. North is at both ends.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 36 centimeters downhill earlier on Sol 1854, but had not been able to get free of ruts in soft material that had become an obstacle to getting around the northeastern corner of the low plateau called 'Home Plate.'

    The Sol 1854 drive, following two others in the preceding four sols that also achieved little progress in the soft ground, prompted the rover team to switch to a plan of getting around Home Plate counterclockwise, instead of clockwise. The drive direction in subsequent sols was westward past the northern edge of Home Plate.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  11. View Ahead After Spirit's Sol 1861 Drive (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11977 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11977

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this stereo, 210-degree view of the rover's surroundings during the 1,861st to 1,863rd Martian days, or sols, of Spirit's surface mission (March 28 to 30, 2009).

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The center of the scene is toward the south-southwest. East is on the left. West-northwest is on the right.

    The rover had driven 22.7 meters (74 feet) southwestward on Sol 1861 before beginning to take the frames in this view. The drive brought Spirit past the northwestern corner of Home Plate.

    In this view, the western edge of Home Plate is on the portion of the horizon farthest to the left. A mound in middle distance near the center of the view is called 'Tsiolkovsky' and is about 40 meters (about 130 feet) from the rover's position.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  12. Eros in Stereo

    NASA Image and Video Library

    2000-05-07

    Stereo imaging, an important tool on NASA's NEAR Shoemaker spacecraft for geologic analysis of Eros, provides three-dimensional information on the asteroid's landforms and structures. 3D glasses are necessary to view this image.

  13. Surface topography of 1€ coin measured by stereo-PIXE

    NASA Astrophysics Data System (ADS)

    Gholami-Hatam, E.; Lamehi-Rachti, M.; Vavpetič, P.; Grlj, N.; Pelicon, P.

    2013-07-01

    We demonstrate the stereo-PIXE method by measuring the surface topography of the relief details on a 1€ coin. Two X-ray elemental maps were simultaneously recorded by two X-ray detectors positioned on the left and right sides of the proton microbeam. The asymmetry of the yields in the pixels of the two X-ray maps arises from the different photon attenuation along the exit paths of the characteristic X-rays from the point of emission through the sample to the X-ray detectors. To calibrate the inclination angle against the X-ray asymmetry, a flat inclined-surface model was first applied to a sample whose matrix composition and depth elemental concentration profile are known. The yield asymmetry in each image pixel was then converted into the corresponding local inclination angle using the calculated dependence of the asymmetry on surface inclination. Finally, the quantitative topography profile was recovered by integrating the local inclination angle over the lateral displacement of the probing beam.
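    The final step, integrating the local inclination angle over lateral displacement, can be sketched as a cumulative sum of local slopes. This is a hypothetical 1D simplification of the published reconstruction, with the height fixed to zero at the start of the scan:

    ```python
    import numpy as np

    def topography_from_inclination(theta, dx: float) -> np.ndarray:
        """Recover a relative height profile by integrating local surface
        inclination angles theta (radians) along the beam scan direction:
        h[i+1] = h[i] + tan(theta[i]) * dx."""
        slopes = np.tan(np.asarray(theta, dtype=float))
        return np.concatenate([[0.0], np.cumsum(slopes) * dx])

    # toy profile: a uniform 45-degree slope sampled at 1-unit steps
    theta = np.full(4, np.pi / 4)
    profile = topography_from_inclination(theta, dx=1.0)
    ```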

  14. Depth map generation using a single image sensor with phase masks.

    PubMed

    Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki

    2016-06-13

    Conventional stereo matching systems generate a depth map using two or more digital imaging sensors, which makes them difficult to use in small camera systems because of their high cost and bulk. To solve this problem, this paper presents a stereo matching system using a single image sensor with phase masks for phase-difference auto-focusing. A novel pattern of phase mask array is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw-format sensor output. The proposed method consists of four steps to compute the depth map: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using hierarchical block matching for disparity measurement, and (iv) image matting to fill holes and generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors.
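    Step (iii), disparity estimation by block matching, can be illustrated with a brute-force (non-hierarchical) sum-of-absolute-differences search; the function and parameters here are illustrative simplifications, not the paper's method, which also handles sub-pixel phase-mask disparities:

    ```python
    import numpy as np

    def block_match_disparity(left, right, block=3, max_disp=4):
        """For each interior pixel of the left view, find the horizontal
        disparity d in [0, max_disp] minimizing the SAD cost between a
        block around (y, x) in `left` and around (y, x - d) in `right`."""
        h, w = left.shape
        r = block // 2
        disp = np.zeros((h, w), dtype=int)
        for y in range(r, h - r):
            for x in range(r + max_disp, w - r):
                patch = left[y-r:y+r+1, x-r:x+r+1].astype(float)
                costs = [np.abs(patch - right[y-r:y+r+1, x-d-r:x-d+r+1]).sum()
                         for d in range(max_disp + 1)]
                disp[y, x] = int(np.argmin(costs))
        return disp

    # demo: the right view is the left view shifted by 2 pixels
    rng = np.random.default_rng(1)
    left = rng.random((16, 16))
    right = np.roll(left, -2, axis=1)
    disp = block_match_disparity(left, right)
    ```

    A hierarchical variant, as in the paper, would run this coarse-to-fine on an image pyramid to cut the search range at full resolution.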

  15. Stereo View of Phoenix Test Sample Site

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This anaglyph image, acquired by NASA's Phoenix Lander's Surface Stereo Imager on Sol 7, the seventh day of the mission (June 1, 2008), shows a stereoscopic 3D view of the so-called 'Knave of Hearts' first-dig test area to the north of the lander. The Robotic Arm's scraping blade left a small horizontal depression above where the sample was taken.

    Scientists speculate that white material in the depression left by the dig could represent ice or salts that precipitated into the soil. This material is likely the same white material observed in the sample in the Robotic Arm's scoop.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  16. On-screen-display (OSD) menu detection for proper stereo content reproduction for 3D TV

    NASA Astrophysics Data System (ADS)

    Tolstaya, Ekaterina V.; Bucha, Victor V.; Rychagov, Michael N.

    2011-03-01

    Modern consumer 3D TV sets are able to show video content in two different modes: 2D and 3D. In 3D mode, the stereo pair comes from an external device such as a Blu-ray player, a satellite receiver, etc. The stereo pair is split into left and right images that are shown one after another. The viewer sees a different image with each eye through shutter glasses properly synchronized with the 3D TV. In addition, some devices that supply the TV with stereo content can display additional information by imposing an overlay picture on the video content: an On-Screen-Display (OSD) menu. Some OSDs are not 3D compatible and lead to incorrect 3D reproduction. In this case, the TV set must recognize the type of OSD, determine whether it is 3D compatible, and visualize it correctly by either switching off stereo mode or continuing to display the stereo content. We propose a new, stable method for detecting 3D-incompatible OSD menus on stereo content. A conventional OSD is a rectangular area with letters and pictograms, and OSD menus can have different transparency levels and colors. To be 3D compatible, an OSD must be overlaid separately on both images of a stereo pair. The main problem in detecting an OSD is distinguishing whether a color difference is due to OSD presence or to stereo parallax. We applied special techniques to find reliable image differences and additionally used the cue that an OSD usually has distinctive geometric features: straight parallel lines. The developed algorithm was tested on our video sequence database, with several types of OSD of different colors and transparency levels overlaid on video content. Detection quality exceeded 99% of true answers.

  17. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    PubMed

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
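    The reconstruction step inverts the standard atmospheric scattering model I = J·t + A·(1 − t), where t is transmission and A the atmospheric light; in the paper, t comes from the disparity-derived depth and A from color-line estimation. A generic sketch of the model inversion (not the authors' iterative refinement loop), with an assumed homogeneous fog density:

    ```python
    import numpy as np

    def transmission_from_depth(depth, beta: float = 0.8) -> np.ndarray:
        """Transmission t = exp(-beta * d) for a homogeneous fog density beta
        (beta is an assumed constant here)."""
        return np.exp(-beta * np.asarray(depth, dtype=float))

    def defog(I, A, t, t_min: float = 0.1) -> np.ndarray:
        """Recover scene radiance J from I = J*t + A*(1 - t); clipping t
        avoids amplifying noise where transmission is near zero."""
        t = np.clip(t, t_min, 1.0)
        return (I - A) / t + A

    # round trip on synthetic data: fog a known radiance, then recover it
    J = np.array([0.2, 0.5, 0.9])
    t = transmission_from_depth(np.array([0.5, 1.0, 2.0]))
    A = 1.0
    I = J * t + A * (1.0 - t)
    J_hat = defog(I, A, t)
    ```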

  18. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    PubMed Central

    Park, Jinho; Park, Hasil

    2017-01-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826

  19. The PanCam instrument on the 2018 Exomars rover: Science Implementation Strategy and Integrated Surface Operations Concept

    NASA Astrophysics Data System (ADS)

    Schmitz, Nicole; Jaumann, Ralf; Coates, Andrew; Griffiths, Andrew; Hauber, Ernst; Trauthan, Frank; Paar, Gerhard; Barnes, Dave; Bauer, Arnold; Cousins, Claire

    2010-05-01

    Geologic context as a combination of orbital imaging and surface vision, including range, resolution, stereo, and multispectral imaging, is commonly regarded as a basic requirement for remote robotic geology and forms the first tier of any multi-instrument strategy for investigating and eventually understanding the geology of a region from a robotic platform. Missions with objectives beyond a pure geologic survey, e.g. exobiology objectives, require goal-oriented operational procedures, where the iterative process of scientific observation, hypothesis, testing, and synthesis, performed via a sol-by-sol data exchange with a remote robot, is supported by a powerful vision system. Beyond allowing a thorough geological mapping of the surface (soil, rocks and outcrops) in 3D using wide-angle stereo imagery, such a system needs to be able to provide detailed visual information on targets of interest in high resolution, thereby enabling the selection of science targets and samples for further analysis with a specialized in-situ instrument suite. Surface vision for ESA's upcoming ExoMars rover will come from a dedicated Panoramic Camera System (PanCam). As an integral part of the Pasteur payload package, the PanCam is designed to support the search for evidence of biological processes by obtaining wide-angle multispectral stereoscopic panoramic images and high-resolution RGB images from the mast of the rover [1]. The camera system will consist of two identical wide-angle cameras (WACs), which are arranged on a common pan-tilt mechanism with a fixed stereo base length of 50 cm. The WACs are complemented by a High Resolution Camera (HRC), mounted between the WACs, which allows a magnification of selected targets by a factor of ~8 with respect to the wide-angle optics. 
The high-resolution images together with the multispectral and stereo capabilities of the camera will be of unprecedented quality for the identification of water-related surface features (such as sedimentary rocks) and form one key to a successful implementation of ESA's multi-level strategy for the ExoMars Reference Surface Mission. A dedicated PanCam Science Implementation Strategy is under development, which connects the PanCam science objectives and needs of the ExoMars Surface Mission with the required investigations, planned measurement approach and sequence, and connected mission requirements. The first step of this strategy is obtaining geological context to enable the decision of where to send the rover. PanCam (in combination with Wisdom) will be used to obtain ground truth by a thorough geomorphologic mapping of the ExoMars rover's surroundings in near and far range in the form of (1) RGB or monochromatic full (i.e. 360°) or partial stereo panoramas for morphologic and textural information and stereo ranging, (2) mosaics or single images with partial or full multispectral coverage to assess the mineralogy of surface materials as well as their weathering state and possible past or present alteration processes and (3) small-scale high-resolution information on targets/features of interest, and distant or inaccessible sites. This general survey phase will lead to the identification of surface features like outcrops, ridges and troughs and the characterization of different rock and surface units based on their morphology, distribution, and spectral and physical properties. Evidence of water-bearing minerals, water-altered rocks or even water-lain sediments seen in the large-scale wide angle images will then allow for preselecting those targets/features considered relevant for detailed analysis and definition of their geologic context.
Detailed characterization and, subsequently, selection of those preselected targets/features for further analysis will then be enabled by color high-resolution imagery, followed by the next tier of contact instruments to enable a decision on whether or not to acquire samples for further analysis. During the following drill/analysis phase, PanCam's High Resolution Camera will characterize the sample in the sample tray and observe the sample discharge into the Core Sample Transfer Mechanism. Key parts of this science strategy have been tested under laboratory conditions in two geology blind tests [2] and during two field test campaigns in Svalbard, using simulated mission conditions, an ExoMars representative Payload (ExoMars and MSL instrument breadboards), and Mars analog settings [3, 4]. The experiences gained are being translated into operational sequences, and, together with the science implementation strategy, form a first version of a PanCam Surface Operations plan. References: [1] Griffiths, A.D. et al. (2006) International Journal of Astrobiology 5 (3): 269-275, doi:10.1017/ S1473550406003387. [2] Pullan, D. et al. (2009) EPSC Abstracts, Vol. 4, EPSC2009-514. [3] Schmitz, N. et al. (2009) Geophysical Research Abstracts, Vol. 11, EGU2009-10621-2. [4] Cousins, C. et al. (2009) EPSC Abstracts, Vol. 4, EPSC2009-813.

  20. Stereo transparency and the disparity gradient limit

    NASA Technical Reports Server (NTRS)

    McKee, Suzanne P.; Verghese, Preeti

    2002-01-01

    Several studies (Vision Research 15 (1975) 583; Perception 9 (1980) 671) have shown that binocular fusion is limited by the disparity gradient (disparity/distance) separating image points, rather than by their absolute disparity values. Points separated by a gradient >1 appear diplopic. These results are sometimes interpreted as a constraint on human stereo matching, rather than a constraint on fusion. Here we have used psychophysical measurements on stereo transparency to show that human stereo matching is not constrained by a gradient of 1. We created transparent surfaces composed of many pairs of dots, in which each member of a pair was assigned a disparity equal and opposite to the disparity of the other member. For example, each pair could be composed of one dot with a crossed disparity of 6' and the other with uncrossed disparity of 6', vertically separated by a parametrically varied distance. When the vertical separation between the paired dots was small, the disparity gradient for each pair was very steep. Nevertheless, these opponent-disparity dot pairs produced a striking appearance of two transparent surfaces for disparity gradients ranging between 0.5 and 3. The apparent depth separating the two transparent planes was correctly matched to an equivalent disparity defined by two opaque surfaces. A test target presented between the two transparent planes was easily detected, indicating robust segregation of the disparities associated with the paired dots into two transparent surfaces with few mismatches in the target plane. Our simulations using the Tsai-Victor model show that the response profiles produced by scaled disparity-energy mechanisms can account for many of our results on the transparency generated by steep gradients.
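    The disparity gradient in this paradigm is simple stimulus arithmetic: the disparity difference between the paired dots divided by their separation, all in common angular units. A small illustrative sketch (not from the paper) reproduces the abstract's range of gradients from 0.5 to 3:

```python
def disparity_gradient(d1_arcmin, d2_arcmin, separation_arcmin):
    """Disparity gradient: disparity difference between two image points
    divided by the distance separating them (same angular units)."""
    return abs(d1_arcmin - d2_arcmin) / separation_arcmin

# A pair with +6' (crossed) and -6' (uncrossed) disparity, 24' apart,
# has gradient 12/24 = 0.5 -- at the classical fusion limit.
g_fusable = disparity_gradient(6.0, -6.0, 24.0)
# The same pair separated by only 4' has a very steep gradient of 3.0.
g_steep = disparity_gradient(6.0, -6.0, 4.0)
```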

  1. Analysis and design of stereoscopic display in stereo television endoscope system

    NASA Astrophysics Data System (ADS)

    Feng, Dawei

    2008-12-01

    Many 3D displays have been proposed for medical use. When designing and evaluating a new system, surgeons make three demands: the first priority is precision; second, the displayed images should be easy to understand; and third, because surgery lasts for hours, the display must not be fatiguing. The stereo television endoscope studied in this paper images the celiac viscera onto the photosurfaces of left and right CCDs, imitating human binocular stereo vision by means of a dual optical path system. The left and right video signals are processed by frequency multiplication and displayed on a monitor; the observer perceives a stereo image with depth through a polarized LCD screen and a pair of polarized glasses. Clinical experiments show that the stereo TV endoscope makes minimally invasive surgery safer and more reliable, shortens operation time, and improves operation accuracy.

  2. Radargrammetric DSM generation in mountainous areas through adaptive-window least squares matching constrained by enhanced epipolar geometry

    NASA Astrophysics Data System (ADS)

    Dong, Yuting; Zhang, Lu; Balz, Timo; Luo, Heng; Liao, Mingsheng

    2018-03-01

    Radargrammetry is a powerful tool to construct digital surface models (DSMs) especially in heavily vegetated and mountainous areas where SAR interferometry (InSAR) technology suffers from decorrelation problems. In radargrammetry, the most challenging step is to produce an accurate disparity map through massive image matching, from which terrain height information can be derived using a rigorous sensor orientation model. However, precise stereoscopic SAR (StereoSAR) image matching is a very difficult task in mountainous areas due to the presence of speckle noise and dissimilar geometric/radiometric distortions. In this article, an adaptive-window least squares matching (AW-LSM) approach with an enhanced epipolar geometric constraint is proposed to robustly identify homologous points after compensation for radiometric discrepancies and geometric distortions. The matching procedure consists of two stages. In the first stage, the right image is re-projected into the left image space to generate epipolar images using rigorous imaging geometries enhanced with elevation information extracted from prior DEM data (e.g., the SRTM DEM) instead of the mean height of the mapped area. Consequently, the dissimilarities in geometric distortions between the left and right images are largely reduced, and the residual disparity corresponds to the height difference between the true ground surface and the prior DEM. In the second stage, massive per-pixel matching between StereoSAR epipolar images identifies the residual disparity. To ensure the reliability and accuracy of the matching results, we develop an iterative matching scheme in which classic cross correlation matching is used to obtain initial results, followed by least squares matching (LSM) to refine the matching results. An adaptively resizing search window strategy is adopted during the dense matching step to help find correct matching points.
The feasibility and effectiveness of the proposed approach are demonstrated using Stripmap and Spotlight mode TerraSAR-X stereo data pairs covering Mount Song in central China. Experimental results show that the proposed method can provide a robust and effective matching tool for radargrammetry in mountainous areas.
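    The first half of the iterative scheme — classic cross-correlation matching for an initial integer disparity — can be illustrated with a minimal normalized-cross-correlation search along an epipolar row. This is an illustrative sketch, not the authors' implementation; the LSM refinement stage, which would estimate subpixel shifts and radiometric gain/offset, is omitted:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_point(left, right, row, col, win=5, search=10):
    """Find the column in `right` best matching a window centered at
    (row, col) in `left`; the residual disparity is the column offset.
    Subpixel LSM refinement would start from this integer estimate."""
    h = win // 2
    tmpl = left[row - h:row + h + 1, col - h:col + h + 1]
    best_score, best_c = -2.0, col
    for c in range(col - search, col + search + 1):
        cand = right[row - h:row + h + 1, c - h:c + h + 1]
        if cand.shape != tmpl.shape:
            continue
        score = ncc(tmpl, cand)
        if score > best_score:
            best_score, best_c = score, c
    return best_c - col, best_score

# Demo: the right image is the left image shifted by 3 px, so the
# recovered residual disparity should be 3.
rng = np.random.default_rng(0)
left = rng.random((50, 50))
right = np.roll(left, 3, axis=1)
disp, score = match_point(left, right, row=25, col=25)
```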

  3. Connectionist model-based stereo vision for telerobotics

    NASA Technical Reports Server (NTRS)

    Hoff, William; Mathis, Donald

    1989-01-01

    Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
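    A parameter network as described — one unit per discrete combination of plane parameters, with activity encoding hypothesis confidence — behaves much like a Hough-style accumulator. The sketch below uses simple voting in place of the paper's gradient descent on an energy function, and is an illustration of the idea rather than the authors' architecture:

```python
import numpy as np

def parameter_network_votes(points, h_vals, mx_vals, my_vals, tol=0.05):
    """One accumulator cell ("unit") per (h, mx, my) combination; each
    3D point raises the activity of every unit whose plane
    z = h + mx*x + my*y passes near it."""
    acc = np.zeros((len(h_vals), len(mx_vals), len(my_vals)))
    for x, y, z in points:
        for i, h in enumerate(h_vals):
            for j, mx in enumerate(mx_vals):
                for k, my in enumerate(my_vals):
                    if abs(h + mx * x + my * y - z) < tol:
                        acc[i, j, k] += 1.0
    return acc

# Points on the plane z = 1 + 0.5*x should make the (h=1, mx=0.5, my=0)
# unit the most active one.
points = [(x, y, 1.0 + 0.5 * x) for x in (0, 1, 2) for y in (0, 1)]
acc = parameter_network_votes(points,
                              [0.0, 1.0, 2.0],
                              [-0.5, 0.0, 0.5],
                              [-0.5, 0.0, 0.5])
best = np.unravel_index(acc.argmax(), acc.shape)
```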

  4. Topographic map of the western region of Dao Vallis in Hellas Planitia, Mars; MTM 500k -40/082E OMKT

    USGS Publications Warehouse

    Rosiek, Mark R.; Redding, Bonnie L.; Galuszka, Donna M.

    2006-01-01

    This map, compiled photogrammetrically from Viking Orbiter stereo image pairs, is part of a series of topographic maps of areas of special scientific interest on Mars. Contours were derived from a digital terrain model (DTM) compiled on a digital photogrammetric workstation using Viking Orbiter stereo image pairs with orientation parameters derived from an analytic aerotriangulation. The image base for this map employs Viking Orbiter images from orbits 406 and 363. An orthophotomosaic was created on the digital photogrammetric workstation using the DTM compiled from stereo models.

  5. Mastcam Stereo Analysis and Mosaics (MSAM)

    NASA Astrophysics Data System (ADS)

    Deen, R. G.; Maki, J. N.; Algermissen, S. S.; Abarca, H. E.; Ruoff, N. A.

    2017-06-01

    Describes a new PDART task that will generate stereo analysis products (XYZ, slope, etc.), terrain meshes, and mosaics (stereo, ortho, and Mast/Nav combos) for all MSL Mastcam images and deliver the results to PDS.

  6. Lunar geodesy and cartography: a new era

    NASA Astrophysics Data System (ADS)

    Duxbury, Thomas; Smith, David; Robinson, Mark; Zuber, Maria T.; Neumann, Gregory; Danton, Jacob; Oberst, Juergen; Archinal, Brent; Glaeser, Philipp

    The Lunar Reconnaissance Orbiter (LRO) ushers in a new era in precision lunar geodesy and cartography. LRO was launched in June 2009, completed its Commissioning Phase in September 2009 and is now in its Primary Mission Phase on its way to collecting high precision, global topographic and imaging data. Aboard LRO are the Lunar Orbiter Laser Altimeter (LOLA; Smith et al., 2009) and the Lunar Reconnaissance Orbiter Camera (LROC; Robinson et al., 2010). LOLA is a derivative of the successful MOLA at Mars that produced the global reference surface being used for all precision cartographic products. LOLA produces 5 altimetry spots having footprints of 5 m at a frequency of 28 Hz, significantly bettering MOLA, which produced 1 spot having a footprint of 150 m at a frequency of 10 Hz. LROC has twin narrow angle cameras having pixel resolutions of 0.5 meters from a 50 km orbit and a wide-angle camera having a pixel resolution of 75 m in up to 7 color bands. One of the two NACs looks to the right of nadir and the other looks to the left with a few hundred pixel overlap in the nadir direction. LOLA is mounted on the LRO spacecraft to look nadir, in the overlap region of the NACs. The LRO spacecraft has the ability to look nadir and build up global coverage as well as looking off-nadir to provide stereo coverage and fill in data gaps. The LROC wide-angle camera builds up global stereo coverage naturally from its large field-of-view overlap from orbit to orbit during nadir viewing. To date, the LROC WAC has already produced global stereo coverage of the lunar surface. This report focuses on the registration of LOLA altimetry to the LROC NAC images. LOLA has a dynamic range of tens of km while producing elevation data at sub-meter precision. LOLA also has good return in off-nadir attitudes. Over the LRO mission, multiple LOLA tracks will be in each of the NAC images at the lunar equator and even more tracks in the NAC images nearer the poles.
The registration of LOLA altimetry to NAC images is aided by the 5 spots showing regional and local slopes, along and cross-track, that are easily correlated visually to features within the images. One can precisely register each of the 5 LOLA spots to specific pixels in LROC images of distinct features such as craters and boulders. This can be performed routinely for features at the 100 m level and larger. However, even features at the several m level can also be registered if a single LOLA spot probes the depth of a small crater while the other 4 spots are on the surrounding surface, or one spot returns from the top of a small boulder seen by NAC. The automatic registration of LOLA tracks with NAC stereo digital terrain models should provide for even higher accuracy. Also, the LOLA pulse spread of the returned signal, which is sensitive to slopes and roughness, is an additional source of information to help match the LOLA tracks to the images. As the global coverage builds, LOLA will provide absolute coordinates in latitude, longitude and radius of surface features with accuracy at the meter level or better. The NAC images will then be registered to the LOLA reference surface in the production of precision, controlled photomosaics, having spatial resolutions as good as 0.5 m/pixel. For hundreds of strategic sites viewed in stereo, even higher precision and more complete surface coverage is possible for the production of digital terrain models and mosaics. LRO, with LOLA and LROC, will improve the relative and absolute accuracy of geodesy and cartography by orders of magnitude, ushering in a new era for lunar geodesy and cartography. Robinson, M., et al., Space Sci. Rev., DOI 10.1007/s11214-010-9634-2, Date: 2010-02-23, in press. Smith, D., et al., Space Sci. Rev., DOI 10.1007/s11214-009-9512-y, published online 16 May 2009.

  7. Estimation of surface curvature from full-field shape data using principal component analysis

    NASA Astrophysics Data System (ADS)

    Sharma, Sameer; Vinuchakravarthy, S.; Subramanian, S. J.

    2017-01-01

    Three-dimensional digital image correlation (3D-DIC) is a popular image-based experimental technique for estimating surface shape, displacements and strains of deforming objects. In this technique, a calibrated stereo rig is used to obtain and stereo-match pairs of images of the object of interest from which the shapes of the imaged surface are then computed using the calibration parameters of the rig. Displacements are obtained by performing an additional temporal correlation of the shapes obtained at various stages of deformation and strains by smoothing and numerically differentiating the displacement data. Since strains are of primary importance in solid mechanics, significant efforts have been put into computation of strains from the measured displacement fields; however, much less attention has been paid to date to computation of curvature from the measured 3D surfaces. In this work, we address this gap by proposing a new method of computing curvature from full-field shape measurements using principal component analysis (PCA) along the lines of a similar work recently proposed to measure strains (Grama and Subramanian 2014 Exp. Mech. 54 913-33). PCA is a multivariate analysis tool that is widely used to reveal relationships between a large number of variables, reduce dimensionality and achieve significant denoising. This technique is applied here to identify dominant principal components in the shape fields measured by 3D-DIC and these principal components are then differentiated systematically to obtain the first and second fundamental forms used in the curvature calculation. The proposed method is first verified using synthetically generated noisy surfaces and then validated experimentally on some real world objects with known ground-truth curvatures.
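    The pipeline the abstract describes — PCA-based denoising of the measured shape field, then systematic differentiation for curvature — can be sketched as follows. This is an illustrative simplification, not the authors' implementation: truncated SVD stands in for the PCA step, and Gaussian curvature is computed directly from the Monge-patch formula K = (f_xx·f_yy − f_xy²)/(1 + f_x² + f_y²)² rather than via the full first and second fundamental forms:

```python
import numpy as np

def pca_denoise(Z, k):
    """Keep the k dominant principal components of a height field Z
    (truncated SVD), standing in for the paper's PCA denoising step."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def gaussian_curvature(Z, dx):
    """Gaussian curvature of a Monge patch z = f(x, y) from
    finite-difference derivatives (rows = y, columns = x)."""
    Zy, Zx = np.gradient(Z, dx)      # first derivatives
    Zxy, Zxx = np.gradient(Zx, dx)   # second derivatives of Zx
    Zyy, _ = np.gradient(Zy, dx)
    return (Zxx * Zyy - Zxy**2) / (1.0 + Zx**2 + Zy**2) ** 2

# Spherical cap of radius 10: Gaussian curvature should be 1/R^2 = 0.01.
x = np.arange(-2.0, 2.0001, 0.05)
X, Y = np.meshgrid(x, x)
K = gaussian_curvature(np.sqrt(100.0 - X**2 - Y**2), 0.05)
```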

  8. The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.

    The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with 65° field-of-view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.

  9. 3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events

    NASA Technical Reports Server (NTRS)

    Brown, Richard; Navard, Andrew; Spruce, Joseph

    2010-01-01

    An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch on the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually nonreferenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can enable strong visual indicators that immediately portray depth perception of damaged areas, or movement of fragments between frames, that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show.
Multiple sequential image pairs can be used to allow forensic review of temporal phenomena between pairs. The observer, while wearing linear polarized glasses, is able to review image pairs in passive 3D stereo.
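    The control-point step — collecting similar points between frames and generating a transformation equation to re-project one image onto the other — can be sketched with a least-squares fit. The abstract does not specify the transformation model used in the WallView workflow; a 2D affine transform is assumed here for illustration:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping control points
    src -> dst:  [x', y'] = A @ [x, y] + t."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    G = np.hstack([src, np.ones((len(src), 1))])  # design matrix [x y 1]
    params, *_ = np.linalg.lstsq(G, dst, rcond=None)
    return params[:2].T, params[2]                # A (2x2), t (2,)

def apply_affine(A, t, pts):
    """Re-project points with the fitted transform."""
    return np.asarray(pts, float) @ A.T + t

# Demo: recover a known small rotation/scale/shift from six
# hypothetical control points.
rng = np.random.default_rng(1)
src = rng.random((6, 2)) * 100
A0 = np.array([[1.01, 0.02], [-0.03, 0.99]])
t0 = np.array([5.0, -3.0])
A, t = fit_affine(src, src @ A0.T + t0)
```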

  10. Research of flaw image collecting and processing technology based on multi-baseline stereo imaging

    NASA Astrophysics Data System (ADS)

    Yao, Yong; Zhao, Jiguang; Pang, Xiaoyan

    2008-03-01

    Aiming at practical issues in gun bore flaw image collection, such as demands for accurate optical design, complex algorithms, and high technical precision, this paper presents the design framework of a 3D image collecting and processing system based on multi-baseline stereo imaging. The system mainly comprises a computer, an electrical control box, a stepping motor, and a CCD camera, and realizes image collection, stereo matching, 3D information reconstruction, and post-processing. Theoretical analysis and experimental results show that the images collected by this system are precise and that it can efficiently resolve the matching ambiguity caused by uniform or repeated textures. The system also achieves faster measurement speed and higher measurement precision.

  11. Mars Exploration Rover engineering cameras

    USGS Publications Warehouse

    Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.

    2003-01-01

    NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ~4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.
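    The quoted angular resolutions can be sanity-checked with a small-angle estimate: the field of view in radians divided by the number of detector pixels. This assumes uniform angular coverage per pixel; the actual per-pixel value depends on the lens mapping, which presumably accounts for the Navcam's quoted 0.82 mrad/pixel being slightly above this estimate:

```python
import math

def angular_resolution_mrad(fov_deg, pixels):
    """Small-angle estimate of per-pixel angular resolution:
    FOV (in radians) divided by detector pixels, in mrad/pixel."""
    return math.radians(fov_deg) / pixels * 1000.0

nav = angular_resolution_mrad(45.0, 1024)   # ~0.77 mrad/pixel (quoted: 0.82)
haz = angular_resolution_mrad(124.0, 1024)  # ~2.11 mrad/pixel (quoted: 2.1)
```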

  12. Wild 2 Features

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site] Figure 1

    These images taken by NASA's Stardust spacecraft highlight the diverse features that make up the surface of comet Wild 2. Side A (see Figure 1) shows a variety of small pinnacles and mesas seen on the limb of the comet. Side B (see Figure 1) shows the location of a 2-kilometer (1.2-mile) series of aligned scarps, or cliffs, that are best seen in the stereo images.

  13. Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry.

    PubMed

    Villarrubia, J S; Tondare, V N; Vladár, A E

    2016-01-01

    The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples: mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within about 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
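    A rough skin with a prescribed power spectral density can be generated by spectral synthesis: draw random phases, set the Fourier amplitudes from the PSD, and inverse-transform. The paper wraps a 2D skin around a line; the 1D sketch below shows only the core idea, and the PSD normalization is one common convention assumed here, not taken from the paper:

```python
import numpy as np

def rough_profile(n, dx, psd, seed=0):
    """Random rough 1D profile with (approximately) a prescribed
    one-sided power spectral density, via spectral synthesis."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, dx)
    # Amplitudes from the PSD (assumed normalization: S_k = 2*dx/n * |H_k|^2).
    amp = np.sqrt(psd(freqs) * n / (2.0 * dx))
    amp[0] = 0.0                                  # zero-mean profile
    phases = np.exp(2j * np.pi * rng.random(freqs.size))
    return np.fft.irfft(amp * phases, n)

# Demo: a flat (white) PSD of 1e-3 over the Nyquist band 0..0.5 gives a
# profile variance of roughly 1e-3 * 0.5 = 5e-4.
z = rough_profile(1024, 1.0, lambda f: np.full_like(f, 1e-3))
```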

  14. A comparative interregional analysis of selected data from LANDSAT-1 and EREP for the inventory and monitoring of natural ecosystems

    NASA Technical Reports Server (NTRS)

    Poulton, C. E.

    1975-01-01

    Comparative statistics were presented on the capability of LANDSAT-1 and three of the Skylab remote sensing systems (S-190A, S-190B, S-192) for the recognition and inventory of analogous natural vegetations and landscape features important in resource allocation and management. Two analogous regions presenting vegetational zonation from salt desert to alpine conditions above the timberline were observed, emphasizing the visual interpretation mode in the investigation. A hierarchical legend system was used as the basic classification of all land surface features. Comparative tests were run on image identifiability with the different sensor systems, and mapping and interpretation tests were made both in monocular and stereo interpretation with all systems except the S-192. Significant advantage was found in the use of stereo from space when image analysis is by visual or visual-machine-aided interactive systems. Some cost factors in mapping from space are identified. The various image types are compared and an operational system is postulated.

  15. a Performance Comparison of Feature Detectors for Planetary Rover Mapping and Localization

    NASA Astrophysics Data System (ADS)

    Wan, W.; Peng, M.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Teng, B.; Mao, X.; Zhao, Q.; Xin, X.; Jia, M.

    2017-07-01

    Feature detection and matching are key techniques in computer vision and robotics, and have been successfully implemented in many fields. So far there has been no performance comparison of feature detectors and matching methods for planetary mapping and rover localization using rover stereo images. In this research, we present a comprehensive evaluation and comparison of six feature detectors, including Moravec, Förstner, Harris, FAST, SIFT and SURF, aiming for optimal implementation of feature-based matching in planetary surface environments. To facilitate quantitative analysis, a series of evaluation criteria, including distribution evenness of matched points, coverage of detected points, and feature matching accuracy, are developed in the research. In order to perform an exhaustive evaluation, stereo images, simulated under different baselines, pitch angles, and intervals between adjacent rover locations, are taken as the experimental data source. The comparison results show that SIFT offers the best overall performance; in particular, it is less sensitive to changes between images taken at adjacent locations.
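    The abstract does not define its "distribution evenness of matched points" criterion exactly; one plausible formulation, assumed here for illustration, is the normalized entropy of point counts over a grid tiling of the image (1.0 = perfectly even spread, 0.0 = all points in one cell):

```python
import numpy as np

def distribution_evenness(points, width, height, grid=4):
    """Evenness of a matched-point distribution: normalized entropy of
    the counts in a grid x grid tiling of the image."""
    counts = np.zeros((grid, grid))
    for x, y in points:
        i = min(int(y / height * grid), grid - 1)
        j = min(int(x / width * grid), grid - 1)
        counts[i, j] += 1
    p = counts.ravel() / counts.sum()
    p = p[p > 0]                       # empty cells contribute nothing
    return float(-(p * np.log(p)).sum() / np.log(grid * grid))

# Demo: one point per cell of a 4x4 tiling of a 100x80 image.
even_pts = [((j + 0.5) * 25.0, (i + 0.5) * 20.0)
            for i in range(4) for j in range(4)]
```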

  16. Opportunity's Surroundings After Sol 1820 Drive (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11841
    [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11841

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009).

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  17. STEREO Education and Public Outreach Efforts

    NASA Technical Reports Server (NTRS)

    Kucera, Therese

    2007-01-01

    STEREO has had a big year this year with its launch and the start of data collection. STEREO has mostly focused on informal educational venues, most notably with STEREO 3D images made available to museums through the NASA Museum Alliance. Other activities have involved making STEREO imagery available though the AMNH network and Viewspace, continued partnership with the Christa McAuliffe Planetarium, data sonification projects, preservice teacher training, and learning activity development.

  18. Performance Evaluation of Dsm Extraction from ZY-3 Three-Line Arrays Imagery

    NASA Astrophysics Data System (ADS)

    Xue, Y.; Xie, W.; Du, Q.; Sang, H.

    2015-08-01

    ZiYuan-3 (ZY-3), launched on January 9, 2012, is China's first civilian high-resolution stereo mapping satellite. ZY-3 is equipped with three-line scanners (nadir, backward and forward) for stereo mapping; the resolutions of the panchromatic (PAN) stereo mapping images are 2.1 m at nadir and 3.6 m at tilt angles of ±22° forward and backward, respectively. The stereo base-height ratio is 0.85-0.95. Compared with stereo mapping from two-view images, the three-line array images of ZY-3 can be used for DSM generation taking advantage of one more view than conventional photogrammetric methods, which enriches the information for image matching and enhances the accuracy of the generated DSM. Preliminary results on the positioning accuracy of ZY-3 images have been reported, but before massive mapping applications utilizing ZY-3 images for DSM generation, evaluating the performance of DSM extraction from ZY-3 three-line array imagery is of significant value for routine mapping applications. The goal of this research is to clarify the mapping performance of the ZY-3 three-line array scanners through an accuracy evaluation of DSM generation. A comparison of DSM products generated in different topographic areas from the three-view images and from different two-view combinations of ZY-3 is presented. Besides the comparison across topographic study areas, the accuracy deviation of DSM products with different grid sizes (25 m, 10 m and 5 m) is delineated in order to clarify the impact of grid size on accuracy evaluation.
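    The quoted base-height ratio of 0.85-0.95 feeds directly into the expected height accuracy of a stereo DSM via the standard rule of thumb sigma_h ≈ (H/B) · sigma_p, with the image matching error sigma_p expressed in ground units. A quick sketch, in which the half-pixel matching error is an assumed, typical value rather than a figure from the paper:

```python
def expected_height_error(base_to_height, gsd, matching_error_px=0.5):
    """Rule-of-thumb stereo height accuracy: sigma_h ~ (H/B) * sigma_p,
    where sigma_p is the matching error converted to ground units."""
    return (1.0 / base_to_height) * matching_error_px * gsd

# ZY-3: B/H ~ 0.9, 2.1 m nadir GSD, half-pixel matching ->
# roughly 1.2 m expected height error.
sigma_h = expected_height_error(0.9, 2.1)
```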

  19. Mars Science Laboratory Mission Curiosity Rover Stereo

    NASA Image and Video Library

    2011-07-22

    This stereo image of NASA's Mars Science Laboratory Curiosity rover was taken May 26, 2011, in the Spacecraft Assembly Facility at NASA's Jet Propulsion Laboratory in Pasadena, Calif. 3D glasses are necessary to view this image.

  20. Evaluation of Deep Learning Based Stereo Matching Methods: from Ground to Aerial Images

    NASA Astrophysics Data System (ADS)

    Liu, J.; Ji, S.; Zhang, C.; Qin, Z.

    2018-05-01

    Dense stereo matching has been extensively studied in photogrammetry and computer vision. In this paper we evaluate the application of deep learning based stereo methods, which emerged in 2016 and spread rapidly, to aerial stereo pairs rather than the ground images commonly used in the computer vision community. Two popular methods are evaluated. One learns the matching cost with a convolutional neural network (known as MC-CNN); the other produces a disparity map in an end-to-end manner by utilizing both geometry and context (known as GC-Net). First, we evaluate the performance of the deep learning based methods on aerial stereo images by direct model reuse: models pre-trained separately on the KITTI 2012, KITTI 2015 and Driving datasets are applied directly to three aerial datasets. We also give the results of training directly on the target aerial datasets. Second, the deep learning based methods are compared to the classic stereo matching method, Semi-Global Matching (SGM), and a photogrammetric software package, SURE, on the same aerial datasets. Third, a transfer learning strategy is introduced for aerial image matching, based on the assumption that a few target samples are available for model fine-tuning. The experiments showed that the conventional methods and the deep learning based methods performed similarly, and that the latter have greater potential to be explored.

  1. 3D reconstruction of the optic nerve head using stereo fundus images for computer-aided diagnosis of glaucoma

    NASA Astrophysics Data System (ADS)

    Tang, Li; Kwon, Young H.; Alward, Wallace L. M.; Greenlee, Emily C.; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.

    2010-03-01

    The shape of the optic nerve head (ONH) is reconstructed automatically from stereo fundus color images by a robust stereo matching algorithm, which is needed for a quantitative estimate of the amount of nerve fiber loss in patients with glaucoma. Compared to natural-scene stereo, fundus images are noisy because of the limits on illumination conditions and imperfections of the optics of the eye, posing challenges to conventional stereo matching approaches. In this paper, multi-scale pixel feature vectors that are robust to noise are formulated using a combination of pixel intensity and gradient features in scale space. Feature vectors associated with potential correspondences are compared with a disparity-based matching score. The deep structures of the optic disc are reconstructed from a stack of disparity estimates in scale space. Optical coherence tomography (OCT) data were collected at the same time, and depth information from 3D segmentation was registered with the stereo fundus images to provide the ground truth for performance evaluation. In experiments, the proposed algorithm produces estimates of the shape of the ONH that are close to the OCT-based shape, and it shows great potential to aid computer-aided diagnosis of glaucoma and other related retinal diseases.

  2. Identification of modes of fracture in a 2618-T6 aluminum alloy using stereophotogrammetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salas Zamarripa, A., E-mail: a.salaszamarripa@gmail.com; Pinna, C.; Brown, M.W.

    2011-12-15

    The identification and quantification of the modes of fracture on fatigue fracture surfaces of a 2618-T6 aluminum alloy were developed during this research. Fatigue tests at room temperature and at high temperature (230 °C) were carried out to compare the microscopic fractographic features developed by this material under these testing conditions. Overall observations of the fracture surfaces by scanning electron microscopy (SEM) showed a mixture of transgranular and ductile intergranular fracture; the ductile intergranular contribution appears to be more significant at room temperature than at 230 °C. A quantitative methodology was developed to identify and measure the contribution of these microscopic fractographic features. The technique consisted of a combination of stereophotogrammetry and image analysis. Stereo-pairs were taken at random along the crack paths and were then analyzed using the profile module of the MeX software. The analysis involved 3-D surface reconstruction, tracing primary profile lines in both vertical and horizontal directions within the stereo-pair area, measuring the contribution of the modes of fracture in each profile, and finally calculating the average contribution in each stereo-pair. The results confirmed a higher contribution of ductile intergranular fracture at room temperature than at 230 °C. Moreover, there was no indication of a direct relationship between this contribution and the range of strain amplitudes applied during fatigue testing. Highlights: Stereophotogrammetry and image analysis as a measuring tool for modes of fracture on fatigue fracture surfaces. A mixture of ductile intergranular and transgranular fracture was identified in both room-temperature and 230 °C testing. Development of a quantitative methodology to obtain the percentage of modes of fracture within the fracture surface.

  3. Three-dimensional image display system using stereogram and holographic optical memory techniques

    NASA Astrophysics Data System (ADS)

    Kim, Cheol S.; Kim, Jung G.; Shin, Chang-Mok; Kim, Soo-Joong

    2001-09-01

    In this paper, we implemented a three-dimensional image display system using stereogram and holographic optical memory techniques, which can store many images and reconstruct them automatically. In this system, the incident angle of the reference beam must be controlled in real time to store and reconstruct stereo images, so a binary phase hologram (BPH) and a liquid crystal display (LCD) are used to control the reference beam. Input images are displayed on the LCD without polarizer/analyzer to maintain uniform beam intensities regardless of the brightness of the input images. The input images and BPHs are edited with application software to share the same scheduled recording time interval during storage. The reconstructed stereo images are acquired by capturing the output images with a CCD camera behind the analyzer, which transforms phase information into brightness information. The reference beams are obtained by Fourier transform of BPHs designed with a simulated annealing (SA) algorithm and are displayed on the LCD at 0.05-second intervals by the application software to reconstruct the stereo images. In the output plane, an LCD shutter synchronized to a monitor displays alternate left- and right-eye images for depth perception. We demonstrated an optical experiment that repeatedly stores and reconstructs four stereo images in BaTiO3 using holographic optical memory techniques.

  4. Left Panorama of Spirit's Landing Site

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a version of the first 3-D stereo image from the rover's navigation camera, showing only the view from the left stereo camera onboard the Mars Exploration Rover Spirit. The left and right camera images are combined to produce a 3-D image.

  5. SIMBIO-SYS for BepiColombo: status and issues.

    NASA Astrophysics Data System (ADS)

    Flamini, E.; Capaccioni, F.; Cremonese, G.; Palumbo, P.; Formaro, R.; Mugnuolo, R.; Debei, S.; Ficai Veltroni, I.; Dami, M.; Tommasi, L.; SIMBIO-SYS Team

    The SIMBIO-SYS (Spectrometer and Imaging for MPO BepiColombo Integrated Observatory SYStem) is a complex instrument suite that is part of the scientific payload of the Mercury Planetary Orbiter (MPO) for the BepiColombo mission, the last of the cornerstone missions of the European Space Agency (ESA) Horizon+ science programme. The BepiColombo mission is composed of two scientific satellites: the Mercury Magnetospheric Orbiter (MMO), realized by the Japanese space agency JAXA and devoted to the study of the planet's environment, and the Mercury Planetary Orbiter, realized by ESA and devoted to the detailed study of the Hermean surface and interior. SIMBIO-SYS will provide all of the science imaging capability of the BepiColombo MPO spacecraft. It consists of three channels: the STereo imaging Channel (STC), with a broad spectral band in the 400-950 nm range and medium spatial resolution (up to 50 m/px), which will provide a Digital Terrain Model of the entire surface of the planet with an accuracy better than 80 m; the High Resolution Imaging Channel (HRIC), with broad spectral bands in the 400-900 nm range and high spatial resolution (up to 5 m/px), which will provide high-resolution images of about 20% of the surface; and the Visible and near-Infrared Hyperspectral Imaging channel (VIHI), with high spectral resolution (up to 6 nm) in the 400-2000 nm range and spatial resolution up to 100 m/px, which will provide global coverage at 400 m/px with spectral information. SIMBIO-SYS will provide unprecedented high-resolution images, a Digital Terrain Model of the entire surface, and the surface composition over a wide spectral range, at resolutions and coverage higher than the MESSENGER mission and with full co-alignment of the three channels.
    The main scientific objectives can be summarized as follows: definition of the impact flux in the inner Solar System, based on the impact crater population record; understanding of the accretion model of an end member of the Solar System, based on the type and distribution of mineral species; reconstruction of the surface geology and stratigraphic history, based on the combination of stereo and high-resolution imaging with compositional information from the spectrometer; relative surface ages from impact crater population density and distribution, based on global imaging including the high-resolution mode; surface degradation processes and global resurfacing, derived from the erosional state of impact craters and ejecta; identification of volcanic landforms and styles, using morphological and compositional information; crustal dynamics and mechanical properties of the lithosphere, based on the identification and classification of tectonic structures from visible images and detailed DTMs; surface composition and crustal differentiation, based on the identification and distribution of mineral species as seen by the NIR hyperspectral imager; soil maturity and alteration processes, based on the spectral slope measured by the hyperspectral imager and the colour capabilities of the stereo camera; determination of the moment of inertia of the planet, with the high-resolution imaging channel observing pairs of surface-feature landmarks in support of the libration experiment; and surface-atmosphere interaction processes and the origin of the exosphere, since knowledge of the surface composition is crucial to unambiguously identify the source minerals for each of the constituents of Mercury's exosphere.
    The instrument was realized by Selex-ES under the contract and management of the Italian Space Agency (ASI), which signed an MoU with CNES for the development of the VIHI proximity electronics, the main electronics, and the final instrument calibration. All realization and calibration work was carried out under the scientific supervision of the SIMBIO-SYS science team. SIMBIO-SYS was delivered to ESA in April 2015 for final integration on the BepiColombo MPO spacecraft.

  6. Opportunity Surroundings on Sol 1687 Stereo

    NASA Image and Video Library

    2009-01-05

    NASA's Mars Exploration Rover Opportunity combined images into this stereo, 360-degree view of its surroundings on Oct. 22, 2008. Opportunity's position was about 300 meters southwest of Victoria. 3D glasses are necessary to view this image.

  7. Surface Location In Scene Content Analysis

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Tio, J. B. K.; McPherson, C. A.; Hwang, J. J.

    1981-12-01

    The purpose of this paper is to describe techniques and algorithms for locating planar and curved object surfaces in three dimensions using a computer vision approach. Stereo imaging techniques are demonstrated for planar object surface location using automatic segmentation, vertex location and relational table matching. For curved surfaces, locating corresponding points is very difficult. However, an example using a grid projection technique for locating the surface of a curved cup is presented to illustrate a solution. This method consists of first obtaining the perspective transformation matrices from the images, then using these matrices to compute the three-dimensional locations of the grid points on the surface. These techniques may be used in object location for such applications as missile guidance, robotics, and medical diagnosis and treatment.
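    The two-step recipe in this abstract (recover each camera's perspective transformation matrix, then compute 3D point locations from the matrices) ends in the classic linear triangulation problem. The following is a minimal sketch of that second step using the standard direct linear transform (DLT); the abstract does not give the algorithm, so the formulation and function name here are illustrative, and the 3x4 projection matrices are assumed already known.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its projections
    (x1, x2) in two views with known 3x4 projection matrices P1, P2."""
    # Each image observation contributes two homogeneous linear equations.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize
```

Applied to every grid point projected onto the curved surface, this yields the 3D surface samples the paper describes.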

  8. An embedded multi-core parallel model for real-time stereo imaging

    NASA Astrophysics Data System (ADS)

    He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu

    2018-04-01

    Real-time processing on embedded systems will enhance the applicability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late compared with that for PC platforms. In this paper, a parallel model for stereo imaging on an embedded multi-core processing platform is studied and verified. After analyzing the computational load, throughput capacity and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test-flight data on the 8-core DSP processor TMS320C6678. The results indicate that the design distributed the workload well and achieved a speed-up ratio of up to 6.4.
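    The shape of a two-stage pipeline connected by message transmission can be illustrated with ordinary threads and a bounded queue. This is only a sketch of the idea in Python, not the paper's DSP implementation; the stage functions and queue depth are placeholders.

```python
import queue
import threading

def two_stage_pipeline(blocks, stage1, stage2, depth=4):
    """Run stage1 and stage2 concurrently on a stream of data blocks,
    connected by a bounded message queue (a stand-in for the message
    transmission between processor cores)."""
    q, out = queue.Queue(maxsize=depth), []

    def producer():
        for b in blocks:
            q.put(stage1(b))   # stage 1 works ahead while stage 2 consumes
        q.put(None)            # end-of-stream marker

    t = threading.Thread(target=producer)
    t.start()
    while (item := q.get()) is not None:
        out.append(stage2(item))   # stage 2 runs in parallel with stage 1
    t.join()
    return out
```

The bounded queue implements the buffering requirement: stage 1 blocks when the queue is full, so memory use stays fixed regardless of the data volume.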

  9. Stereo-Video Data Reduction of Wake Vortices and Trailing Aircrafts

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel

    1998-01-01

    This report presents stereo image theory and the corresponding image processing software developed to analyze stereo imaging data acquired for the wake-vortex hazard flight experiment conducted at NASA Langley Research Center. In this experiment, a leading Lockheed C-130 was equipped with wing-tip smokers to visualize its wing vortices, while a trailing Boeing 737 flew into the wake vortices of the leading airplane. A Rockwell OV-10A airplane, fitted with video cameras under its wings, flew at 400 to 1000 feet above and parallel to the wakes, and photographed the wake interception process for the purpose of determining the three-dimensional location of the trailing aircraft relative to the wake. The report establishes the image-processing tools developed to analyze the video flight-test data, identifies sources of potential inaccuracies, and assesses the quality of the resultant set of stereo data reduction.

  10. Investigation of 1 : 1,000 Scale Map Generation by Stereo Plotting Using UAV Images

    NASA Astrophysics Data System (ADS)

    Rhee, S.; Kim, T.

    2017-08-01

    Large scale maps and image mosaics are representative geospatial data that can be extracted from UAV images. Map drawing using UAV images can be performed either by creating orthoimages and digitizing them, or by stereo plotting. While maps generated by digitization may serve the need for geospatial data, many institutions and organizations require map drawing using stereoscopic vision on stereo plotting systems. However, there are several aspects to be checked before UAV images can be utilized for stereo plotting. The first aspect is the accuracy of exterior orientation parameters (EOPs) generated through automated bundle adjustment processes. It is well known that the GPS and IMU sensors mounted on a UAV are not very accurate, so the initial EOPs must be adjusted accurately using tie points. For this purpose, we have developed a photogrammetric incremental bundle adjustment procedure. The second aspect is unstable shooting conditions compared to conventional aerial photography. Unstable image acquisition may produce uneven stereo coverage, which eventually results in accuracy loss, and oblique stereo pairs create eye fatigue. The third aspect is the small coverage of UAV images, which raises efficiency issues for stereo plotting and, more importantly, makes contour generation from UAV images very difficult. This paper discusses effects related to these three aspects. In this study, we tried to generate a 1 : 1,000 scale map from the dataset using EOPs generated by software developed in-house. We evaluated the Y-disparity of the tie points extracted automatically through the photogrammetric incremental bundle adjustment process and could confirm that stereoscopic viewing is possible. Stereoscopic plotting work was carried out by a professional photogrammetrist.
    In order to analyse the accuracy of map drawing using stereoscopic vision, we compared the horizontal and vertical position differences between adjacent models after drawing a specific model. The analysis showed that the errors were within the specification for a 1 : 1,000 map. Although the Y-parallax can be eliminated, the accuracy of the absolute ground position error still needs to be improved before this technique can be applied to actual work; in a few models the height difference between adjacent models is about 40 cm. We analysed the stability of the UAV images by checking angle differences between adjacent images, analysed the average area covered by one stereo model, and discussed the possible difficulty associated with this narrow coverage. In future work we will consider how to reduce position errors and improve map drawing performance from UAVs.

  11. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    PubMed

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-08

    We developed multiocular 1/3-inch, 2.75-μm-pixel-size, 2.1-Mpixel image sensors by co-design of an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field: the on-chip beam-splitter horizontally divides rays according to incident angle, and the inner meta-micro-lens collects the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images is as low as 6% for a fabricated binocular image sensor and 7% for a quad-ocular image sensor. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.

  12. The Effect of Incidence Angle on Stereo DTM Quality: Simulations in Support of Europa Clipper

    NASA Astrophysics Data System (ADS)

    Kirk, R. L.; Howington-Kraus, E.; Hare, T. M.; Jorda, L.

    2014-12-01

    Many quality factors for digital topographic models (DTMs) from stereo imaging can be predicted geometrically. For example, pixel scale is related to instantaneous field of view and to range. DTM resolution can be no better than a few times this pixel scale. Even vertical precision is a known function of the pixel scale and convergence angle, provided the image quality is high enough that automated image matching reaches its optimal precision (~0.2 pixel). The influence of incidence angle is harder to predict. Reduced quality is expected both at low incidence (where topographic shading disappears) and high incidence (where signal/noise ratio is low and shadows occur). This problem is of general interest, but especially critical for the Europa Clipper mission profile. Clipper would obtain a radar sounding profile on each Europa flyby. Stereo images collected simultaneously would be used to produce a DTM needed to distinguish off-nadir surface echoes (clutter) from subsurface features. The question is, how much of this DTM strip will be useful, given that incidence angle will vary substantially? We are using simulations to answer this question. We produced a 210 m/post DTM of the Castalia Macula region of Europa from 6 Galileo images by photoclinometry. A low-incidence image was used to correct for albedo variations before photoclinometry. We are using the image simulation software OASIS to generate synthetic stereopairs of the region at a full range of incidence angles. These images will be realistic in terms of image resolution, noise, photometry including albedo variations (based on the low-incidence image), and cast shadows. The pairs will then be analyzed with the commercial stereomapping software SOCET SET (® BAE Systems), which we have used for a wide variety of planetary mapping projects.
Comparing the stereo-derived DTMs to the input ("truth") DTM will allow us to quantify the dependence of true DTM resolution and vertical precision on illumination, and to document the qualitative ways that DTMs degrade at high and low incidence angles. This methodology is immediately applicable to other planetary targets, and in particular can be used to address how much difference in illumination can be tolerated in stereopairs that are not (as for Clipper) acquired simultaneously.
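    The geometric relationship the abstract invokes, vertical precision as a function of pixel scale, convergence angle, and ~0.2-pixel matching precision, is commonly written as EP ≈ ρ · GSD / tan(c). The sketch below encodes that rule of thumb; note that exact conventions vary between authors (e.g. using the base-to-height ratio or 2·tan(c/2) in the denominator), and the abstract does not commit to one.

```python
import math

def expected_vertical_precision(gsd_m, convergence_deg, match_px=0.2):
    """First-order expected vertical precision (EP, in meters) of a
    stereo-derived DTM: image-matching uncertainty (in pixels) times the
    ground pixel scale, divided by the parallax-to-height factor tan(c).
    The 0.2-pixel default is the optimal matching precision cited above."""
    return match_px * gsd_m / math.tan(math.radians(convergence_deg))
```

For example, a 1 m/px pair with 45° convergence gives an expected vertical precision of about 0.2 m; halving the convergence angle roughly doubles the vertical uncertainty.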

  13. STEREO's View

    NASA Image and Video Library

    2017-12-08

    STEREO witnessed the March 5, 2013, CME from the side of the sun – Earth is far to the left of this picture. While the SOHO images show a halo CME, STEREO shows the CME clearly moving away from Earth. Credit: NASA/STEREO --- CME WEEK: What To See in CME Images Two main types of explosions occur on the sun: solar flares and coronal mass ejections. Unlike the energy and x-rays produced in a solar flare – which can reach Earth at the speed of light in eight minutes – coronal mass ejections are giant, expanding clouds of solar material that take one to three days to reach Earth. Once at Earth, these ejections, also called CMEs, can impact satellites in space or interfere with radio communications. During CME WEEK from Sept. 22 to 26, 2014, we explore different aspects of these giant eruptions that surge out from the star we live with. When a coronal mass ejection blasts off the sun, scientists rely on instruments called coronagraphs to track their progress. Coronagraphs block out the bright light of the sun, so that the much fainter material in the solar atmosphere -- including CMEs -- can be seen in the surrounding space. CMEs appear in these images as expanding shells of material from the sun's atmosphere -- sometimes a core of colder, solar material (called a filament) from near the sun's surface moves in the center. But mapping out such three-dimensional components from a two-dimensional image isn't easy. Watch the slideshow to find out how scientists interpret what they see in CME pictures. The images in the slideshow are from the three sets of coronagraphs NASA currently has in space. One is on the joint European Space Agency and NASA Solar and Heliospheric Observatory, or SOHO. SOHO launched in 1995, and sits between Earth and the sun about a million miles away from Earth. The other two coronagraphs are on the two spacecraft of the NASA Solar Terrestrial Relations Observatory, or STEREO, mission, which launched in 2006. 
The two STEREO spacecraft are both currently viewing the far side of the sun. Together these instruments help scientists create a three-dimensional model of any CME as its journey unfolds through interplanetary space. Such information can show why a given characteristic of a CME close to the sun might lead to a given effect near Earth, or any other planet in the solar system.

  14. Towards surface analysis on diabetic feet soles to predict ulcerations using photometric stereo

    NASA Astrophysics Data System (ADS)

    Liu, Chanjuan; van der Heijden, Ferdi; van Netten, Jaap J.

    2012-03-01

    Diabetic foot ulceration is a major complication for patients with diabetes mellitus. Approximately 15% to 25% of patients with Type I and Type II diabetes eventually develop foot ulcers. If not adequately treated, these ulcers may lead to foot infection, and ultimately to total (or partial) lower-extremity amputation, which means a great loss in health-related quality of life. Foot ulcers may be prevented by early identification and subsequent treatment of pre-signs of ulceration, such as callus formation, redness, fissures, and blisters. Therefore, frequent examination of the feet is necessary, preferably on a daily basis. However, self-examination is difficult or impossible due to the consequences of the diabetes, and frequent examination by health care professionals is costly and not feasible. The objective of our project is to develop an intelligent telemedicine monitoring system that can be deployed in the patients' home environment for frequent examination of the patients' feet, to detect pre-signs of ulceration in time. The current paper reports the preliminary results of an implementation of a photometric stereo imaging system to detect 3D geometric abnormalities of the skin surface of foot soles. Using a flexible experimental setup, system parameters such as the number and positions of the illuminators have been selected so as to optimize performance with respect to the reconstructed surface. The system has been applied to a dummy foot sole. Finally, the curvature of the resulting 3D topography of the foot sole is computed to show the feasibility of detecting pre-signs of ulceration using photometric stereo imaging. The obtained results indicate the clinical potential of this technology for detecting pre-signs of ulceration on diabetic foot soles.
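    The photometric stereo reconstruction that this system builds on reduces, in its classical Lambertian form, to a per-pixel least-squares solve once the illuminator directions are known. A minimal sketch follows; this is the textbook method, not the paper's implementation, and the function name and simple reflectance assumptions are ours.

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Estimate per-pixel surface normals and albedo from k >= 3 images
    taken under known, distant light directions (Lambertian model).

    intensities : (k, h, w) stack of grayscale images
    light_dirs  : (k, 3) unit light-direction vectors
    Returns (h, w, 3) unit normals and an (h, w) albedo map."""
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)                # one column per pixel
    # Solve L @ g = I in least squares; g = albedo * normal at each pixel.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, n)
    albedo = np.linalg.norm(g, axis=0)
    normals = np.where(albedo > 0, g / np.maximum(albedo, 1e-12), 0.0)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

With the normal field in hand, surface curvature (the quantity examined on the foot-sole topography) can be derived from the gradients of the integrated depth map.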

  15. Photogrammetric Processing Using ZY-3 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Kornus, W.; Magariños, A.; Pla, M.; Soler, E.; Perez, F.

    2015-03-01

    This paper evaluates the stereoscopic capabilities of the Chinese sensor ZiYuan-3 (ZY-3) for the generation of photogrammetric products. The satellite was launched on January 9, 2012 and carries three high-resolution panchromatic cameras viewing in the forward (22°), nadir (0°) and backward (-22°) directions, plus an infrared multi-spectral scanner (IRMSS) that looks slightly forward (6°). The ground sampling distance (GSD) is 2.1 m for the nadir image, 3.5 m for the two oblique stereo images and 5.8 m for the multispectral image. The evaluated ZY-3 imagery consists of a full threefold-stereo set and a multi-spectral image covering an area of ca. 50 km x 50 km north-west of Barcelona, Spain. The complete photogrammetric processing chain was executed, including image orientation, generation of a digital surface model (DSM), radiometric image correction, pansharpening, orthoimage generation and digital stereo plotting. All four images are oriented by estimating affine transformation parameters between the observed and nominal RPC (rational polynomial coefficients) image positions of 17 ground control points (GCPs) and subsequently calculating refined RPCs. From 10 independent check points, RMS errors of 2.2 m, 2.0 m and 2.7 m in X, Y and H are obtained. Subsequently, a DSM with 5 m grid spacing is generated fully automatically. A comparison with Lidar data yields an overall DSM accuracy of approximately 3 m; in moderate and flat terrain, accuracies on the order of 2.5 m and better are achieved. In a next step, orthoimages from the high-resolution nadir image and the multispectral image are generated using the refined RPC geometry and the DSM. After radiometric corrections, a fused high-resolution colour orthoimage with 2.1 m pixel size is created using an adaptive HSL method. The pansharpening is performed after the individual geocorrection because of the different viewing angles of the two images.
    In a detailed analysis of the colour orthoimage, artifacts are detected covering an area of 4691 ha, corresponding to less than 2% of the imaged area. Most of the artifacts are caused by clouds (4614 ha); a minor part (77 ha) is affected by colour patches, striping or blooming effects. For the final qualitative analysis of the usability of ZY-3 imagery for stereo plotting, stereo combinations of the nadir image with an oblique image are discarded, mainly because of the different pixel sizes, which cause difficulties in stereoscopic vision and poor accuracy in positioning and measuring. With the two oblique images, a level of detail equivalent to 1:25,000 scale is achieved for the transport network, hydrography, vegetation and terrain-modelling elements such as break lines. For settlements, including buildings and other constructions, a lower level of detail equivalent to 1:50,000 scale is achieved.
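    The orientation step described in this abstract, estimating affine transformation parameters between observed and nominal RPC image positions of the GCPs, amounts to a small least-squares fit. The sketch below assumes a plain 6-parameter affine correction in image space, which is the common bias-compensation formulation; the function names are illustrative and the paper may differ in detail.

```python
import numpy as np

def fit_rpc_bias_affine(nominal_xy, observed_xy):
    """Estimate the 6-parameter affine correction mapping image
    coordinates predicted by the vendor RPCs onto the coordinates
    observed at the ground control points (RPC bias compensation).

    nominal_xy, observed_xy : (n, 2) arrays, n >= 3 GCPs.
    Returns a (2, 3) matrix A such that observed ~ A @ [x, y, 1]."""
    n = nominal_xy.shape[0]
    X = np.hstack([nominal_xy, np.ones((n, 1))])          # (n, 3) design matrix
    A, *_ = np.linalg.lstsq(X, observed_xy, rcond=None)   # (3, 2) solution
    return A.T

def apply_affine(A, xy):
    """Apply a (2, 3) affine correction to (n, 2) image coordinates."""
    return (A @ np.hstack([xy, np.ones((xy.shape[0], 1))]).T).T
```

The fitted correction is then folded back into the RPCs ("refined RPC") so downstream tools can use the standard RPC model unchanged.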

  16. Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image

    NASA Astrophysics Data System (ADS)

    Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren

    2012-01-01

    The pose (position and attitude) and velocity of in-flight projectiles have a major influence on performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses only one linear-array image collected by a stereo vision system combining a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired using a model-based optimization algorithm, which achieves optimal estimation of the parameters by matching the stereo projection of the projectile with that of a same-size 3D model. The speed and the angle of attack (AOA) can then be determined. Experiments are made to test the proposed method.

  17. Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework

    NASA Astrophysics Data System (ADS)

    Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy

    2014-09-01

    We propose a hybrid method for stereo disparity estimation that combines block-based and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18% of the image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected-components analysis, and then determining the boundaries' disparities using a sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundaries' disparities. We consider an application of our method to depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to de-focus users' non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross-correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).
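    The SAD cost function at the core of the matching step is easy to state concretely: for a pixel in the left image, slide a small window along the same row of the (rectified) right image and keep the shift with the lowest sum of absolute differences. A minimal single-pixel sketch follows; it is illustrative only, since the paper applies this at segment boundaries rather than at every pixel.

```python
import numpy as np

def sad_disparity(left, right, x, y, block=3, max_disp=16):
    """Disparity at (x, y) in the left image by minimizing the sum of
    absolute differences (SAD) over a block x block window, scanning
    along the same row of the right image (rectified-stereo assumption)."""
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.int32)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        if x - d - h < 0:          # candidate window would leave the image
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(np.int32)
        cost = int(np.abs(ref - cand).sum())
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

In the paper's pipeline, disparities computed this way at segment boundaries seed the reconstruction of the full dense map.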

  18. Design and Implementation of a Novel Portable 360° Stereo Camera System with Low-Cost Action Cameras

    NASA Astrophysics Data System (ADS)

    Holdener, D.; Nebiker, S.; Blaser, S.

    2017-11-01

    The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be processed into georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D-printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data yields maximum deviations of 3 cm over typical indoor distances of 2-8 m. The automatic computation of coloured point clouds from the stereo pairs is also demonstrated.

  19. 3D digital image correlation using single color camera pseudo-stereo system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single-color-camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single-camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system captures both views using the whole CCD chip, without reducing the spatial resolution. In addition, as in a conventional 3D-DIC system, the center of the two views lies at the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two overlapped views on the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be used directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
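
    The color-domain separation of the two overlapped views can be sketched as follows (a minimal illustration; the channel assignment and the linear crosstalk matrix are assumptions, and the paper's own crosstalk correction method is not reproduced here):

```python
import numpy as np

def split_pseudo_stereo(frame, crosstalk=((1.0, 0.05), (0.05, 1.0))):
    """Separate a color frame from a two-path pseudo-stereo system into
    red- and blue-channel sub-images, undoing linear channel crosstalk.

    frame: H x W x 3 RGB array. crosstalk[i][j] is the fraction of view
    j's light observed in channel i (illustrative values, not measured).
    """
    # Stack the red and blue channels as the observed 2-vector per pixel
    observed = np.stack([frame[..., 0], frame[..., 2]], axis=-1).astype(float)
    # Invert the 2x2 mixing matrix and apply it per pixel
    unmix = np.linalg.inv(np.asarray(crosstalk))
    separated = observed @ unmix.T
    return separated[..., 0], separated[..., 1]  # red view, blue view
```

    Each returned sub-image would then be fed to a regular stereo-DIC evaluation as one view of the pair.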

  20. Burns Cliff in Color Stereo

    NASA Image and Video Library

    2006-07-10

    NASA Mars Exploration Rover Opportunity captured a sweeping stereo image of Burns Cliff after driving right to the base of this southeastern portion of the inner wall of Endurance Crater in November 2004. 3D glasses are necessary to view this image.

  1. Compiling Mercury relief map using several data sources

    NASA Astrophysics Data System (ADS)

    Zakharova, Maria; Lazarev, Evgeniy

    2015-04-01

    Several Mercury topography datasets exist, obtained by processing materials collected by two spacecraft, Mariner 10 and MESSENGER, during their Mercury flybys. The history of visual mapping of Mercury is recent, as the first significant observations were made during the latter half of the 20th century; today, the only data with 100% coverage of the entire surface of Mercury is the global mosaic composed of images acquired by MESSENGER. The Mercury relief map has been created from four different types of data: - a global mosaic with 100% coverage of Mercury's surface, created using MESSENGER orbital images (30% of the final map); - digital terrain models obtained by processing stereo images from the Mariner 10 flybys (10% of the map) (Cook and Robinson, 2000); - digital terrain models obtained from images acquired during the MESSENGER flybys (20% of the map) (Preusker et al., 2011); - data sets produced by the MESSENGER Mercury Laser Altimeter (MLA) (40% of the map). The main objective of this work is to collect, combine, and process the existing data, then merge them correctly into a single map. The final map is created in the Lambert azimuthal equal-area projection and mainly shows the hypsometric features of the planet. It represents two hemispheres, western and eastern. To avoid dividing data sources, the eastern hemisphere spans 50 degrees east longitude to 130 degrees west longitude, and the western hemisphere spans 130 degrees west longitude to 50 degrees east longitude. References: Global mosaics of Mercury's surface. Available mosaics include one created prior to MESSENGER's orbital operations and high-resolution versions that use MESSENGER's orbital images available in NASA's Planetary Data System (PDS) (http://messenger.jhuapl.edu/the_mission/mosaics.html). 
Cook, A.C., Robinson, M.S., 2000. Mariner 10 stereo image coverage of Mercury. J. Geophys. Res. 105, 9429-9443. Preusker, F., Oberst, J., Head, J.W., Watters, T.R., Robinson, M.S., Zuber, M.T., Solomon, S.C., 2011. Stereo topographic models of Mercury after three MESSENGER flybys. Planetary and Space Science 59, 1910-1917. The MLA is a time-of-flight laser rangefinder that uses direct detection and pulse-edge timing to determine precisely the range from the MESSENGER spacecraft to Mercury's surface (http://pds-geosciences.wustl.edu/missions/messenger/mla.htm).

  2. Modeling vegetation heights from high resolution stereo aerial photography: an application for broad-scale rangeland monitoring.

    PubMed

    Gillan, Jeffrey K; Karl, Jason W; Duniway, Michael; Elaksher, Ahmed

    2014-11-01

    Vertical vegetation structure in rangeland ecosystems can be a valuable indicator for assessing rangeland health and monitoring riparian areas, post-fire recovery, available forage for livestock, and wildlife habitat. Federal land management agencies are directed to monitor and manage rangelands at landscape scales, but traditional field methods for measuring vegetation heights are often too costly and time-consuming to apply at these broad scales. Most emerging remote sensing techniques capable of measuring surface and vegetation height (e.g., LiDAR or synthetic aperture radar) are often too expensive and require specialized sensors. An alternative remote sensing approach that is potentially more practical for managers is to measure vegetation heights from digital stereo aerial photographs. As aerial photography is already commonly used for rangeland monitoring, acquiring it in stereo enables three-dimensional modeling and estimation of vegetation height. The purpose of this study was to test the feasibility and accuracy of estimating shrub heights from high-resolution (HR, 3-cm ground sampling distance) digital stereo-pair aerial images. Overlapping HR imagery was taken in March 2009 near Lake Mead, Nevada, and 5-cm resolution digital surface models (DSMs) were created by photogrammetric methods (aerial triangulation, digital image matching) for twenty-six test plots. We compared the heights of individual shrubs and plot averages derived from the DSMs to field measurements. We found strong positive correlations between field and image measurements for several metrics. Individual shrub heights tended to be underestimated in the imagery; however, accuracy was higher for dense, compact shrubs than for shrubs with thin branches. Plot averages of shrub height from DSMs were also strongly correlated with field measurements but consistently underestimated. Grasses and forbs were generally too small to be detected with the resolution of the DSMs. Estimates of vertical structure will be more accurate in plots having low herbaceous cover and high amounts of dense shrubs. Through the use of statistically derived correction factors, or by choosing field methods that better correlate with the imagery, vegetation heights from HR DSMs could become a valuable tool for broad-scale rangeland monitoring. Copyright © 2014 Elsevier Ltd. All rights reserved.
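
    A statistically derived correction factor of the kind mentioned above could be as simple as a linear fit of field-measured heights against DSM-derived heights (the study does not specify a particular model; this least-squares sketch is illustrative):

```python
import numpy as np

def height_correction(dsm_heights, field_heights):
    """Fit a linear correction for the systematic underestimation of
    shrub heights in DSMs. Returns (slope, intercept) such that
    corrected_height = slope * dsm_height + intercept.
    """
    slope, intercept = np.polyfit(dsm_heights, field_heights, deg=1)
    return slope, intercept
```

    The fitted coefficients would then be applied to new DSM-derived heights to compensate for the consistent underestimation reported in the study.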

  3. Modeling vegetation heights from high resolution stereo aerial photography: an application for broad-scale rangeland monitoring

    USGS Publications Warehouse

    Gillan, Jeffrey K.; Karl, Jason W.; Duniway, Michael; Elaksher, Ahmed

    2014-01-01

    Vertical vegetation structure in rangeland ecosystems can be a valuable indicator for assessing rangeland health and monitoring riparian areas, post-fire recovery, available forage for livestock, and wildlife habitat. Federal land management agencies are directed to monitor and manage rangelands at landscape scales, but traditional field methods for measuring vegetation heights are often too costly and time-consuming to apply at these broad scales. Most emerging remote sensing techniques capable of measuring surface and vegetation height (e.g., LiDAR or synthetic aperture radar) are often too expensive and require specialized sensors. An alternative remote sensing approach that is potentially more practical for managers is to measure vegetation heights from digital stereo aerial photographs. As aerial photography is already commonly used for rangeland monitoring, acquiring it in stereo enables three-dimensional modeling and estimation of vegetation height. The purpose of this study was to test the feasibility and accuracy of estimating shrub heights from high-resolution (HR, 3-cm ground sampling distance) digital stereo-pair aerial images. Overlapping HR imagery was taken in March 2009 near Lake Mead, Nevada, and 5-cm resolution digital surface models (DSMs) were created by photogrammetric methods (aerial triangulation, digital image matching) for twenty-six test plots. We compared the heights of individual shrubs and plot averages derived from the DSMs to field measurements. We found strong positive correlations between field and image measurements for several metrics. Individual shrub heights tended to be underestimated in the imagery; however, accuracy was higher for dense, compact shrubs than for shrubs with thin branches. Plot averages of shrub height from DSMs were also strongly correlated with field measurements but consistently underestimated. Grasses and forbs were generally too small to be detected with the resolution of the DSMs. Estimates of vertical structure will be more accurate in plots having low herbaceous cover and high amounts of dense shrubs. Through the use of statistically derived correction factors, or by choosing field methods that better correlate with the imagery, vegetation heights from HR DSMs could become a valuable tool for broad-scale rangeland monitoring.

  4. Picking up Clues from the Discard Pile (Stereo)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    As NASA's Phoenix Mars Lander excavates trenches, it also builds piles with most of the material scooped from the holes. The piles, like this one called 'Caterpillar,' provide researchers some information about the soil.

    On Aug. 24, 2008, during the late afternoon of the 88th Martian day after landing, Phoenix's Surface Stereo Imager took separate exposures through its left eye and right eye that have been combined into this stereo view. The image appears three dimensional when seen through red-blue glasses.

    This conical pile of soil is about 10 centimeters (4 inches) tall. The sources of material that the robotic arm has dropped onto the Caterpillar pile have included the 'Dodo' and 'Upper Cupboard' trenches and, more recently, the deeper 'Stone Soup' trench.

    Observations of the pile provide information, such as the slope of the cone and the textures of the soil, that helps scientists understand properties of material excavated from the trenches.

    For the Stone Soup trench in particular, which is about 18 centimeters (7 inches) deep, the bottom of the trench is in shadow and more difficult to observe than other trenches that Phoenix has dug. The Phoenix team obtained spectral clues about the composition of material from the bottom of Stone Soup by photographing Caterpillar through 15 different filters of the Surface Stereo Imager when the pile was covered in freshly excavated material from the trench.

    The spectral observation did not produce any sign of water-ice, just typical soil for the site. However, the bigger clumps do show a platy texture that could be consistent with elevated concentration of salts in the soil from deep in Stone Soup. The team chose that location as the source for a soil sample to be analyzed in the lander's wet chemistry laboratory, which can identify soluble salts in the soil.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  5. Left Limb of North Pole of the Sun, March 20, 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    [Figures removed for brevity; see original site. Figure 1: left-eye view of a stereo pair. Figure 2: right-eye view of a stereo pair.] Figure 1: This image was taken by the SECCHI Extreme UltraViolet Imager (EUVI) mounted on the STEREO-B spacecraft. STEREO-B is located behind the Earth and follows the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual left eye in space. Figure 2: This image was taken by the SECCHI Extreme UltraViolet Imager (EUVI) mounted on the STEREO-A spacecraft. STEREO-A is located ahead of the Earth and leads the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual right eye in space.

    NASA's Solar TErrestrial RElations Observatory (STEREO) satellites have provided the first three-dimensional images of the Sun. For the first time, scientists will be able to see structures in the Sun's atmosphere in three dimensions. The new view will greatly aid scientists' ability to understand solar physics and thereby improve space weather forecasting.

    The EUVI imager is sensitive to wavelengths of light in the extreme ultraviolet portion of the spectrum. EUVI bands at wavelengths of 304, 171 and 195 Angstroms have been mapped to the red, blue, and green visible portions of the spectrum and processed to emphasize the temperature differences of the solar material.

    STEREO, a two-year mission launched in October 2006, will provide a unique and revolutionary view of the Sun-Earth system. The two nearly identical observatories -- one ahead of Earth in its orbit, the other trailing behind -- will trace the flow of energy and matter from the Sun to Earth. They will reveal the 3D structure of coronal mass ejections (violent eruptions of matter from the Sun that can disrupt satellites and power grids) and help us understand why they happen. STEREO will become a key addition to the fleet of space weather detection satellites by providing more accurate alerts for the arrival time of Earth-directed solar ejections with its unique side-viewing perspective.

    STEREO is the third mission in NASA's Solar Terrestrial Probes program within NASA's Science Mission Directorate, Washington. The Goddard Science and Exploration Directorate manages the mission, instruments, and science center. The Johns Hopkins University Applied Physics Laboratory, Laurel, Md., designed and built the spacecraft and is responsible for mission operations. The imaging and particle detecting instruments were designed and built by scientific institutions in the U.S., UK, France, Germany, Belgium, Netherlands, and Switzerland. JPL is a division of the California Institute of Technology in Pasadena.

  6. An overview of the stereo correlation and triangulation formulations used in DICe.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Daniel Z.

    This document provides a detailed overview of the stereo correlation algorithm and the triangulation formulation used in the Digital Image Correlation Engine (DICe) to triangulate three-dimensional motion in space, given image coordinates and camera calibration parameters.
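
    The triangulation step can be illustrated with the textbook linear (DLT) two-view formulation, which recovers a 3-D point from its image coordinates and the 3x4 camera projection matrices (a sketch of the general technique, not necessarily DICe's exact formulation):

```python
import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    """Linear (DLT) two-view triangulation.

    P_left, P_right: 3x4 camera projection matrices.
    x_left, x_right: (x, y) image coordinates of the same scene point.
    Returns the 3-D point minimizing the algebraic reprojection error.
    """
    def rows(P, xy):
        x, y = xy
        # Each observation contributes two homogeneous linear constraints
        return [x * P[2] - P[0], y * P[2] - P[1]]
    A = np.array(rows(P_left, x_left) + rows(P_right, x_right))
    # Least-squares solution: right singular vector for the smallest
    # singular value of A, then de-homogenize
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

    With noisy correlated image coordinates, the same solve gives the least-squares 3-D motion estimate per tracked subset.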

  7. Optical stereo video signal processor

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    An optical video signal processor is described which produces a two-dimensional cross-correlation, in real time, of images received by a stereo camera system. The optical image of each camera is projected onto a respective liquid crystal light valve. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.

  8. Oil Fire Plumes Over Baghdad

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Dark smoke from oil fires extends for about 60 kilometers south of Iraq's capital city of Baghdad in these images, acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 2, 2003. The thick, almost black smoke is apparent near image center and contains chemical and particulate components hazardous to human health and the environment.

    The top panel is from MISR's vertical-viewing (nadir) camera. Vegetated areas appear red here because this display is constructed using near-infrared, red and blue band data, displayed as red, green and blue, respectively, to produce a false-color image. The bottom panel is a combination of two camera views of the same area and is a 3-D stereo anaglyph in which red band nadir camera data are displayed as red, and red band data from the 60-degree backward-viewing camera are displayed as green and blue. Both panels are oriented with north to the left in order to facilitate stereo viewing. Viewing the 3-D anaglyph with red/blue glasses (with the red filter placed over the left eye and the blue filter over the right) makes it possible to see the rising smoke against the surface terrain. This technique helps to distinguish features in the atmosphere from those on the surface. In addition to the smoke, several high, thin cirrus clouds (barely visible in the nadir view) are readily observed using the stereo image.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17489. The panels cover an area of about 187 kilometers x 123 kilometers, and use data from blocks 63 to 65 within World Reference System-2 path 168.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory,Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  9. MESUR Pathfinder Science Investigations

    NASA Technical Reports Server (NTRS)

    Golombek, M.

    1993-01-01

    The MESUR (Mars Environmental Survey) Pathfinder mission is the first Discovery mission, planned for launch in 1996. MESUR Pathfinder is designed as an engineering demonstration of the entry, descent, and landing approach to be employed by the follow-on MESUR Network mission, which will land on the order of 10 small stations on the surface of Mars to investigate interior, atmospheric, and surface properties. Pathfinder is a small Mars lander, equipped with a microrover to deploy instruments and explore the local landing site. Instruments selected for Pathfinder include a surface imager on a 1 m pop-up mast (stereo with spectral filters), an atmospheric structure instrument/surface meteorology package, and an alpha proton x-ray spectrometer. The microrover will carry the alpha proton x-ray spectrometer to a number of different rocks and surface materials and provide close-up imaging...

  10. The Use of Sun Elevation Angle for Stereogrammetric Boreal Forest Height in Open Canopies

    NASA Technical Reports Server (NTRS)

    Montesano, Paul M.; Neigh, Christopher; Sun, Guoqing; Duncanson, Laura Innice; Van Den Hoek, Jamon; Ranson, Kenneth Jon

    2017-01-01

    Stereogrammetry applied to globally available high-resolution spaceborne imagery (HRSI; less than 5 m spatial resolution) yields fine-scaled digital surface models (DSMs) of elevation. These DSMs may represent elevations ranging from the ground to the vegetation canopy surface, are produced from stereoscopic image pairs (stereo pairs) with a variety of acquisition characteristics, and have been coupled with lidar data of forest structure and ground surface elevation to examine forest height. This work explores surface elevations from HRSI DSMs derived from two types of acquisitions in open canopy forests. We (1) apply an automated mass-production stereogrammetry workflow to along-track HRSI stereo pairs, (2) identify multiple spatially coincident DSMs whose stereo pairs were acquired under different solar geometry, (3) vertically co-register these DSMs using coincident spaceborne lidar footprints (from ICESat-GLAS) as reference, and (4) examine differences in surface elevations between the reference lidar and the co-registered HRSI DSMs associated with two general types of acquisitions (DSM types) from different sun elevation angles. We find that these DSM types, distinguished by sun elevation angle at the time of stereo pair acquisition, are associated with different surface elevations estimated from automated stereogrammetry in open canopy forests. For DSM values with corresponding reference ground surface elevation from spaceborne lidar footprints in open canopy northern Siberian Larix forests with slopes of less than 10 degrees, our results show that HRSI DSMs acquired with sun elevation angles greater than 35 degrees and less than 25 degrees (during snow-free conditions) produced characteristic and consistently distinct distributions of elevation differences from the reference lidar. The former include DSMs of near-ground surfaces with root mean square errors of less than 0.68 m relative to lidar. The latter, particularly those with angles less than 10 degrees, show distributions with larger differences from lidar that are associated with open canopy forests whose vegetation surface elevations are captured. Terrain aspect did not have a strong effect on the distribution of vegetation surfaces. Using the two DSM types together, the distribution of DSM-differenced heights in forests (6.0 m, sigma = 1.4 m) was consistent with the distribution of plot-level mean tree heights (6.5 m, sigma = 1.2 m). We conclude that variation in sun elevation angle at the time of stereo pair acquisition can create illumination conditions conducive to capturing elevations of surfaces either near the ground or associated with the vegetation canopy. Knowledge of HRSI acquisition solar geometry and snow cover can be used to understand and combine stereogrammetric surface elevation estimates to co-register and difference overlapping DSMs, providing a means to map forest height at fine scales, resolving the vertical structure of groups of trees from spaceborne platforms in open canopy forests.
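
    The DSM-differencing idea described above can be sketched as a per-pixel subtraction of co-registered DSMs, treating the low-sun-angle DSM as the canopy surface and the high-sun-angle DSM as the near-ground surface (an illustrative reading of the paper's finding, not its processing chain):

```python
import numpy as np

def dsm_difference_height(dsm_low_sun, dsm_high_sun):
    """Estimate canopy height by differencing co-registered DSMs: the
    low-sun-angle DSM tracks the vegetation canopy surface, while the
    high-sun-angle DSM tracks the near-ground surface. Negative
    differences (bare ground, noise) are clipped to zero.
    """
    return np.clip(dsm_low_sun - dsm_high_sun, 0.0, None)
```

    In practice both DSMs would first be vertically co-registered to a common reference, e.g. the lidar footprints used in the study.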

  11. Stereo-PIV study of flow inside an eye under cataract surgery

    NASA Astrophysics Data System (ADS)

    Sakakibara, Jun; Yamashita, Masaki; Kobayashi, Tatsuya; Kaji, Yuichi; Oshika, Tetsuro

    2012-04-01

    We measured velocity distributions in the anterior chamber of porcine eyes under simulated cataract surgery using stereoscopic particle image velocimetry (stereo-PIV). The surface of the cornea was detected based on images of laser-induced fluorescent light emitted from fluorescent dye solution introduced into a posterior chamber. A coaxial phacoemulsification procedure was simulated with standard-size (standard coaxial phacoemulsification) and smaller (micro-coaxial phacoemulsification) surgical instruments. In both cases, an asymmetric flow rate of irrigation was observed, although both irrigation ports had the same dimensions prior to insertion into the eye. In cases where the tip of the handpiece was placed farther away from the top of the cornea, i.e., closer to the crystalline lens, direct impingement of the irrigation flow onto the cornea surface was avoided and the flow turned back toward the handpiece along the surface of the corneal endothelium. Viscous shear stress on the corneal endothelium was computed based on the measured mean velocity distribution. The maximum shear stress for most cases exceeded 0.1 Pa, which is comparable to the shear stress that caused detachment of corneal endothelial cells as reported by Kaji et al. (Cornea 24:S55-S58, 2005). When direct impingement of the irrigation flow was avoided, the shear stress was reduced considerably.
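
    The viscous shear stress computation can be sketched as the product of dynamic viscosity and the near-wall velocity gradient, tau = mu * du/dy (the viscosity value and the linear near-wall fit here are assumptions for illustration, not the authors' exact procedure):

```python
import numpy as np

def wall_shear_stress(u_tangential, wall_distance, mu=7.2e-4):
    """Estimate viscous shear stress at a wall from the near-wall
    tangential velocity profile: tau = mu * du/dy at the wall.

    u_tangential: velocities (m/s) at distances wall_distance (m) from
    the wall. mu is an assumed dynamic viscosity (Pa*s), roughly that
    of water at body temperature.
    """
    # Slope of a linear fit to the (assumed linear) near-wall profile
    dudy = np.polyfit(wall_distance, u_tangential, deg=1)[0]
    return mu * dudy
```

    With an assumed near-wall gradient of 140 1/s this yields roughly 0.1 Pa, the order of the detachment threshold cited in the abstract.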

  12. Quasi-microscope concept for planetary missions.

    PubMed

    Huck, F O; Arvidson, R E; Burcher, E E; Giat, O; Wall, S D

    1977-09-01

    Viking lander cameras have returned stereo and multispectral views of the Martian surface with a resolution approaching 2 mm/lp in the near field. A two-orders-of-magnitude increase in resolution could be obtained for collected surface samples by augmenting these cameras with auxiliary optics that would neither impose special camera design requirements nor limit the cameras' field of view of the terrain. Quasi-microscope images would provide valuable data on the physical and chemical characteristics of planetary regoliths.

  13. Navigation of military and space unmanned ground vehicles in unstructured terrains

    NASA Technical Reports Server (NTRS)

    Lescoe, Paul; Lavery, David; Bedard, Roger

    1991-01-01

    Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements and the path plan was transmitted to the vehicle which autonomously executed the path. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of SAN software onto the vehicle, and a successful feasibility demonstration that represents a step forward towards the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.

  14. Solar Terrestrial Relations Observatory (STEREO)

    NASA Technical Reports Server (NTRS)

    Davila, Joseph M.; SaintCyr, O. C.

    2003-01-01

    The solar magnetic field is constantly generated beneath the surface of the Sun by the solar dynamo. To balance this flux generation, there is constant dissipation of magnetic flux at and above the solar surface. The largest phenomenon associated with this dissipation is the coronal mass ejection (CME). The Solar and Heliospheric Observatory (SOHO) has provided remarkable views of the corona and CMEs, and has served to highlight how these large interplanetary disturbances can have terrestrial consequences. STEREO is the next logical step in studying the physics of CME origin, propagation, and terrestrial effects. Two spacecraft with identical instrument complements will be launched on a single launch vehicle in November 2007. One spacecraft will drift ahead of the Earth and the second behind it, at a separation rate of 22 degrees per year. Observations from these two vantage points will, for the first time, allow observation of the three-dimensional structure of CMEs and the coronal structures where they originate. Each STEREO spacecraft carries a complement of 10 instruments, which include (for the first time) an extensive set of both remote sensing and in-situ instruments. The remote sensing suite is capable of imaging CMEs from the solar surface out to beyond Earth's orbit (1 AU), and the in-situ instruments are able to measure distribution functions for electrons, protons, and ions over a broad energy range, from the normal thermal solar wind plasma to the most energetic solar particles. It is anticipated that these studies will ultimately lead to an increased understanding of the CME process and provide unique observations of the flow of energy from the corona to the near-Earth environment. An international research program, the International Heliophysical Year (IHY), will provide a framework for interpreting STEREO data in the context of global processes in the Sun-Earth system.

  15. System for clinical photometric stereo endoscopy

    NASA Astrophysics Data System (ADS)

    Durr, Nicholas J.; González, Germán.; Lim, Daryl; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.; Parot, Vicente

    2014-02-01

    Photometric stereo endoscopy is a technique that captures information about the high-spatial-frequency topography of the field of view simultaneously with a conventional color image. Here we describe a system that will enable photometric stereo endoscopy to be clinically evaluated in the large intestine of human patients. The clinical photometric stereo endoscopy system consists of a commercial gastroscope, a commercial video processor, an image capturing and processing unit, custom synchronization electronics, white light LEDs, a set of four fibers with diffusing tips, and an alignment cap. The custom pieces that come into contact with the patient are composed of biocompatible materials that can be sterilized before use. The components can then be assembled in the endoscopy suite before use. The resulting endoscope has the same outer diameter as a conventional colonoscope (14 mm), plugs into a commercial video processor, captures topography and color images at 15 Hz, and displays the conventional color image to the gastroenterologist in real-time. We show that this system can capture a color and topographical video in a tubular colon phantom, demonstrating robustness to complex geometries and motion. The reported system is suitable for in vivo evaluation of photometric stereo endoscopy in the human large intestine.
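
    The photometric stereo principle underlying the endoscopic system can be sketched in its classical Lambertian form: with at least three images under known lighting directions, surface normals follow from a per-pixel least-squares solve (a textbook sketch, not the clinical system's calibrated pipeline):

```python
import numpy as np

def photometric_stereo_normals(images, light_dirs):
    """Classical Lambertian photometric stereo: recover unit surface
    normals and albedo from >= 3 images under known lighting directions.

    images: list of H x W intensity arrays; light_dirs: N x 3 unit vectors.
    Model per pixel: intensity_i = light_i . (albedo * normal).
    """
    L = np.asarray(light_dirs, dtype=float)           # N x 3
    I = np.stack([im.ravel() for im in images])       # N x P
    # Least-squares solve for G = albedo * normal at every pixel
    G, *_ = np.linalg.lstsq(L, I, rcond=None)         # 3 x P
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    h, w = images[0].shape
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

    The endoscopic variant captures the differently lit frames sequentially through the four diffusing-tip fibers and displays only the conventional color image in real time.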

  16. SAD5 Stereo Correlation Line-Striping in an FPGA

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Morfopoulos, Arin C.

    2011-01-01

    High precision SAD5 stereo computations can be performed in an FPGA (field-programmable gate array) at much higher speeds than possible in a conventional CPU (central processing unit), but this uses large amounts of FPGA resources that scale with image size. Of the two key resources in an FPGA, Slices and BRAM (block RAM), Slices scale linearly in the new algorithm with image size, and BRAM scales quadratically with image size. An approach was developed to trade latency for BRAM by sub-windowing the image vertically into overlapping strips and stitching the outputs together to create a single continuous disparity output. In stereo, the general rule of thumb is that the disparity search range must be 1/10 the image size. In the new algorithm, BRAM usage scales linearly with disparity search range and scales again linearly with line width. So a doubling of image size, say from 640 to 1,280, would in the previous design be an effective 4 of BRAM usage: 2 for line width, 2 again for disparity search range. The minimum strip size is twice the search range, and will produce an output strip width equal to the disparity search range. So assuming a disparity search range of 1/10 image width, 10 sequential runs of the minimum strip size would produce a full output image. This approach allowed the innovators to fit 1280 960 wide SAD5 stereo disparity in less than 80 BRAM, 52k Slices on a Virtex 5LX330T, 25% and 24% of resources, respectively. Using a 100-MHz clock, this build would perform stereo at 39 Hz. Of particular interest to JPL is that there is a flight qualified version of the Virtex 5: this could produce stereo results even for very large image sizes at 3 orders of magnitude faster than could be computed on the PowerPC 750 flight computer. The work covered in the report allows the stereo algorithm to run on much larger images than before, and using much less BRAM. 
This opens up choices for a smaller flight FPGA (which saves power and space), or for other algorithms in addition to SAD5 to be run on the same FPGA.
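
    The strip arithmetic above can be sketched numerically. This is an illustrative Python model of the trade-off, not the FPGA implementation; the function name and the 1/10 rule-of-thumb default are assumptions:

```python
def strip_plan(image_width, search_frac=0.10):
    """Plan vertical sub-windowing: returns (search_range, strip_width, n_runs)."""
    search_range = int(image_width * search_frac)  # rule of thumb: 1/10 of width
    strip_width = 2 * search_range                 # minimum input strip size
    output_width = search_range                    # usable disparity output per run
    n_runs = -(-image_width // output_width)       # ceil: runs to cover the image
    return search_range, strip_width, n_runs
```

For a 640-pixel-wide image this gives a 64-pixel search range, 128-pixel strips, and 10 sequential runs, matching the figures quoted above; doubling the width doubles both the line width and the search range, which is the 4× whole-image BRAM growth the strips avoid.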

  17. Bias Reduction and Filter Convergence for Long Range Stereo

    NASA Technical Reports Server (NTRS)

    Sibley, Gabe; Matthies, Larry; Sukhatme, Gaurav

    2005-01-01

    We are concerned here with improving long range stereo by filtering image sequences. Traditionally, measurement errors from stereo camera systems have been approximated as 3-D Gaussians, where the mean is derived by triangulation and the covariance by linearized error propagation. However, two problems arise when filtering such 3-D measurements. First, stereo triangulation suffers from a range-dependent statistical bias; when filtering, this leads to over-estimating the true range. Second, filtering 3-D measurements derived via linearized error propagation leads to apparent filter divergence; the estimator is biased to under-estimate range. To address the first issue, we examine the statistical behavior of stereo triangulation and show how to remove the bias by series expansion. The solution to the second problem is to filter with image coordinates as measurements instead of triangulated 3-D coordinates.
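
    The range-dependent bias can be reproduced with a minimal Monte Carlo sketch (the focal-length-times-baseline and noise values below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def triangulation_bias(z_true, f_b=150.0, sigma_d=0.3, n=200_000, seed=0):
    """Mean error of naive triangulation z = f*b/d under Gaussian disparity noise.

    f_b is focal length (pixels) times baseline (metres); values are illustrative.
    """
    rng = np.random.default_rng(seed)
    d_true = f_b / z_true                           # true disparity, pixels
    d_noisy = d_true + sigma_d * rng.normal(size=n)
    return (f_b / d_noisy).mean() - z_true          # > 0: range over-estimated
```

Because 1/d is convex, E[f·b/(d+n)] exceeds f·b/d (Jensen's inequality), so averaged triangulated ranges over-estimate the true range, and the bias grows rapidly as the true disparity shrinks with distance — the effect the paper removes by series expansion.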

  18. Space-time properties of wind-waves: a new look at directional wave distributions

    NASA Astrophysics Data System (ADS)

    Leckler, Fabien; Ardhuin, Fabrice; Benetazzo, Alvise; Fedele, Francesco; Bergamasco, Filippo; Dulov, Vladimir

    2014-05-01

    Few accurate observed directional wave spectra are available in the literature at spatial scales between 0.5 and 5.0 m. These intermediate wave scales, relevant for air-sea fluxes and remote sensing, are also expected to feed back on the dominant wave properties through wave generation. They can be investigated using well-known optical stereo methods that provide, from a pair of synchronized images, an instantaneous representation of wave elevations over a given sea surface. Thus, two stereo systems (the so-called Wave Acquisition Stereo Systems, WASS) were deployed on top of the deep-water platform at Katsiveli, in the Black Sea, in September 2011 and 2013. From image pairs taken by the pair of synchronized high-resolution cameras, ocean surfaces were reconstructed by stereo-triangulation. Here we analyze sea states corresponding to mean wind speeds of 11 to 14 m/s and young wave ages of 0.35 to 0.42, associated with significant wave heights of 0.3 to 0.55 m. As a result, four 12 Hz time evolutions of sea surface elevation maps over areas of about 10 × 10 m² were obtained for sequence durations between 15 and 30 minutes, and carefully validated with nearby capacitance wave gauges. The evolving free surface elevations were processed into frequency-wavenumber-direction 3D spectra. We found that wave energy chiefly follows the dispersion relation up to a frequency of 1.6 Hz and a wavenumber of 10 rad/m, corresponding to a wavelength of about 0.5 m. These spectra also capture the energy contribution from non-linear waves, which is quantified and compared to theory. A strong bi-modality of the linear spectra was also observed, with the two maxima separated by about 160 degrees. Furthermore, the spectra exhibit bimodality in the non-linear part as well.
    Integrated over positive frequencies to obtain wavenumber spectra unambiguous in direction, the bimodality of the spectra is partially hidden by the energy from second-order waves, in particular from harmonics of the peak waves. However, the obtained spreading functions and integrals question the isotropy of the spectrum at high frequencies, which is generally assumed when interpreting deep-water pressure measurements.
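
    The quoted spectral cutoffs are mutually consistent under the deep-water dispersion relation ω² = gk, as a quick check shows:

```python
import math

g = 9.81                      # gravitational acceleration, m/s^2
f = 1.6                       # highest frequency following the dispersion relation, Hz
omega = 2.0 * math.pi * f
k = omega ** 2 / g            # deep-water dispersion: omega^2 = g * k
wavelength = 2.0 * math.pi / k
```

This gives k ≈ 10.3 rad/m and a wavelength of ≈ 0.61 m, consistent with the ~10 rad/m and ~0.5 m figures quoted above.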

  19. Comparison of glacier loss on Qori Kalis, Peru and Mt. Kilimanjaro, Tanzania over the last decade using digital photogrammetry and stereo analysis

    NASA Astrophysics Data System (ADS)

    Lamantia, K.

    2017-12-01

    Rising global temperatures have created cause for concern, particularly among those who study the world's glaciers. Given their high sensitivity to climate change, tropical glaciers can be used not only as indicators of change but can also provide information necessary for more accurate interpretations of the mechanisms driving climate change. In the past, measurements of changes in glacier extent, such as for the Qori Kalis Glacier in Peru, have been based on terrestrial photography and hand-plotted photogrammetry. Recent technological advances now provide an opportunity to modify the way these glaciers are observed and measured. New developments have opened doors for digital photogrammetry software such as the Leica Photogrammetry Suite and Stereo Analyst from ERDAS, which offer stereoscopic tools with the ability to plot the ice extent in a three-dimensional image. At least two images from different perspectives are required to create the file for stereo analysis. The resulting three-dimensional digital content offers more flexibility in analysis, quantification, and visualization for better documentation of retreating glaciers. It is possible to produce both two- and three-dimensional surface area estimations for glaciers such as Qori Kalis and the Kilimanjaro ice fields. Beyond surface area measurement, the software can also create contours for the glacier surface and view and analyze properties such as slope and aspect. The surface area measurements taken with the digital method are compared with the hand-plotted measurements made in the past and are found to be comparable. A comparison of glacier loss over time, as well as a comparison between the two tropical locations, will be presented and should provide better insight into the drivers influencing current glacier loss.
    Making the transition from terrestrial to aerial, and now to satellite, imagery provides a simpler method for accessing and assessing changes in the glaciated regions of the world.

  20. Automatic Generation of High Quality DSM Based on IRS-P5 Cartosat-1 Stereo Data

    NASA Astrophysics Data System (ADS)

    d'Angelo, Pablo; Uttenthaler, Andreas; Carl, Sebastian; Barner, Frithjof; Reinartz, Peter

    2010-12-01

    IRS-P5 Cartosat-1 high resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on IRS-P5 Cartosat-1 imagery is presented, with an emphasis on automated processing and product quality. The proposed system processes IRS-P5 level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The described method uses an RPC correction based on DSM alignment instead of reference images with a lower lateral accuracy; this results in improved geolocation of the DSMs and orthoimages. Following RPC correction, highly detailed DSMs with 5 m grid spacing are derived using Semiglobal Matching. The proposed method is part of an operational Cartosat-1 processor for the generation of a high resolution DSM. Evaluation of 18 scenes against independent ground truth measurements indicates a mean lateral error (CE90) of 6.7 meters and a mean vertical accuracy (LE90) of 5.1 meters.
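
    CE90 and LE90 are 90th-percentile error statistics (circular/horizontal and linear/vertical, respectively). A minimal sketch of how such figures are computed from per-checkpoint errors (function names are illustrative, not from the paper):

```python
import numpy as np

def le90(dz):
    """Linear error (vertical) at 90% confidence: 90th percentile of |dz|."""
    return float(np.percentile(np.abs(dz), 90))

def ce90(dx, dy):
    """Circular error (horizontal) at 90% confidence: 90th percentile of radial error."""
    return float(np.percentile(np.hypot(dx, dy), 90))
```

For Gaussian errors of unit standard deviation, LE90 ≈ 1.645σ and CE90 ≈ 2.146σ, which is how such percentile figures relate to RMSE-style accuracy numbers.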

  1. SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotoku, J; Nakabayashi, S; Kumagai, S

    Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple view stereo technique, which is known from its use in 'photo tourism' on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction model was located using SIFT features, which are robust to rotation and shift variance, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error by use of the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin was reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128).
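
    The fundamental-matrix step mentioned above can be illustrated with a minimal normalized eight-point estimator; this is a generic textbook sketch (the RANSAC loop, bundle adjustment, and PMVS stages are omitted), not the authors' code:

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized eight-point estimate of F such that x2_h^T F x1_h = 0.

    x1, x2: (N, 2) arrays of corresponding pixel coordinates, N >= 8.
    """
    def normalize(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2) / pts.std()
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    p1, T1 = normalize(np.asarray(x1, float))
    p2, T2 = normalize(np.asarray(x2, float))
    # Each correspondence contributes one row of the linear system A f = 0.
    A = np.stack([np.kron(b, a) for a, b in zip(p1, p2)])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    U, S, Vt = np.linalg.svd(F)                 # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                           # undo the normalization
    return F / np.linalg.norm(F)
```

In the full pipeline, RANSAC repeatedly runs such an estimator on random minimal subsets of SIFT matches and keeps the F with the largest consensus set.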

  2. Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns

    NASA Astrophysics Data System (ADS)

    Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2012-07-01

    Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixels for 640×480 web cameras.
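
    The node-matching rule above, picking the right-image node closest to the epipolar line, reduces to a point-to-line distance; a minimal sketch (function names are illustrative):

```python
import numpy as np

def epipolar_distance(F, x_left, x_right):
    """Distance (pixels) from x_right to the epipolar line F @ x_left_h."""
    a, b, c = F @ np.array([x_left[0], x_left[1], 1.0])   # line a*x + b*y + c = 0
    return abs(a * x_right[0] + b * x_right[1] + c) / np.hypot(a, b)

def closest_node(F, x_left, candidates):
    """Pick the right-image node nearest the epipolar line of x_left."""
    return min(candidates, key=lambda x: epipolar_distance(F, x_left, x))
```

For a rectified pair, F reduces to the form [[0,0,0],[0,0,-1],[0,1,0]] and the distance becomes a simple row difference, which makes the matching rule easy to verify.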

  3. Tracking Topographic Changes from Multitemporal Stereo Images, Application to the Nili Patera Dune Field

    NASA Astrophysics Data System (ADS)

    Avouac, J.; Ayoub, F.; Bridges, N. T.; Leprince, S.; Lucas, A.

    2012-12-01

    The High Resolution Imaging Science Experiment (HiRISE) in orbit around Mars provides images with a nominal ground resolution of 25 cm. Its agility allows imaging the same scene at stereo view angles, allowing Digital Elevation Model (DEM) extraction through stereo-photogrammetry. This dataset thus offers an exceptional opportunity to measure the topography with high precision and track its evolution over time. In this presentation, we discuss how multi-temporal acquisitions of HiRISE images of the Nili Patera dune field allow tracking ripple migration and assessing sand fluxes and dune activity. We investigated in particular the use of multi-temporal DEMs to monitor the migration and morphologic evolution of the dune field. We present the methodology used and the various challenges that must be overcome to best exploit the multi-temporal images. Two DEMs were extracted from two stereo image pairs acquired 390 Earth days apart in 2010-2011 using the SOCET SET photogrammetry software, with a 1 m post-spacing and a vertical accuracy of a few tens of centimeters. Prior to comparison, the DEM registration, which was not precise enough out of SOCET SET, was improved by warping the second DEM onto the first using only the bedrock as a support for registration. The vertical registration residual was estimated at around 40 cm RMSE and is mostly due to CCD misalignment and uncorrected spacecraft attitudes. Changes of elevation over time are usually determined by differencing DEMs: provided that the DEMs are perfectly registered and sampled on the same grid, this approach readily quantifies erosion and deposition processes. As the dunes have moved horizontally, they are no longer physically aligned in the DEMs, and their morphologic evolution cannot be recovered easily by differencing them. In this particular setting the topographic evolution is best recovered from correlation of the DEMs.
    We measure that the fastest dunes have migrated by up to 1 meter per Earth year as a result of lee front deposition and stoss slope erosion. DEM differencing, after correction for horizontal migration, provides additional information on the evolution of dune morphology. Some dunes show vertical growth over the 390 days spanning the two DEMs, but we cannot exclude a bias due to the acquisition parameters. Indeed, the images of the two stereo pairs were acquired 22 and 5 days apart, respectively. During that time, the ripples lying on the dune surface probably migrated. As the DEM extraction is based on feature tracking and parallax, this difference in DEM elevation may be due only, or in part, to ripple migration between the acquisition times, which biased the measured dune elevations.
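
    A 1-D toy example (synthetic profile, assumed numbers) of why plain DEM differencing misreads horizontal dune migration as elevation change, while correlating first recovers the migration:

```python
import numpy as np

x = np.arange(200)
dune = 5.0 * np.exp(-((x - 80) / 12.0) ** 2)   # synthetic dune profile, metres
dem_t0 = dune
dem_t1 = np.roll(dune, 7)                      # dune migrated 7 cells downwind

# Naive differencing reads pure horizontal migration as erosion/deposition:
naive_change = np.abs(dem_t1 - dem_t0).max()

# Correlating the DEMs recovers the horizontal migration instead:
shifts = np.arange(-20, 21)
scores = [float(np.dot(np.roll(dem_t0, s), dem_t1)) for s in shifts]
migration = int(shifts[int(np.argmax(scores))])

# Differencing after shifting isolates true vertical change (zero here):
residual = np.abs(np.roll(dem_t0, migration) - dem_t1).max()
```

Here the naive difference exceeds a metre of apparent elevation change on the dune flanks even though the dune shape is unchanged, while the correlation-then-difference sequence correctly reports the 7-cell migration and zero vertical change.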

  4. The Effect of Shadow Area on Sgm Algorithm and Disparity Map Refinement from High Resolution Satellite Stereo Images

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.

    2017-09-01

    Semi Global Matching (SGM) is known as a high performance and reliable stereo matching algorithm in the photogrammetry community. However, there are some challenges in using this algorithm, especially for high resolution satellite stereo images over urban areas and images with shadow areas. In practice, the SGM algorithm computes highly noisy disparity values in the shadow areas around tall buildings due to mismatching in these low-entropy areas. In this paper, a new method is developed to refine the disparity map in shadow areas. The method is based on combining panchromatic and multispectral image data to detect shadow areas at the object level. In addition, RANSAC plane fitting and morphological filtering are employed to refine the disparity map. The results on a GeoEye-1 stereo pair captured over the city of Qom, Iran, show a significant increase in the rate of matched pixels compared to the standard SGM algorithm.
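
    The RANSAC plane-fitting step can be sketched as a generic disparity-plane fit d = a·x + b·y + c over reliable pixels, whose model then replaces noisy shadow disparities. A minimal sketch under those assumptions, not the authors' implementation:

```python
import numpy as np

def ransac_plane(xy, d, n_iters=200, tol=0.5, seed=0):
    """Fit d = a*x + b*y + c by RANSAC; returns (a, b, c) of the best model."""
    rng = np.random.default_rng(seed)
    A = np.column_stack([xy, np.ones(len(xy))])
    best, best_count = None, -1
    for _ in range(n_iters):
        idx = rng.choice(len(xy), 3, replace=False)
        try:
            coef = np.linalg.solve(A[idx], d[idx])   # exact plane through 3 samples
        except np.linalg.LinAlgError:
            continue                                 # degenerate (collinear) sample
        inliers = np.abs(A @ coef - d) < tol
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            # least-squares refit over this hypothesis' consensus set
            best = np.linalg.lstsq(A[inliers], d[inliers], rcond=None)[0]
    return best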

  5. In Brief: NASA's Phoenix spacecraft lands on Mars

    NASA Astrophysics Data System (ADS)

    Showstack, Randy; Kumar, Mohi

    2008-06-01

    After a 9.5-month, 679-million-kilometer flight from Florida, NASA's Phoenix spacecraft made a soft landing in Vastitas Borealis in Mars's northern polar region on 25 May. The lander, whose camera already has returned some spectacular images, is on a 3-month mission to examine the area, dig into the soil of this site, chosen for its likelihood of having frozen water near the surface, and analyze samples. In addition to a robotic arm and robotic arm camera, the lander's instruments include a surface stereo imager; thermal and evolved-gas analyzer; microscopy, electrochemistry, and conductivity analyzer; and a meteorological station that is tracking daily weather and seasonal changes.

  6. Martian Plain in Late Summer

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Surface Stereo Imager on NASA's Mars Phoenix Lander acquired this view of the textured plain near the lander at about 11 a.m. local Mars solar time during the mission's 124th Martian day, or sol (Sept. 29, 2008).

    The image was taken through an infrared filter. The brighter patches are dustier than darker areas of the surface.

    The last signal from the lander came on Nov. 2, 2008.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  7. Stereoscopic image production: live, CGI, and integration

    NASA Astrophysics Data System (ADS)

    Criado, Enrique

    2006-02-01

    This paper briefly describes part of the experience gathered in more than 10 years of stereoscopic movie production, some of the most common problems encountered, and the solutions, with varying success, that we applied to those problems. Our work is mainly focused on the entertainment market: theme parks, museums, and other culture-related locations and events. For our movies, we have been forced to develop our own devices to permit correct stereo shooting (stereoscopic rigs) and real-time stereo monitoring, and to solve problems found with conventional film editing, compositing, and postproduction software. Here, we discuss stereo lighting, monitoring, special effects, image integration (using dummies and more), stereo-camera parameters, and other general 3-D movie production aspects.

  8. 3D optic disc reconstruction via a global fundus stereo algorithm.

    PubMed

    Bansal, M; Sizintsev, M; Eledath, J; Sawhney, H; Pearson, D J; Stone, R A

    2013-01-01

    This paper presents a novel method to recover the 3D structure of the optic disc in the retina from two uncalibrated fundus images. Retinal images are commonly uncalibrated when acquired clinically, creating rectification challenges as well as significant radiometric and blur differences within the stereo pair. By exploiting structural peculiarities of the retina, we modified the Graph Cuts computational stereo method (one of the current state-of-the-art methods) to yield a high quality algorithm for fundus stereo reconstruction. Extensive qualitative and quantitative experimental evaluation (with OCT scans used as 3D ground truth) on our own and on publicly available datasets shows the superiority of the proposed method in comparison to alternatives.

  9. The MVACS Surface Stereo Imager on Mars Polar Lander

    NASA Astrophysics Data System (ADS)

    Smith, P. H.; Reynolds, R.; Weinberg, J.; Friedman, T.; Lemmon, M. T.; Tanner, R.; Reid, R. J.; Marcialis, R. L.; Bos, B. J.; Oquest, C.; Keller, H. U.; Markiewicz, W. J.; Kramm, R.; Gliem, F.; Rueffer, P.

    2001-08-01

    The Surface Stereo Imager (SSI), a stereoscopic, multispectral camera on the Mars Polar Lander, is described in terms of its capabilities for studying the Martian polar environment. The camera's two eyes, separated by 15.0 cm, provide the camera with range-finding ability. Each eye illuminates half of a single CCD detector with a field of view of 13.8° high by 14.3° wide and has 12 selectable filters between 440 and 1000 nm. The f/18 optics have a large depth of field, and no focusing mechanism is required; a mechanical shutter is avoided by using the frame transfer capability of the 528 × 512 CCD. The resolving power of the camera, 0.975 mrad/pixel, is the same as that of the Imager for Mars Pathfinder camera, of which it is nearly an exact copy. Specially designed targets are positioned on the Lander; they provide information on the magnetic properties of wind-blown dust, and radiometric standards for calibration. Several experiments beyond the requisite color panorama are described in detail: contour mapping of the local terrain, multispectral imaging of interesting features (possibly with ice or frost in shaded spots) to study local mineralogy, and atmospheric imaging to constrain the properties of the haze and clouds. Eight low-transmission filters are included for imaging the Sun directly at multiple wavelengths, giving SSI the ability to measure dust opacity and potentially the water vapor content. This paper is intended to document the functionality and calibration of the SSI as flown on the failed lander.
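
    The quoted baseline (15.0 cm) and resolving power (0.975 mrad/pixel) imply the following back-of-envelope ranging geometry under a parallel-eye small-angle approximation; this is an illustration, not the flight calibration:

```python
BASELINE = 0.15   # m, separation of the SSI's two eyes
IFOV = 0.975e-3   # rad per pixel, quoted resolving power

def disparity_px(z):
    """Approximate stereo disparity, in pixels, of a point at range z metres."""
    return BASELINE / (z * IFOV)

def range_resolution(z, dd_px=0.25):
    """Range error from a dd_px-pixel disparity error: dz ~ z^2 * IFOV * dd / b."""
    return z ** 2 * IFOV * dd_px / BASELINE
```

At a 2 m working distance this gives roughly 77 pixels of disparity, and a quarter-pixel matching error maps to well under a centimetre of range error, which is why a 15 cm baseline suffices for contour mapping of the near field.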

  10. Computer processing of Mars Odyssey THEMIS IR imaging, MGS MOLA altimetry and Mars Express stereo imaging to locate Airy-0, the Mars prime meridian reference

    NASA Astrophysics Data System (ADS)

    Duxbury, Thomas; Neukum, Gerhard; Smith, David E.; Christensen, Philip; Neumann, Gregory; Albee, Arden; Caplinger, Michael; Seregina, N. V.; Kirk, Randolph L.

    The small crater Airy-0 was selected from Mariner 9 images to be the reference for the Mars prime meridian. Initial analyses were made in 2000 to tie Viking Orbiter and Mars Orbiter Camera images of Airy-0 to the evolving Mars Orbiter Laser Altimeter global digital terrain model to improve the location accuracy of Airy-0. Based upon this tie and radiometric tracking of landers/rovers from Earth, new expressions for the Mars spin axis direction, spin rate, and prime meridian epoch value were produced to define the orientation of the Martian surface in inertial space over time. Now that the Mars Global Surveyor mission and the Mars Orbiter Laser Altimeter global digital terrain model are complete, a more exhaustive study has been performed to determine the location of Airy-0 relative to the global terrain grid. THEMIS IR image cubes of the Airy and Gale crater regions were tied to the global terrain grid using precision stereo photogrammetric image processing techniques. The Airy-0 location was determined to be within 50 meters of the currently defined IAU prime meridian, an offset at the limiting absolute accuracy of the global terrain grid. Additional outputs of this study were a controlled multi-band photomosaic of Airy, precision alignment and geometric models of the ten THEMIS IR bands, and a controlled multi-band photomosaic of Gale crater used to validate the Mars Science Laboratory operational map products supporting their successful landing on Mars.

  11. Airborne camera and spectrometer experiments and data evaluation

    NASA Astrophysics Data System (ADS)

    Lehmann, F. F.; Bucher, T.; Pless, S.; Wohlfeil, J.; Hirschmüller, H.

    2009-09-01

    New stereo push broom camera systems have been developed at the German Aerospace Centre (DLR). The new small multispectral systems (Multi Functional Camerahead - MFC, Advanced Multispectral Scanner - AMS) are lightweight and compact, and feature three or five RGB stereo lines of 8000, 10,000 or 14,000 pixels, which are used for stereo processing and the generation of Digital Surface Models (DSM) and near True Orthoimage Mosaics (TOM). Simultaneous acquisition with different types of MFC cameras for infrared and RGB data has been successfully tested. All spectral channels record the image data in full resolution, so pan-sharpening is not necessary. Analogous to that for the line scanner data, an automatic processing chain for UltraCamD and UltraCamX exists. The different systems have been flown for different types of applications; the main fields of interest include environmental applications (flooding simulations, monitoring tasks, classification) and 3D modelling (e.g. city mapping). From the DSM and TOM data, Digital Terrain Models (DTM) and 3D city models are derived. Textures for the facades are taken from oblique orthoimages, which are created from the same input data as the TOM and the DSM. The resulting models are characterised by high geometric accuracy and the perfect fit of image data and DSM. DLR is permanently developing and testing a wide range of sensor types and imaging platforms for terrestrial and space applications. The MFC sensors have been flown in combination with laser systems and imaging spectrometers, and special data fusion products have been developed. These products include hyperspectral orthoimages and 3D hyperspectral data.

  12. The Beagle 2 Stereo Camera System: Scientific Objectives and Design Characteristics

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Josset, J.; Paar, G.; Sims, M.

    2003-04-01

    The Stereo Camera System (SCS) will provide wide-angle (48 degree) multi-spectral stereo imaging of the Beagle 2 landing site in Isidis Planitia with an angular resolution of 0.75 milliradians. Based on the SpaceX Modular Micro-Imager, the SCS is composed of twin cameras (each with a 1024 by 1024 pixel frame transfer CCD) and twin filter wheel units (with a combined total of 24 filters). The primary mission objective is to construct a digital elevation model of the area within reach of the lander's robot arm. The SCS specifications and the following baseline studies are described: panoramic RGB colour imaging of the landing site, panoramic multi-spectral imaging at 12 distinct wavelengths to study the mineralogy of the landing site, and solar observations to measure water vapour absorption and the atmospheric dust optical density. Also envisaged are multi-spectral observations of Phobos and Deimos (observations of the moons relative to background stars will be used to determine the lander's location and orientation relative to the Martian surface), monitoring of the landing site to detect temporal changes, observation of the actions and effects of the other PAW experiments (including rock texture studies with a close-up lens), and collaborative observations with the Mars Express orbiter instrument teams. Due to be launched in May of this year, the total system mass is 360 g, the required volume envelope is 747 cm³, and the average power consumption is 1.8 W. A 10 Mbit/s RS422 bus connects each camera to the lander common electronics.

  13. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to sample the lunar surface and return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a rotating camera platform. Optical images of the sampling area can be obtained by PCAM in the form of two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images; the lunar terrain can then be reconstructed photogrammetrically. The installation parameters of PCAM with respect to the CE-5 lander are critical for calculating the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied for this work. Research contents such as the observation program and specific solution methods for the installation parameters are then introduced. The accuracy of the parametric solution is analyzed using observations obtained in the PCAM scientific validation experiment, which is used to test the authenticity of the PCAM detection process, ground data processing methods, product quality, and so on. The analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images to within 1 pixel. The measurement methods and parameter accuracy studied in this paper therefore meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  14. Solar Eclipse Video Captured by STEREO-B

    NASA Technical Reports Server (NTRS)

    2007-01-01

    No human has ever witnessed a solar eclipse quite like the one captured on this video. The NASA STEREO-B spacecraft, managed by the Goddard Space Flight Center, was about a million miles from Earth on February 25, 2007, when it photographed the Moon passing in front of the Sun. The resulting movie looks like it came from an alien solar system. The fantastically-colored star is our own Sun as STEREO sees it in four wavelengths of extreme ultraviolet light. The black disk is the Moon. When we observe a lunar transit from Earth, the Moon appears to be the same size as the Sun, a coincidence that produces intoxicatingly beautiful solar eclipses. The silhouette STEREO-B saw, on the other hand, covered only a fraction of the Sun. The Moon seems small because of STEREO-B's location. The spacecraft circles the Sun in an Earth-like orbit, but it lags behind Earth by one million miles. This means STEREO-B is 4.4 times farther from the Moon than we are, and so the Moon looks 4.4 times smaller. This version of the STEREO-B eclipse movie is a composite of data from the spacecraft's coronagraph and extreme ultraviolet imager. STEREO-B has a sister ship named STEREO-A. Both are on a mission to study the Sun. While STEREO-B lags behind Earth, STEREO-A orbits one million miles ahead ('B' for behind, 'A' for ahead). The gap is deliberate: it allows the two spacecraft to capture offset views of the Sun. Researchers can then combine the images to produce 3D stereo movies of solar storms. The two spacecraft were launched in October 2006 and reached their stations on either side of Earth in January 2007.

  15. Robotic Vehicle Communications Interoperability

    DTIC Science & Technology

    1988-08-01

    [The record consists of a garbled table excerpt listing teleoperated vehicle controls (engine starter, fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, hazard warning, etc.) and electro-optic sensors (video, radar, IR thermal imaging system, image intensifier, laser ranger, and a forward/stereo/rear video camera selector with associated sensor controls).]

  16. MAGNETIC FLUX TRANSPORT AND THE LONG-TERM EVOLUTION OF SOLAR ACTIVE REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ugarte-Urra, Ignacio; Upton, Lisa; Warren, Harry P.

    2015-12-20

    With multiple vantage points around the Sun, Solar Terrestrial Relations Observatory (STEREO) and Solar Dynamics Observatory imaging observations provide a unique opportunity to view the solar surface continuously. We use He ii 304 Å data from these observatories to isolate and track ten active regions and study their long-term evolution. We find that active regions typically follow a standard pattern of emergence over several days followed by a slower decay that is proportional in time to the peak intensity in the region. Since STEREO does not make direct observations of the magnetic field, we employ a flux-luminosity relationship to infer the total unsigned magnetic flux evolution. To investigate this magnetic flux decay over several rotations we use a surface flux transport model, the Advective Flux Transport model, that simulates convective flows using a time-varying velocity field, and find that the model provides realistic predictions when information about the active region's magnetic field strength and distribution at peak flux is available. Finally, we illustrate how 304 Å images can be used as a proxy for magnetic flux measurements when magnetic field data are not accessible.

  17. Photometric imaging in particle size measurement and surface visualization.

    PubMed

    Sandler, Niklas

    2011-09-30

    The aim of this paper is to give insight into photometric particle sizing approaches, which differ from the typical particle size measurement of dispersed particles. These approaches can often be advantageous, especially for samples that are moist or cohesive, when dispersion of the particles is difficult or sometimes impossible. The main focus of this paper is on the use of photometric stereo imaging. The technique allows the reconstruction of three-dimensional images of objects using multiple light sources for illumination. The use of photometric techniques is demonstrated in at-line measurement of granules and on-line measurement during granulation and dry milling. Surface visualization and roughness measurements are also briefly discussed. Copyright © 2010 Elsevier B.V. All rights reserved.
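
    Classic Lambertian photometric stereo, the technique underlying the imaging described here, recovers a surface normal from intensities observed under known light directions. A minimal least-squares sketch of that textbook formulation (not the paper's instrument software):

```python
import numpy as np

def photometric_stereo(L, I):
    """Recover a unit normal and albedo from Lambertian shading I = albedo * (L @ n).

    L: (m, 3) unit lighting directions; I: (m,) intensities; m >= 3 non-coplanar
    lights are required for a unique solution.
    """
    g = np.linalg.lstsq(L, I, rcond=None)[0]   # g = albedo * n
    albedo = float(np.linalg.norm(g))
    return g / albedo, albedo
```

Solving this per pixel yields a normal map, which can then be integrated into the height and roughness maps the paper uses for surface visualization.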

  18. Photogrammetry of a Hypersonic Inflatable Aerodynamic Decelerator

    NASA Technical Reports Server (NTRS)

    Kushner, Laura Kathryn; Littell, Justin D.; Cassell, Alan M.

    2013-01-01

    In 2012, two large-scale models of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD) were tested in the National Full-Scale Aerodynamics Complex (NFAC) at NASA Ames Research Center. One of the objectives of this test was to measure model deflections under aerodynamic loading that approximated expected flight conditions. The measurements were acquired using stereo photogrammetry. Four pairs of stereo cameras were mounted inside the NFAC test section, each imaging a particular section of the HIAD. The views were then stitched together post-test to create a surface deformation profile. The data from the photogrammetry system will largely be used for comparisons to and refinement of Fluid Structure Interaction models. This paper describes how a commercial photogrammetry system was adapted to make the measurements and presents some preliminary results.

  19. Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)

    1999-01-01

    A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in a world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo matched to obtain three-dimensional locations of the particles as a function of time so that velocities can be measured therefrom. The stereo imaging velocimetry technique of the present invention provides a full-field, quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.

  20. Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.

    2015-12-01

    Semi-global matching (SGM) is a well-known stereo matching algorithm in the photogrammetry and computer vision communities, and epipolar images are assumed as its input. The epipolar geometry of linear array scanners, however, is not a straight line as in the case of frame cameras. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model, or ground control points. In this paper we propose a new epipolar resampling method which works without this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering the stereo pair, and the original images are divided into small tiles. By omitting the need for extra information, the speed of the matching algorithm is increased and the temporary memory requirement is reduced. Our experiments on a GeoEye-1 stereo pair captured over the city of Qom, Iran, demonstrate that the epipolar images are generated with sub-pixel accuracy.
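    Once the stereo pair has been resampled into (quasi-)epipolar geometry, any row-wise matcher can be applied. As a hedged illustration of that precondition (not the authors' code), the sketch below runs a brute-force SAD block matcher on a synthetic epipolar pair with a known 3-pixel disparity; SGM adds smoothness cost aggregation on top of exactly this kind of per-pixel matching cost:

    ```python
    import numpy as np

    def sad_disparity(left, right, max_disp=8, win=3):
        """Brute-force SAD block matching along image rows.

        Assumes the pair is already epipolar-resampled, so corresponding
        points lie on the same row -- the same precondition SGM relies on.
        """
        h, w = left.shape
        pad = win // 2
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(pad, h - pad):
            for x in range(pad + max_disp, w - pad):
                patch = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
                costs = [np.abs(patch - right[y - pad:y + pad + 1,
                                              x - d - pad:x - d + pad + 1]).sum()
                         for d in range(max_disp)]
                disp[y, x] = int(np.argmin(costs))  # per-pixel winner-takes-all
        return disp

    # Synthetic epipolar pair: the right view is the left shifted by 3 pixels.
    rng = np.random.default_rng(0)
    left = rng.random((20, 40))
    right = np.roll(left, -3, axis=1)
    disp = sad_disparity(left, right)
    ```

    Away from the image borders the recovered disparity equals the imposed 3-pixel shift.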

  1. Bifurcations in two-image photometric stereo for orthogonal illuminations

    NASA Astrophysics Data System (ADS)

    Kozera, R.; Prokopenya, A.; Noakes, L.; Śluzek, A.

    2017-07-01

    This paper discusses the ambiguous shape recovery in two-image photometric stereo for a Lambertian surface. The current uniqueness analysis refers to linearly independent light-source directions p = (0, 0, -1) and q arbitrary. For this case, the necessary and sufficient condition determining an ambiguous reconstruction is governed by a second-order linear partial differential equation with constant coefficients. In contrast, a general position of both non-colinear illumination directions p and q leads to a highly non-linear PDE which raises a number of technical difficulties. As recently shown, the latter can also be handled for another family of orthogonal illuminations parallel to the OXZ-plane. For the special case of p = (0, 0, -1), a potential ambiguity stems also from the possible bifurcations of sub-local solutions glued together along a curve defined by an algebraic equation in terms of the data. This paper discusses the occurrence of similar bifurcations for such configurations of orthogonal light-source directions. The discussion to follow is supplemented with examples based on a continuous reflectance map model and generated synthetic images.
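    For contrast with the two-image ambiguity analysed above, the classical three-light Lambertian case is uniquely solvable by linear least squares. A minimal numpy sketch (the second and third light directions and the surface normal are invented for illustration; only p = (0, 0, -1) comes from the abstract):

    ```python
    import numpy as np

    # Three linearly independent light directions; the first is p = (0, 0, -1)
    # as in the paper, the other two are illustrative.
    L = np.array([[0.0, 0.0, -1.0],
                  [0.7, 0.0, -0.714],
                  [0.0, 0.7, -0.714]])
    n_true = np.array([0.2, -0.1, -0.975])    # illustrative surface normal

    I = L @ n_true                            # noiseless Lambertian intensities
    g = np.linalg.lstsq(L, I, rcond=None)[0]  # recover the albedo-scaled normal
    albedo = np.linalg.norm(g)
    normal = g / albedo
    ```

    With only two of these equations the system is underdetermined, which is precisely the source of the ambiguity the paper studies.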

  2. Changes in quantitative 3D shape features of the optic nerve head associated with age

    NASA Astrophysics Data System (ADS)

    Christopher, Mark; Tang, Li; Fingert, John H.; Scheetz, Todd E.; Abramoff, Michael D.

    2013-02-01

    Optic nerve head (ONH) structure is an important biological feature of the eye used by clinicians to diagnose and monitor progression of diseases such as glaucoma. ONH structure is commonly examined using stereo fundus imaging or optical coherence tomography. Stereo fundus imaging provides stereo views of the ONH that retain 3D information useful for characterizing structure. In order to quantify 3D ONH structure, we applied a stereo correspondence algorithm to a set of stereo fundus images. Using these quantitative 3D ONH structure measurements, eigen structures were derived using principal component analysis from stereo images of 565 subjects from the Ocular Hypertension Treatment Study (OHTS). To evaluate the usefulness of the eigen structures, we explored associations with the demographic variables age, gender, and race. Using regression analysis, the eigen structures were found to have significant (p < 0.05) associations with both age and race after Bonferroni correction. In addition, classifiers were constructed to predict the demographic variables based solely on the eigen structures. These classifiers achieved an area under receiver operating characteristic curve of 0.62 in predicting a binary age variable, 0.52 in predicting gender, and 0.67 in predicting race. The use of objective, quantitative features or eigen structures can reveal hidden relationships between ONH structure and demographics. The use of these features could similarly allow specific aspects of ONH structure to be isolated and associated with the diagnosis of glaucoma, disease progression and outcomes, and genetic factors.
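    The eigen-structure derivation described above is, in essence, principal component analysis over vectorized depth maps. A hypothetical numpy sketch on toy data (the OHTS pipeline is certainly more involved):

    ```python
    import numpy as np

    def eigen_structures(depth_maps, n_components=3):
        """PCA over vectorized depth maps: returns the principal modes
        ("eigen structures") and per-subject scores used as features."""
        X = depth_maps.reshape(len(depth_maps), -1).astype(float)
        X -= X.mean(axis=0)                      # center across subjects
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        components = Vt[:n_components]           # eigen structures
        scores = X @ components.T                # inputs to regression/classifiers
        return components, scores

    rng = np.random.default_rng(4)
    maps = rng.random((10, 8, 8))                # ten toy "ONH depth maps"
    comps, scores = eigen_structures(maps)
    ```

    The per-subject scores are the low-dimensional features that would then be regressed against age, gender, and race.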

  3. Matching methods evaluation framework for stereoscopic breast x-ray images.

    PubMed

    Rousson, Johanna; Naudin, Mathieu; Marchessoux, Cédric

    2016-01-01

    Three-dimensional (3-D) imaging has been intensively studied in the past few decades. Depth information is an important added value of 3-D systems over two-dimensional systems. Special focus was devoted to the development of stereo matching methods for the generation of disparity maps (i.e., depth information within a 3-D scene). Dedicated frameworks were designed to evaluate and rank the performance of different stereo matching methods, but never considering x-ray medical images. Yet, 3-D x-ray acquisition systems and 3-D medical displays have already been introduced into the diagnostic market. To access the depth information within x-ray stereoscopic images, computing accurate disparity maps is essential. We aimed at developing a framework dedicated to x-ray stereoscopic breast images used to evaluate and rank several stereo matching methods. A multiresolution pyramid optimization approach was integrated into the framework to increase the accuracy and the efficiency of the stereo matching techniques. Finally, a metric was designed to score the results of the stereo matching compared with the ground truth. Eight methods were evaluated and four of them [locally scaled sum of absolute differences (LSAD), zero mean sum of absolute differences, zero mean sum of squared differences, and locally scaled mean sum of squared differences] appeared to perform equally well, with an average error score of 0.04 (0 being a perfect match). LSAD was selected for generating the disparity maps.
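    The locally scaled variant differs from plain SAD only in rescaling the candidate patch by the ratio of patch means before differencing, which makes the cost insensitive to local gain (brightness) differences between the two views. A small sketch of the two costs (illustrative, not the framework's code):

    ```python
    import numpy as np

    def sad(a, b):
        # Plain sum of absolute differences between two patches.
        return np.abs(a - b).sum()

    def lsad(a, b):
        # Locally scaled SAD: rescale patch b by the ratio of patch means
        # before differencing, compensating local gain changes.
        s = a.mean() / b.mean()
        return np.abs(a - s * b).sum()

    rng = np.random.default_rng(1)
    patch = rng.random((5, 5)) + 0.5
    brighter = 1.3 * patch        # same content seen with a different local gain
    cost_sad = sad(patch, brighter)    # large: fooled by the gain change
    cost_lsad = lsad(patch, brighter)  # ~0: gain change compensated
    ```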

  4. Parametric dense stereovision implementation on a system-on chip (SoC).

    PubMed

    Gardel, Alfredo; Montejo, Pablo; García, Jorge; Bravo, Ignacio; Lázaro, José L

    2012-01-01

    This paper proposes a novel hardware implementation of a dense recovery of stereovision 3D measurements. Traditionally, 3D stereo systems have imposed a maximum number of stereo correspondences, introducing a large restriction on artificial vision algorithms. The proposed system-on-chip (SoC) provides great performance and efficiency, with a scalable architecture available for many different situations, addressing real-time processing of the stereo image flow. Using double buffering techniques properly combined with pipelined processing, the use of reconfigurable hardware achieves a parametrisable SoC which gives the designer the opportunity to decide its right dimension and features. The proposed architecture does not need any external memory because the processing is done as the image flow arrives. Our SoC provides 3D data directly without the storage of whole stereo images. Our goal is to obtain a high processing speed while maintaining the accuracy of the 3D data using minimum resources. Configurable parameters may be controlled by later/parallel stages of the vision algorithm executed on an embedded processor. Considering a hardware FPGA clock of 100 MHz, image flows of up to 50 frames per second (fps) of dense stereo maps of more than 30,000 depth points can be obtained from 2 Mpix images, with a minimum initial latency. The implementation of computer vision algorithms on reconfigurable hardware, especially low-level processing, opens up the prospect of its use in autonomous systems, where it can act as a coprocessor to reconstruct 3D images with high-density information in real time.

  5. Research on three-dimensional reconstruction method based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision with broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction and stereo matching. In the camera calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard calibration method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After the feature points have been matched, the correspondence between matching points and 3D object points, i.e. the 3D information, can be established using the calibrated camera parameters.
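    The final step, turning calibrated camera parameters and matched image points into 3D object points, is standard linear (DLT) triangulation. A self-contained numpy sketch with an invented toy rig (the intrinsics, baseline, and point are illustrative, not the paper's values):

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point from two calibrated views.
        P1, P2 are 3x4 projection matrices; x1, x2 are pixel coordinates."""
        A = np.vstack([x1[0] * P1[2] - P1[0],
                       x1[1] * P1[2] - P1[1],
                       x2[0] * P2[2] - P2[0],
                       x2[1] * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                     # null vector = homogeneous 3D point
        return X[:3] / X[3]

    # Toy rig: identical pinhole cameras separated by a 0.1 m baseline.
    K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.], [0.]])])

    X_true = np.array([0.05, -0.02, 2.0])        # illustrative 3D point
    x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
    x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
    X_hat = triangulate(P1, P2, x1, x2)          # recovers X_true
    ```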

  6. Automated dynamic feature tracking of RSLs on the Martian surface through HiRISE super-resolution restoration and 3D reconstruction techniques

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.

    2017-09-01

    In this paper, we demonstrate novel super-resolution restoration (SRR) and 3D reconstruction tools developed within EU FP7 projects and their application to advanced dynamic feature tracking through HiRISE repeat stereo. We show an example for one of the RSL sites in the Palikir Crater, using 8 repeat-pass 25 cm HiRISE images from which a 5 cm RSL-free SRR image is generated using GPT-SRR. Together with repeat 3D modelling of the same area, this allows us to overlay tracked dynamic features onto the reconstructed "original" surface, providing a much more comprehensive interpretation of the surface formation processes in 3D.

  7. Shuttle imaging radar views the Earth from Challenger: The SIR-B experiment

    NASA Technical Reports Server (NTRS)

    Ford, J. P.; Cimino, J. B.; Holt, B.; Ruzek, M. R.

    1986-01-01

    In October 1984, SIR-B obtained digital image data of about 6.5 million km2 of the Earth's surface. The coverage is mostly of selected experimental test sites located between latitudes 60 deg north and 60 deg south. Programmed adjustments made to the look angle of the steerable radar antenna and to the flight attitude of the shuttle during the mission permitted collection of multiple-incidence-angle coverage or extended mapping coverage as required for the experiments. The SIR-B images included here are representative of the coverage obtained for scientific studies in geology, cartography, hydrology, vegetation cover, and oceanography. The relations between radar backscatter and incidence angle for discriminating various types of surfaces, and the use of multiple-incidence-angle SIR-B images for stereo measurement and viewing, are illustrated with examples. Interpretation of the images is facilitated by corresponding images or photographs obtained by different sensors or by sketch maps or diagrams.

  8. Role of stereoscopic imaging in the astronomical study of nearby stars and planetary systems

    NASA Astrophysics Data System (ADS)

    Mark, David S.; Waste, Corby

    1997-05-01

    The development of stereoscopic imaging as a 3D spatial mapping tool for planetary science is now beginning to find greater usefulness in the study of stellar atmospheres and planetary systems in general. For the first time, telescopes and accompanying spectrometers have demonstrated the capacity to depict the gyrating motion of nearby stars so precisely as to derive the existence of closely orbiting Jovian-type planets, which are gravitationally influencing the motion of the parent star. Also for the first time, remote spaceborne telescopes, unhindered by atmospheric effects, are recording and tracking the rotational characteristics of our nearby star, the sun, so accurately as to reveal and identify in great detail the heightened turbulence of the sun's corona. In order to perform new forms of stereo imaging and 3D reconstruction with such large-scale objects as stars and planets, within solar systems, a set of geometrical parameters must be observed, and are illustrated here. The behavior of nearby stars can be studied over time using an astrometric approach, making use of the Earth's orbital path as a semi-yearly stereo base for the viewing telescope. As is often the case in this method, the resulting stereo angle becomes too narrow to afford a beneficial stereo view, given the star's distance and the general level of detected noise in the signal. With the advent, though, of new Earth-based and spaceborne interferometers, operating within various wavelengths including IR, the capability of detecting and assembling the full 3-dimensional axes of motion of nearby gyrating stars can be achieved. In addition, the coupling of large interferometers with combined data sets can provide large stereo bases and low signal noise to produce converging 3-dimensional stereo views of nearby planetary systems.
Several groups of new astronomical stereo imaging data sets are presented, including 3D views of the sun taken by the Solar and Heliospheric Observatory, coincident stereo views of the planet Jupiter during impact of comet Shoemaker-Levy 9, taken by the Galileo spacecraft and the Hubble Space Telescope, as well as views of nearby stars. Spatial ambiguities arising in singular 2-dimensional viewpoints are shown to be resolvable in twin perspective, 3-dimensional stereo views. Stereo imaging of this nature, therefore, occupies a complementary role in astronomical observing, provided the proper fields of view correspond with the path of the orbital geometry of the observing telescope.

  9. A high resolution and high speed 3D imaging system and its application on ATR

    NASA Astrophysics Data System (ADS)

    Lu, Thomas T.; Chao, Tien-Hsin

    2006-04-01

    The paper presents an advanced 3D imaging system based on a combination of stereo vision and light projection methods. A single digital camera is used to take only one shot of the object and reconstruct its 3D model. The stereo vision is achieved by employing a prism and mirror setup to split the views and combine them side by side in the camera. The advantages of this setup are its simple system architecture, easy synchronization, fast 3D imaging speed and high accuracy. The 3D imaging algorithms and potential applications are discussed. For ATR applications, it is critically important to extract maximum information about potential targets and to separate the targets from the background and clutter noise. The added dimension of a 3D model provides additional features: the surface profile and range information of the target. It is capable of removing the false shadow from camouflage and revealing the 3D profile of the object. It also provides arbitrary viewing angles and distances for training the filter bank for invariant ATR. The system architecture can be scaled to accommodate large objects and to perform area 3D modeling onboard a UAV.
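    Because the prism-and-mirror adapter places the two views side by side in a single frame, "stereo separation" reduces to an image split before ordinary stereo processing. A trivial sketch of that step (the half-and-half frame layout is an assumption for illustration, not taken from the paper):

    ```python
    import numpy as np

    def split_side_by_side(frame):
        """Split a combined prism/mirror frame into left and right views."""
        h, w = frame.shape[:2]
        half = w // 2
        return frame[:, :half], frame[:, half:2 * half]

    frame = np.arange(24).reshape(4, 6)   # toy 4x6 combined frame
    left, right = split_side_by_side(frame)
    ```

    After the split, the two half-frames can be fed to any standard stereo pipeline; synchronization comes for free because both views share a single exposure.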

  10. The PanCam instrument on the 2018 Exomars rover: Scientific objectives

    NASA Astrophysics Data System (ADS)

    Jaumann, Ralf; Coates, Andrew; Hauber, Ernst; Hoffmann, Harald; Schmitz, Nicole; Le Deit, Laetitia; Tirsch, Daniela; Paar, Gerhard; Griffiths, Andrew

    2010-05-01

    The ExoMars Panoramic Camera System is an imaging suite of three camera heads to be mounted on the ExoMars rover's mast, with the boresight 1.8 m above ground. As of the ExoMars Pasteur Payload Design Review (PDR) in 2009, PanCam consists of two identical wide angle cameras (WAC) with fixed focal length lenses, and a high resolution camera (HRC) with an automatic focus mechanism, placed adjacent to the right WAC. The WAC stereo pair provides binocular vision for stereoscopic studies as well as 12 filter positions (per camera) for stereoscopic colour imaging and scientific multispectral studies. The stereo baseline of the pair is 500 mm. The two WACs have 22 mm focal length, f/10 lenses that illuminate detectors with 1024 × 1024 pixels. The WAC lenses are fixed, with an optimal focus set to 4 m and a focus range from 1.2 m (corresponding to the nearest view of the calibration target on the rover deck) to infinity. The HRC is able to focus between 0.9 m (the distance to a drill core on the rover's sample tray) and infinity. The instantaneous fields of view of the WAC and HRC are 580 μrad/pixel and 83 μrad/pixel, respectively. The corresponding resolutions (in mm/pixel) are 1.2 (WAC) and 0.17 (HRC) at a distance of 2 m, and 58 (WAC) and 8.3 (HRC) at a distance of 100 m. The WAC and HRC will be geometrically co-aligned. The main scientific goal of PanCam is the geologic characterisation of the environment in which the rover is operating, providing the context for investigations carried out by the other instruments of the Pasteur payload. PanCam data will serve as a bridge between orbital data (high-resolution images from HRSC, CTX, and HiRISE, and spectrometer data from OMEGA and CRISM) and the data acquired in situ on the Martian surface. 
The position of the HRC on top of the rover's mast enables the detailed panoramic inspection of surface features over the full horizontal range of 360°, even at large distances, an important prerequisite to identify the scientifically most promising targets and to plan the rover's traverse. Key to the success of PanCam is the provision of data that allow the determination of rock lithology, either of boulders on the surface or of outcrops. This task requires high spatial resolution as well as colour capabilities. The stereo images provide complementary information on the three-dimensional properties (i.e. the shape) of rocks. As an example, the degree of rounding of rocks as a result of fluvial transport can reveal the erosional history of the investigated particles, with possible implications for the chronology and intensity of rock-water interaction. The identification of the lithology and geological history of rocks will strongly benefit from the co-aligned views of the WAC (colour, stereo) and HRC (high spatial resolution), which will ensure that 3D and multispectral information is available together with fine-scale textural information for each scene. Stereo information is also of utmost importance for the determination of outcrop geometry (e.g., strike and dip of layered sequences), which helps to understand the emplacement history of sedimentary and volcanic rocks (e.g., cross-bedding, unconformities, etc.). PanCam will further reveal physical soil properties such as cohesion by imaging sites where the soil is disturbed by the rover's wheels and the drill. Another essential task of PanCam is the imaging of samples (from the drill) before ingestion into the rover for further analysis by other instruments. PanCam can be tilted vertically and will also study the atmosphere (e.g., dust loading, opacity, clouds) and aeolian processes related to surface-atmosphere interactions, such as dust devils.
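    The quoted scale figures follow from the small-angle relation resolution ≈ distance × IFOV; a quick numeric check of the numbers given in the abstract:

    ```python
    def ground_resolution_mm(distance_m, ifov_urad):
        # Small-angle approximation: ground sample = distance * IFOV.
        return distance_m * ifov_urad * 1e-6 * 1e3   # metres * rad, in mm

    wac_2m = ground_resolution_mm(2.0, 580)     # 1.16 -> ~1.2 mm/pixel
    hrc_2m = ground_resolution_mm(2.0, 83)      # ~0.17 mm/pixel
    wac_100 = ground_resolution_mm(100.0, 580)  # 58 mm/pixel
    hrc_100 = ground_resolution_mm(100.0, 83)   # 8.3 mm/pixel
    ```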

  11. Statistical characterization of short wind waves from stereo images of the sea surface

    NASA Astrophysics Data System (ADS)

    Mironov, Alexey; Yurovskaya, Maria; Dulov, Vladimir; Hauser, Danièle; Guérin, Charles-Antoine

    2013-04-01

    We propose a methodology to extract short-scale statistical characteristics of the sea surface topography by means of stereo image reconstruction. The possibilities and limitations of the technique are discussed and tested on a data set acquired from an oceanographic platform at the Black Sea. The analysis shows that reconstruction of the topography based on the stereo method is an efficient way to derive non-trivial statistical properties of short- and intermediate-scale surface waves (say, from 1 centimeter to 1 meter). Most technical issues pertaining to this type of dataset (limited range of scales, lacunarity of data or irregular sampling) can be partially overcome by appropriate processing of the available points. The proposed technique also allows one to avoid linear interpolation, which dramatically corrupts properties of retrieved surfaces. The processing technique imposes that the field of elevation be polynomially detrended, which has the effect of filtering out the large scales. Hence the statistical analysis can only address the small-scale components of the sea surface. The precise cut-off wavelength, which is approximately half the patch size, can be obtained by applying a high-pass frequency filter to the reference gauge time records. The results obtained for the one- and two-point statistics of small-scale elevations are shown to be consistent, at least in order of magnitude, with the corresponding gauge measurements as well as other experimental measurements available in the literature. The calculation of the structure functions provides a powerful tool to investigate spectral and statistical properties of the field of elevations. Experimental parametrization of the third-order structure function, the so-called skewness function, is one of the most important and original outcomes of this study. This function is of primary importance in analytical scattering models from the sea surface and was up to now unavailable in field conditions. 
Due to the lack of precise reference measurements for the small-scale wave field, we could not quantify exactly the accuracy of the retrieval technique. However, it appeared clearly that the obtained accuracy is good enough for the estimation of second-order statistical quantities (such as the correlation function), acceptable for third-order quantities (such as the skewness function) and insufficient for fourth-order quantities (such as kurtosis). Therefore, the stereo technique in the present stage should not be thought of as a self-contained universal tool to characterize the surface statistics. Instead, it should be used in conjunction with other well calibrated but sparse reference measurements (such as wave gauges) for cross-validation and calibration. It then completes the statistical analysis inasmuch as it provides a snapshot of the three-dimensional field and allows for the evaluation of higher-order spatial statistics.
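    The structure functions discussed above are straightforward to estimate from a detrended elevation record. A numpy sketch on a toy one-dimensional "wave" signal (the field analysis is of course two-dimensional and far more careful):

    ```python
    import numpy as np

    def structure_function(eta, order, max_lag):
        """Empirical structure function S_n(r) = <(eta(x+r) - eta(x))**n>
        of a detrended elevation record; order 3 gives the skewness function."""
        return np.array([np.mean((eta[r:] - eta[:-r]) ** order)
                         for r in range(1, max_lag + 1)])

    rng = np.random.default_rng(2)
    x = np.linspace(0, 8 * np.pi, 4096)
    eta = np.sin(x) + 0.1 * rng.standard_normal(x.size)  # toy "wave" record
    S2 = structure_function(eta, 2, 50)   # second order: related to correlation
    S3 = structure_function(eta, 3, 50)   # third order: the skewness function
    ```

    For this symmetric toy record the third-order function stays near zero, whereas an asymmetric (skewed) wave field would drive it away from zero, which is what makes it a useful diagnostic.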

  12. Stereo Image Ranging For An Autonomous Robot Vision System

    NASA Astrophysics Data System (ADS)

    Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven

    1985-12-01

    The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and then the location of each point in three-dimensional space can be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
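    Once features are registered between the two views, ranging follows the standard parallel-camera relation Z = fB/d for focal length f (in pixels), baseline B, and disparity d. A minimal sketch with illustrative numbers (none taken from the paper):

    ```python
    def range_from_disparity(f_px, baseline_m, disparity_px):
        # Parallel-camera stereo ranging: Z = f * B / d.
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite range")
        return f_px * baseline_m / disparity_px

    # Illustrative numbers: f = 700 px, B = 0.3 m, d = 21 px  ->  Z = 10 m.
    z = range_from_disparity(700.0, 0.3, 21.0)
    ```

    The inverse dependence on disparity is why range accuracy degrades quadratically with distance, a practical concern for any autonomous vehicle using stereo ranging.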

  13. Phoenix Carries Soil to Wet Chemistry Lab

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image taken by the Surface Stereo Imager on NASA's Phoenix Mars Lander shows the lander's Robotic Arm scoop positioned over the Wet Chemistry Lab delivery funnel on Sol 29, the 29th Martian day after landing, or June 24, 2008. The soil will be delivered to the instrument on Sol 30.

    This image has been enhanced to brighten the scene.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  14. Real-time registration of video with ultrasound using stereo disparity

    NASA Astrophysics Data System (ADS)

    Wang, Jihang; Horvath, Samantha; Stetten, George; Siegel, Mel; Galeotti, John

    2012-02-01

    Medical ultrasound typically deals with the interior of the patient, with the exterior left to the original medical imaging modality, direct human vision. For the human operator scanning the patient, the view of the external anatomy is essential for correctly locating the ultrasound probe on the body and making sense of the resulting ultrasound images in their proper anatomical context. The operator, after all, is not expected to perform the scan with his eyes shut. Over the past decade, our laboratory has developed a method of fusing these two information streams in the mind of the operator, the Sonic Flashlight, which uses a half silvered mirror and miniature display mounted on an ultrasound probe to produce a virtual image within the patient at its proper location. We are now interested in developing a similar data fusion approach within the ultrasound machine itself, by, in effect, giving vision to the transducer. Our embodiment of this concept consists of an ultrasound probe with two small video cameras mounted on it, with software capable of locating the surface of an ultrasound phantom using stereo disparity between the two video images. We report its first successful operation, demonstrating a 3D rendering of the phantom's surface with the ultrasound data superimposed at its correct relative location. Eventually, automated analysis of these registered data sets may permit the scanner and its associated computational apparatus to interpret the ultrasound data within its anatomical context, much as the human operator does today.

  15. Three-dimensional quantitative flow diagnostics

    NASA Technical Reports Server (NTRS)

    Miles, Richard B.; Nosenchuck, Daniel M.

    1989-01-01

    The principles, capabilities, and practical implementation of advanced measurement techniques for the quantitative characterization of three-dimensional flows are reviewed. Consideration is given to particle, Rayleigh, and Raman scattering; fluorescence; flow marking by H2 bubbles, photochromism, photodissociation, and vibrationally excited molecules; light-sheet volume imaging; and stereo imaging. Also discussed are stereo schlieren methods, holographic particle imaging, optical tomography, acoustic and magnetic-resonance imaging, and the display of space-filling data. Extensive diagrams, graphs, photographs, sample images, and tables of numerical data are provided.

  16. A fuzzy structural matching scheme for space robotics vision

    NASA Technical Reports Server (NTRS)

    Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka

    1994-01-01

    In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of regions of images and effectively reduces the computational burden in the following low-level matching process. Three-dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.

  17. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) for real-time three-dimensional (3D) processing is proposed. The proposed algorithm reduces the processing time of disparity estimation by adaptively selecting the disparity search range, and also increases the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair, the bandwidth of the stereo input pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB and reduces the synthesizing time of a reconstructed image to about 7.02 sec compared with conventional algorithms.

  18. Preliminary optical design of the stereo channel of the imaging system simbiosys for the BepiColombo ESA mission

    NASA Astrophysics Data System (ADS)

    Da Deppo, Vania; Naletto, Giampiero; Cremonese, Gabriele; Debei, Stefano; Flamini, Enrico

    2017-11-01

    The paper describes the optical design and performance budget of a novel catadioptric instrument chosen as baseline for the Stereo Channel (STC) of the imaging system SIMBIOSYS for the BepiColombo ESA mission to Mercury. The main scientific objective is the 3D global mapping of the entire surface of Mercury with a scale factor of 50 m per pixel at periherm in four different spectral bands. The system consists of two twin cameras looking at ±20° from nadir and sharing some components, such as the relay element in front of the detector and the detector itself. The field of view of each channel is 4° × 4° with a scale factor of 23''/pixel. The system guarantees good optical performance with Ensquared Energy of the order of 80% in one pixel. For the straylight suppression, an intermediate field stop is foreseen, which gives the possibility to design an efficient baffling system.

  19. Ground-Level Digital Terrain Model (DTM) Construction from Tandem-X InSAR Data and Worldview Stereo-Photogrammetric Images

    NASA Technical Reports Server (NTRS)

    Lee, Seung-Kuk; Fatoyinbo, Temilola; Lagomasino, David; Osmanoglu, Batuhan; Feliciano, Emanuelle

    2016-01-01

    The ground-level digital elevation model (DEM) or digital terrain model (DTM) is invaluable for environmental modeling, such as water dynamics in forests, canopy height, forest biomass, carbon estimation, etc. We propose to extract the DTM over forested areas by combining the interferometric complex coherence from single-pass TanDEM-X (TDX) data at HH polarization with a digital surface model (DSM) derived from a high-resolution WorldView (WV) image pair, by means of the random-volume-over-ground (RVoG) model. The RVoG model is widely and successfully used in the polarimetric SAR interferometry (Pol-InSAR) technique for retrieving vertical forest structure parameters [1][2][3][4]. The ground-level DEM has been obtained from the complex volume decorrelation in the RVoG model together with the DSM derived by the stereo-photogrammetric technique. Finally, airborne lidar data were used to validate the ground-level DEM and forest canopy height results.
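
    Once the RVoG inversion has yielded a ground-level DTM and photogrammetry a DSM, the final bookkeeping step is simple: canopy height is their difference, and airborne lidar provides an independent check. A minimal sketch with illustrative toy arrays (not data from the paper):

```python
import numpy as np

def canopy_height(dsm, dtm):
    """Canopy height model: surface elevation minus ground elevation (metres)."""
    return np.clip(dsm - dtm, 0.0, None)   # clip: negative heights are noise

def rmse(a, b):
    """Root-mean-square error between two height rasters."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

dsm = np.array([[25.0, 30.0], [12.0, 5.0]])   # stereo-photogrammetric DSM (canopy top)
dtm = np.array([[ 5.0,  8.0], [ 4.0, 5.5]])   # TanDEM-X/RVoG ground elevation
chm = canopy_height(dsm, dtm)
lidar = np.array([[19.0, 23.0], [7.5, 0.0]])  # airborne lidar reference heights
```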

  20. EVA Robotic Assistant Project: Platform Attitude Prediction

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin M.

    2003-01-01

    The Robotic Systems Technology Branch is currently working on the development of an EVA Robotic Assistant under the sponsorship of the Surface Systems Thrust of the NASA Cross Enterprise Technology Development Program (CETDP). This will be a mobile robot that can follow a field geologist during planetary surface exploration, carry his tools and the samples that he collects, and provide video coverage of his activity. Prior experiments have shown that for such a robot to be useful it must be able to follow the geologist at walking speed over any terrain of interest. Geologically interesting terrain tends to be rough rather than smooth. The commercial mobile robot that was recently purchased as an initial testbed for the EVA Robotic Assistant Project, an ATRV Jr., is capable of faster-than-walking speed outdoors, but it has no suspension. Its wheels with inflated rubber tires are attached to axles that are connected directly to the robot body. Any angular motion of the robot produced by driving over rough terrain will directly affect the pointing of the on-board stereo cameras. The resulting image motion is expected to make tracking of the geologist more difficult. This will either require the tracker to search a larger part of the image to find the target from frame to frame, or to search mechanically in pan and tilt whenever the image motion is large enough to put the target outside the image in the next frame. This project consists of the design and implementation of a Kalman filter that combines the output of the angular rate sensors and linear accelerometers on the robot to estimate the motion of the robot base. The motion of the stereo camera pair mounted on the robot that results from this motion as the robot drives over rough terrain is then straightforward to compute. The estimates may then be used, for example, to command the robot's on-board pan-tilt unit to compensate for the camera motion induced by the base movement.
This has been accomplished in two ways: first, a standalone head stabilizer has been implemented, and second, the estimates have been used to influence the search strategy of the stereo tracking algorithm. Studies of the image motion of a tracked object indicate that the image motion of objects is suppressed while the robot crosses rough terrain. This work expands the range of speed and surface roughness over which the robot should be able to track and follow a field geologist and accept arm-gesture commands from the geologist.
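
    The kind of filter described, fusing angular-rate sensors with accelerometer-derived tilt, can be sketched as a two-state Kalman filter estimating one attitude angle and the gyro bias. This is a generic textbook formulation with illustrative noise parameters; the actual EVA Robotic Assistant filter is not specified in the abstract:

```python
class PitchKalman:
    """Two-state Kalman filter: pitch angle + gyro bias.

    The rate gyro drives the prediction; the accelerometer-derived tilt
    angle is the measurement. Noise parameters are illustrative.
    """

    def __init__(self, q_angle=1e-3, q_bias=1e-5, r_accel=1e-2):
        self.angle, self.bias = 0.0, 0.0
        self.P = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
        self.q_angle, self.q_bias, self.r = q_angle, q_bias, r_accel

    def update(self, gyro_rate, accel_angle, dt):
        # Predict: integrate the bias-corrected gyro rate.
        self.angle += (gyro_rate - self.bias) * dt
        P = self.P
        P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + self.q_angle)
        P[0][1] -= dt * P[1][1]
        P[1][0] -= dt * P[1][1]
        P[1][1] += self.q_bias * dt
        # Correct with the accelerometer tilt measurement.
        y = accel_angle - self.angle            # innovation
        s = P[0][0] + self.r
        k0, k1 = P[0][0] / s, P[1][0] / s       # Kalman gains
        self.angle += k0 * y
        self.bias += k1 * y
        p00, p01 = P[0][0], P[0][1]
        P[0][0] -= k0 * p00
        P[0][1] -= k0 * p01
        P[1][0] -= k1 * p00
        P[1][1] -= k1 * p01
        return self.angle
```

    The estimated base attitude would then feed a pan-tilt compensation command, as the abstract describes.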

  1. A multi-modal stereo microscope based on a spatial light modulator.

    PubMed

    Lee, M P; Gibson, G M; Bowman, R; Bernet, S; Ritsch-Marte, M; Phillips, D B; Padgett, M J

    2013-07-15

    Spatial Light Modulators (SLMs) can emulate classic microscopy techniques, including differential interference contrast (DIC) and (spiral) phase contrast. Their programmability offers flexibility and the option to multiplex images, for single-shot quantitative imaging or for simultaneous multi-plane imaging (depth-of-field multiplexing). We report the development of a microscope sharing many of the previously demonstrated capabilities, within a holographic implementation of a stereo microscope. Furthermore, we use the SLM to combine stereo microscopy with a refocusing filter and with a darkfield filter. The instrument is built around a custom inverted microscope and equipped with an SLM which provides various imaging modes, laterally displaced on the same camera chip. In addition, there is a wide-angle camera for visualisation of a larger region of the sample.

  2. Relationship Between the Expansion Speed and Radial Speed of CMEs Confirmed Using Quadrature Observations from SOHO and STEREO

    NASA Technical Reports Server (NTRS)

    Gopalswamy, Nat; Makela, Pertti; Yashiro, Seiji

    2011-01-01

    It is difficult to measure the true speed of Earth-directed CMEs from a coronagraph along the Sun-Earth line because of the occulting disk. However, the expansion speed (the speed with which the CME appears to spread in the sky plane) can be measured by such a coronagraph. In order to convert the expansion speed to the radial speed (which is important for space weather applications), one can use an empirical relationship between the two that assumes an average width for all CMEs. If we have the width information from quadrature observations, we can confirm the relationship between expansion and radial speeds derived by Gopalswamy et al. (2009, CEAB, 33, 115). The STEREO spacecraft were in quadrature with SOHO (STEREO-A 87° ahead of Earth and STEREO-B 94° behind Earth) on 2011 February 15, when a fast Earth-directed CME occurred. The CME was observed as a halo by the Large Angle and Spectrometric Coronagraph (LASCO) on board SOHO. The sky-plane speed was measured by SOHO/LASCO as the expansion speed, while the radial speed was measured by STEREO-A and STEREO-B. In addition, STEREO-A and STEREO-B images yielded the width of the CME, which is unknown from the Earth view. From the SOHO and STEREO measurements, we confirm the relationship between the expansion speed (Vexp) and radial speed (Vrad) derived previously from geometrical considerations (Gopalswamy et al. 2009): Vrad = 1/2 (1 + cot w) Vexp, where w is the half width of the CME. From STEREO-B images of the CME, we found that the CME had a full width of 75 degrees, so w = 37.5 degrees. This gives the relation Vrad = 1.15 Vexp. From LASCO observations, we measured Vexp = 897 km/s, so we get a radial speed of 1033 km/s. Direct measurement of the radial speed from STEREO gives 945 km/s (STEREO-A) and 1057 km/s (STEREO-B). These numbers differ by only 8.5% and 2.3% (for STEREO-A and STEREO-B, respectively) from the computed value.
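
    The abstract's numbers can be reproduced directly from the quoted relation. A worked check of Vrad = (1/2)(1 + cot w) Vexp with w = 37.5° and Vexp = 897 km/s:

```python
import math

def radial_speed(v_exp, half_width_deg):
    """CME radial speed from expansion speed, Vrad = (1/2)(1 + cot w) Vexp."""
    w = math.radians(half_width_deg)
    return 0.5 * (1.0 + 1.0 / math.tan(w)) * v_exp

factor = radial_speed(1.0, 37.5)   # conversion factor, ~1.15
v_rad = radial_speed(897.0, 37.5)  # ~1033 km/s, as quoted
```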

  3. Viking image processing. [digital stereo imagery and computer mosaicking

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    The paper discusses the camera systems, capable of recording black-and-white and color imagery, developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.

  4. High-Resolution Topography of Mercury from Messenger Orbital Stereo Imaging - the Southern Hemisphere Quadrangles

    NASA Astrophysics Data System (ADS)

    Preusker, F.; Oberst, J.; Stark, A.; Burmeister, S.

    2018-04-01

    We produce high-resolution (222 m/grid element) Digital Terrain Models (DTMs) for Mercury using stereo images from the MESSENGER orbital mission. We have developed a scheme to process large numbers of images, typically more than 6000, by photogrammetric techniques, including multiple-image matching, a pyramid strategy, and bundle block adjustment. In this paper, we present models for the map quadrangles H11, H12, H13, and H14 of the southern hemisphere.

  5. 76 FR 22386 - Availability for Exclusive, Non-Exclusive, or Partially-Exclusive Licensing of an Invention...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-21

    ... Partially-Exclusive Licensing of an Invention Concerning the Method and Apparatus for Stereo Imaging AGENCY... ``Method and Apparatus for Stereo Imaging,'' filed on March 11, 2011. The United States Government, as...: The invention relates to a method and apparatus for the generation of macro scale extremely high...

  6. Weakened Magnetization and Onset of Large-scale Turbulence in the Young Solar Wind—Comparisons of Remote Sensing Observations with Simulation

    NASA Astrophysics Data System (ADS)

    Chhiber, Rohit; Usmanov, Arcadi V.; DeForest, Craig E.; Matthaeus, William H.; Parashar, Tulasi N.; Goldstein, Melvyn L.

    2018-04-01

    Recent analysis of Solar-Terrestrial Relations Observatory (STEREO) imaging observations has described the early stages of the development of turbulence in the young solar wind in solar minimum conditions. Here we extend this analysis to a global magnetohydrodynamic (MHD) simulation of the corona and solar wind based on inner boundary conditions, either dipole or magnetogram type, that emulate solar minimum. The simulations have been calibrated using Ulysses and 1 au observations, and allow, within a well-understood context, a precise determination of the location of the Alfvén critical surfaces and the first surfaces where the plasma beta equals unity. The compatibility of the STEREO observations and the simulations is revealed by direct comparisons. Computation of the radial evolution of second-order magnetic field structure functions in the simulations indicates a shift toward more isotropic conditions at scales of a few Gm, as seen in the STEREO observations in the range 40–60 R ⊙. We affirm that the isotropization occurs in the vicinity of the first beta-unity surface. The interpretation based on early stages of in situ solar wind turbulence evolution is further elaborated, emphasizing the relationship of the observed length scales to the much smaller scales that eventually become the familiar turbulence inertial range cascade. We argue that the observed dynamics is the very early manifestation of large-scale in situ nonlinear couplings that drive turbulence and heating in the solar wind.
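
    The diagnostic mentioned, the second-order structure function S2(ℓ) = ⟨|B(x+ℓ) − B(x)|²⟩, is straightforward to compute along one direction of a field. A minimal 1-D sketch on a synthetic single-mode signal (illustrative only, not simulation data):

```python
import numpy as np

def structure_function(b, lags):
    """Second-order structure function of a 1-D signal for the given lags."""
    return np.array([np.mean((b[l:] - b[:-l]) ** 2) for l in lags])

# Synthetic stand-in for a magnetic field component sampled along a ray.
x = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
b = np.sin(4 * x)
s2 = structure_function(b, [1, 2, 4, 8])   # grows with lag at small scales
```

    In the analysis described, comparing S2 computed along and across the mean field as a function of radius is what reveals the shift toward isotropy.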

  7. Multiview three-dimensional display with continuous motion parallax through planar aligned OLED microdisplays.

    PubMed

    Teng, Dongdong; Xiong, Yi; Liu, Lilin; Wang, Biao

    2015-03-09

    Existing multiview three-dimensional (3D) display technologies suffer from discontinuous motion parallax, because only a limited number of stereo-images are presented to the corresponding sub-viewing zones (SVZs). This paper proposes a novel multiview 3D display system that achieves continuous motion parallax by using a group of planar aligned OLED microdisplays. By blocking some light rays with baffles inserted between adjacent OLED microdisplays, a transitional stereo-image, assembled from two spatially complementary segments of adjacent stereo-images, is presented to a complementary fusing zone (CFZ) located between two adjacent SVZs. For a moving observation point, the spatial ratio of the two complementary segments evolves gradually, producing continuously changing transitional stereo-images and thus overcoming the problem of discontinuous motion parallax. The proposed display system employs a projection-type architecture, retaining full display resolution while keeping a thin optical structure, which offers great potential for portable or mobile 3D display applications. Experimentally, a prototype display system with 9 OLED microdisplays is demonstrated.

  8. CHAMP (Camera, Handlens, and Microscope Probe)

    NASA Technical Reports Server (NTRS)

    Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution, from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image-filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We give an overview of CHAMP's instrument performance and basic design considerations below.
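
    Z-stacking of the kind described can be sketched as follows: from a stack of frames focused at different depths, keep each pixel from the frame where it is locally sharpest. The gradient-energy sharpness score here is our choice for illustration, not CHAMP's documented measure:

```python
import numpy as np

def local_sharpness(img):
    """Per-pixel sharpness score: local gradient energy."""
    gy, gx = np.gradient(img.astype(float))
    return gx ** 2 + gy ** 2

def z_stack(frames):
    """Fuse equally sized 2-D frames into one all-in-focus image."""
    scores = np.stack([local_sharpness(f) for f in frames])
    best = np.argmax(scores, axis=0)            # sharpest frame per pixel
    stack = np.stack([f.astype(float) for f in frames])
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```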

  9. Early detection of glaucoma using fully automated disparity analysis of the optic nerve head (ONH) from stereo fundus images

    NASA Astrophysics Data System (ADS)

    Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.

    2006-03-01

    Early detection of structural damage to the optic nerve head (ONH) is critical in the diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability. Even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours, which computes accumulated disparities in the disc and cup regions from stereo fundus image pairs, has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming a known camera geometry. High correlation between computer-generated and manually segmented cup-to-disc ratios has already been demonstrated in a longitudinal study involving 159 stereo fundus image pairs. However, the clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates the subjective variability encountered in current clinical methods and provides ophthalmologists with a cost-effective, quantitative method for detecting ONH structural damage early in glaucoma.

  10. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    PubMed Central

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM, and Flash, through IP reuse. The system hardware is implemented in a single FPGA chip containing a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and the users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo image pairs of many different sizes, up to a maximum of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency. With an FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum disparity of 64 pixels. PMID:23459385
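
    The quoted throughput implies a particular degree of hardware parallelism, which a back-of-envelope check makes explicit. This assumes one disparity candidate per parallel SAD unit per clock cycle; the paper's actual pipeline may differ:

```python
CLOCK_HZ = 90e6                 # FPGA clock quoted in the abstract
W, H, FPS, D = 640, 480, 23, 64 # frame size, frame rate, disparity levels

pixels_per_second = W * H * FPS                   # ~7.07e6 output pixels/s
cycles_per_pixel = CLOCK_HZ / pixels_per_second   # ~12.7 clocks per pixel
parallel_units = D / cycles_per_pixel             # ~5 SAD candidates per clock
```

    So evaluating 64 disparity candidates in ~12.7 clocks per pixel requires roughly five SAD units working in parallel, a plausible figure for the DSP Builder pipeline described.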

  11. Automatic detection and recognition of traffic signs in stereo images based on features and probabilistic neural networks

    NASA Astrophysics Data System (ADS)

    Sheng, Yehua; Zhang, Ka; Ye, Chun; Liang, Cheng; Li, Jian

    2008-04-01

    Considering the problem of automatic traffic sign detection and recognition in stereo images captured under motion conditions, a new algorithm for traffic sign detection and recognition based on features and probabilistic neural networks (PNN) is proposed in this paper. Firstly, global statistical color features of the left image are computed, and for red, yellow, and blue traffic signs the left image is segmented into three binary images by a self-adaptive color segmentation method. Secondly, gray-value projection and shape analysis are used to confirm traffic sign regions in the left image, and stereo image matching is used to locate the corresponding traffic signs in the right image. Thirdly, self-adaptive image segmentation is used to extract the binary inner core shapes of the detected traffic signs, and one-dimensional feature vectors of the inner core shapes are computed by the central projection transformation. Fourthly, these vectors are input to the trained probabilistic neural networks for traffic sign recognition. Lastly, the recognition results for the left image are compared with those for the right image; if the results agree, they are confirmed as the final recognition results. The new algorithm was applied to 220 real images of natural scenes taken by a vehicle-borne mobile photogrammetry system in Nanjing at different times. Experimental results show a detection and recognition rate of over 92%. The algorithm is thus not only simple but also reliable and fast for real traffic sign detection and recognition. Furthermore, it can obtain geometric information about the traffic signs while recognizing their types.
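
    The PNN classification stage can be sketched generically: each class density is a Parzen-window sum of Gaussian kernels centred on that class's training feature vectors, and a sign is assigned to the class with the highest density. The toy feature vectors below are illustrative, not central-projection features from the paper:

```python
import numpy as np

def pnn_classify(x, train_x, train_y, sigma=0.5):
    """Probabilistic neural network: argmax of Parzen-window class densities."""
    x = np.asarray(x, dtype=float)
    scores = {}
    for label in set(train_y):
        pts = np.array([p for p, y in zip(train_x, train_y) if y == label],
                       dtype=float)
        d2 = np.sum((pts - x) ** 2, axis=1)           # squared distances
        scores[label] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)
```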

  12. The value of magnetoencephalography for seizure-onset zone localization in magnetic resonance imaging-negative partial epilepsy

    PubMed Central

    Bouet, Romain; Delpuech, Claude; Ryvlin, Philippe; Isnard, Jean; Guenot, Marc; Bertrand, Olivier; Hammers, Alexander; Mauguière, François

    2013-01-01

    Surgical treatment of epilepsy is a challenge for patients with non-contributive brain magnetic resonance imaging. However, surgery is feasible if the seizure-onset zone is precisely delineated through intracranial electroencephalography recording. We recently described a method, volumetric imaging of epileptic spikes, to delineate the spiking volume of patients with focal epilepsy using magnetoencephalography. We postulated that the extent of the spiking volume delineated with volumetric imaging of epileptic spikes could predict the localizability of the seizure-onset zone by intracranial electroencephalography investigation and outcome of surgical treatment. Twenty-one patients with non-contributive magnetic resonance imaging findings were included. All patients underwent intracerebral electroencephalography investigation through stereotactically implanted depth electrodes (stereo-electroencephalography) and magnetoencephalography with delineation of the spiking volume using volumetric imaging of epileptic spikes. We evaluated the spatial congruence between the spiking volume determined by magnetoencephalography and the localization of the seizure-onset zone determined by stereo-electroencephalography. We also evaluated the outcome of stereo-electroencephalography and surgical treatment according to the extent of the spiking volume (focal, lateralized but non-focal or non-lateralized). For all patients, we found a spatial overlap between the seizure-onset zone and the spiking volume. For patients with a focal spiking volume, the seizure-onset zone defined by stereo-electroencephalography was clearly localized in all cases and most patients (6/7, 86%) had a good surgical outcome. Conversely, stereo-electroencephalography failed to delineate a seizure-onset zone in 57% of patients with a lateralized spiking volume, and in the two patients with bilateral spiking volume. 
Four of the 12 patients with non-focal spiking volumes were operated upon; none became seizure-free. As a whole, patients with focal magnetoencephalography results from volumetric imaging of epileptic spikes are good surgical candidates, and the implantation strategy should incorporate the volumetric imaging of epileptic spikes results. By contrast, patients with non-focal magnetoencephalography results are less likely to have a localized seizure-onset zone, and stereo-electroencephalography is not advised unless clear localizing information is provided by other presurgical investigation methods. PMID:24014520

  13. The value of magnetoencephalography for seizure-onset zone localization in magnetic resonance imaging-negative partial epilepsy.

    PubMed

    Jung, Julien; Bouet, Romain; Delpuech, Claude; Ryvlin, Philippe; Isnard, Jean; Guenot, Marc; Bertrand, Olivier; Hammers, Alexander; Mauguière, François

    2013-10-01

    Surgical treatment of epilepsy is a challenge for patients with non-contributive brain magnetic resonance imaging. However, surgery is feasible if the seizure-onset zone is precisely delineated through intracranial electroencephalography recording. We recently described a method, volumetric imaging of epileptic spikes, to delineate the spiking volume of patients with focal epilepsy using magnetoencephalography. We postulated that the extent of the spiking volume delineated with volumetric imaging of epileptic spikes could predict the localizability of the seizure-onset zone by intracranial electroencephalography investigation and outcome of surgical treatment. Twenty-one patients with non-contributive magnetic resonance imaging findings were included. All patients underwent intracerebral electroencephalography investigation through stereotactically implanted depth electrodes (stereo-electroencephalography) and magnetoencephalography with delineation of the spiking volume using volumetric imaging of epileptic spikes. We evaluated the spatial congruence between the spiking volume determined by magnetoencephalography and the localization of the seizure-onset zone determined by stereo-electroencephalography. We also evaluated the outcome of stereo-electroencephalography and surgical treatment according to the extent of the spiking volume (focal, lateralized but non-focal or non-lateralized). For all patients, we found a spatial overlap between the seizure-onset zone and the spiking volume. For patients with a focal spiking volume, the seizure-onset zone defined by stereo-electroencephalography was clearly localized in all cases and most patients (6/7, 86%) had a good surgical outcome. Conversely, stereo-electroencephalography failed to delineate a seizure-onset zone in 57% of patients with a lateralized spiking volume, and in the two patients with bilateral spiking volume. 
Four of the 12 patients with non-focal spiking volumes were operated upon; none became seizure-free. As a whole, patients with focal magnetoencephalography results from volumetric imaging of epileptic spikes are good surgical candidates, and the implantation strategy should incorporate the volumetric imaging of epileptic spikes results. By contrast, patients with non-focal magnetoencephalography results are less likely to have a localized seizure-onset zone, and stereo-electroencephalography is not advised unless clear localizing information is provided by other presurgical investigation methods.

  14. Curved CCD detector devices and arrays for multispectral astrophysical applications and terrestrial stereo panoramic cameras

    NASA Astrophysics Data System (ADS)

    Swain, Pradyumna; Mark, David

    2004-09-01

    The emergence of curved CCD detectors as individual devices or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras represents a major optical design advancement that greatly enhances the scientific potential of such instruments. In altering the primary detection surface within the telescope's optical instrumentation system from flat to curved, and conforming the applied CCD's shape precisely to the contour of the telescope's curved focal plane, a major increase in the amount of transmittable light at various wavelengths through the system is achieved. This in turn enables multi-spectral, ultra-sensitive imaging with the much greater spatial resolution necessary for large and very large telescope applications, including those involving infrared image acquisition and spectroscopy, conducted over very wide fields of view. For earth-based and space-borne optical telescopes, the advent of curved CCDs as the principal detectors provides a simplification of the telescope's adjoining optics, reducing the number of optical elements and the occurrence of optical aberrations associated with large corrective optics used to conform to flat detectors. New astronomical experiments may be devised with curved CCD applications, in conjunction with large-format cameras and curved mosaics, including three-dimensional imaging spectroscopy conducted over multiple wavelengths simultaneously, wide-field real-time stereoscopic tracking of remote objects within the solar system at high resolution, and deep-field survey mapping of distant objects such as galaxies with much greater multi-band spatial precision over larger sky regions.
Terrestrial stereo panoramic cameras equipped with arrays of curved CCDs joined with associated wide-field optics will require less optical glass and no mechanically moving parts to maintain continuous proper stereo convergence over wider perspective viewing fields than their flat-CCD counterparts, lightening the cameras and enabling faster scanning and 3D integration of objects moving within a planetary terrain environment. Preliminary experiments conducted at the Sarnoff Corporation indicate the feasibility of curved CCD imagers with acceptable electro-optic integrity. Currently, we are in the process of evaluating the electro-optic performance of a curved wafer-scale CCD imager. Detailed ray-trace modeling and experimental electro-optical performance data obtained from the curved imager will be presented at the conference.

  15. Soil Sample Poised at TEGA Door

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image was taken by NASA's Phoenix Mars Lander's Surface Stereo Imager on Sol 11 (June 5, 2008), the eleventh day after landing. It shows the Robotic Arm scoop containing a soil sample poised over the partially open door of the Thermal and Evolved-Gas Analyzer's number four cell, or oven.

    Light-colored clods of material visible toward the scoop's lower edge may be part of the crusted surface material seen previously near the foot of the lander. The material inside the scoop has been slightly brightened in this image.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  16. Generalized parallel-perspective stereo mosaics from airborne video.

    PubMed

    Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M

    2004-02-01

    In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.
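
    The linear-error claim can be seen with a short sketch. This is our gloss on the abstract's statement, not the paper's derivation, and the proportionality constant k (relating effective baseline to depth) is our notation:

```latex
% Fixed-baseline perspective stereo: depth z from disparity d, focal
% length f, baseline B; a disparity uncertainty \delta d induces
z = \frac{fB}{d}
\qquad\Longrightarrow\qquad
\delta z \approx \frac{z^{2}}{fB}\,\delta d
\quad\text{(error quadratic in } z\text{).}
% Parallel-perspective mosaics: the adaptive baseline grows with depth,
% B(z) = k z, keeping each point's disparity fixed, so
\delta z \approx \frac{z^{2}}{f\,k\,z}\,\delta d
           = \frac{z}{f k}\,\delta d
\quad\text{(error linear in } z\text{).}
```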

  17. On Solid Ground

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This view of one of the footpads of NASA's three-legged Phoenix Mars Lander shows a solid surface at the spacecraft's landing site. As the legs touched down on the surface of Mars, they kicked up some loose material on top of the footpad, but overall, the surface is unperturbed.

    Each footpad is about the size of a large dinner plate, measuring 11.5 inches from rim to rim. The base of the footpad is shaped like the bottom of a shallow bowl to provide stability.

    This image was taken by the spacecraft's Surface Stereo Imager shortly after landing, at 17:07 local time on Mars.


  18. Guide to Magellan image interpretation

    NASA Technical Reports Server (NTRS)

    Ford, John P.; Plaut, Jeffrey J.; Weitz, Catherine M.; Farr, Tom G.; Senske, David A.; Stofan, Ellen R.; Michaels, Gregory; Parker, Timothy J.; Fulton, D. (Editor)

    1993-01-01

    An overview of Magellan Mission requirements, radar system characteristics, and methods of data collection is followed by a description of the image data, mosaic formats, areal coverage, resolution, and pixel DN-to-dB conversion. The availability and sources of image data are outlined. Applications of the altimeter data to estimate relief, Fresnel reflectivity, and surface slope, and the radiometer data to derive microwave emissivity are summarized and illustrated in conjunction with corresponding SAR image data. Same-side and opposite-side stereo images provide examples of parallax differences from which to measure relief with a lateral resolution many times greater than that of the altimeter. Basic radar interactions with geologic surfaces are discussed with respect to radar-imaging geometry, surface roughness, backscatter modeling, and dielectric constant. Techniques are described for interpreting the geomorphology and surface properties of surficial features, impact craters, tectonically deformed terrain, and volcanic landforms. The morphologic characteristics that distinguish impact craters from volcanic craters are defined. Criteria for discriminating extensional and compressional origins of tectonic features are discussed. Volcanic edifices, constructs, and lava channels are readily identified from their radar outlines in images. Geologic map units are identified on the basis of surface texture, image brightness, pattern, and morphology. Superposition, cross-cutting relations, and areal distribution of the units serve to elucidate the geologic history.

  19. SU-F-J-140: Using Handheld Stereo Depth Cameras to Extend Medical Imaging for Radiation Therapy Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkins, C; Xing, L; Yu, S

    Purpose: A correct body contour is essential for the accuracy of dose calculation in radiation therapy. While modern medical imaging technologies provide highly accurate representations of body contours, there are times when a patient's anatomy cannot be fully captured or there is a lack of easy access to CT/MRI scanning. Recently, handheld cameras have emerged that are capable of performing three dimensional (3D) scans of patient surface anatomy. By combining 3D camera and medical imaging data, the patient's surface contour can be fully captured. Methods: A proof-of-concept system matches a patient surface model, created using a handheld stereo depth camera (DC), to the available areas of a body contour segmented from a CT scan. The matched surface contour is then converted to a DICOM structure and added to the CT dataset to provide additional contour information. In order to evaluate the system, a 3D model of a patient was created by segmenting the body contour with a treatment planning system (TPS) and fabricated with a 3D printer. A DC and associated software were used to create a 3D scan of the printed phantom. The surface created by the camera was then registered to a CT model that had been cropped to simulate missing scan data. The aligned surface was then imported into the TPS and compared with the originally segmented contour. Results: The RMS error for the alignment between the camera and cropped CT models was 2.26 mm. Mean distance between the aligned camera surface and the ground truth model was −1.23 +/− 2.47 mm. Maximum deviations were < 1 cm and occurred in areas of high concavity or where anatomy was close to the couch. Conclusion: The proof-of-concept study shows an accurate, easy and affordable method to extend medical imaging for radiation therapy planning using 3D cameras without additional radiation. Intel provided the camera hardware used in this study.
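An RMS alignment error like the one reported above can be computed as the root-mean-square of nearest-neighbour distances between the scanned surface points and the reference point cloud. A brute-force sketch with a hypothetical helper, not the authors' implementation:

```python
import numpy as np

def rms_alignment_error(surface_pts, reference_pts):
    """RMS of nearest-neighbour distances from each scanned surface
    point to the reference (e.g. CT-derived) point cloud.

    Brute-force O(N*M); fine for small clouds, illustrative only.
    """
    diffs = surface_pts[:, None, :] - reference_pts[None, :, :]  # (N, M, 3)
    dists = np.linalg.norm(diffs, axis=2)                        # (N, M)
    nearest = dists.min(axis=1)                                  # (N,)
    return float(np.sqrt(np.mean(nearest ** 2)))
```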

  20. Discriminability limits in spatio-temporal stereo block matching.

    PubMed

    Jain, Ankit K; Nguyen, Truong Q

    2014-05-01

    Disparity estimation is a fundamental task in stereo imaging and is a well-studied problem. Recently, methods have been adapted to the video domain where motion is used as a matching criterion to help disambiguate spatially similar candidates. In this paper, we analyze the validity of the underlying assumptions of spatio-temporal disparity estimation, and determine the extent to which motion aids the matching process. By analyzing the error signal for spatio-temporal block matching under the sum of squared differences criterion and treating motion as a stochastic process, we determine the probability of a false match as a function of image features, motion distribution, image noise, and number of frames in the spatio-temporal patch. This performance quantification provides insight into when spatio-temporal matching is most beneficial in terms of the scene and motion, and can be used as a guide to select parameters for stereo matching algorithms. We validate our results through simulation and experiments on stereo video.
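The sum-of-squared-differences matching criterion analyzed above can be sketched for a single rectified scanline; `ssd_disparity` is a hypothetical helper illustrating the criterion, not the paper's analysis code:

```python
import numpy as np

def ssd_disparity(left_row, right_row, x, block=5, max_disp=10):
    """Estimate the disparity at column x of a rectified row pair by
    minimising the sum of squared differences over candidate shifts.

    Convention: the feature at left_row[x] appears at right_row[x - d].
    """
    half = block // 2
    patch = left_row[x - half:x + half + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        xr = x - d
        if xr - half < 0:          # candidate window would leave the image
            break
        cand = right_row[xr - half:xr + half + 1]
        cost = float(np.sum((patch - cand) ** 2))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```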

  1. ASTER Waves

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The pattern on the right half of this image of the Bay of Bengal is the result of two opposing wave trains colliding. This ASTER sub-scene, acquired on March 29, 2000, covers an area 18 kilometers (13 miles) wide and 15 kilometers (9 miles) long in three bands of the reflected visible and infrared wavelength region. The visible and near-infrared bands highlight surface waves due to specular reflection of sunlight off of the wave faces.

    Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of International Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products. Dr. Anne Kahle at NASA's Jet Propulsion Laboratory, Pasadena, California, is the U.S. science team leader; Moshe Pniel of JPL is the project manager. ASTER is the only high-resolution imaging sensor on Terra. The primary goal of the ASTER mission is to obtain high-resolution image data in 14 channels over the entire land surface, as well as black and white stereo images. With a revisit time of between 4 and 16 days, ASTER will provide the capability for repeat coverage of changing areas on Earth's surface.

    The broad spectral coverage and high spectral resolution of ASTER will provide scientists in numerous disciplines with critical information for surface mapping and monitoring dynamic conditions and temporal change. Examples of applications include monitoring glacial advances and retreats, potentially active volcanoes, thermal pollution, and coral reef degradation; identifying crop stress; determining cloud morphology and physical properties; evaluating wetlands; mapping surface temperature of soils and geology; and measuring surface heat balance.

  2. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system; according to the principles of binocular vision, we deduce the relationship between binocular viewing and the dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular vision, and obtain the positional arrangement of prism, camera, and object that gives the best stereo display. Finally, using NVIDIA active-shutter stereo glasses, we realize a three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. A stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.

  3. How Phoenix Creates Color Images (Animation)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site] Click on image for animation

    This simple animation shows how a color image is made from images taken by Phoenix.

    The Surface Stereo Imager captures the same scene with three different filters. The images are sent to Earth in black and white and the color is added by mission scientists.

    By contrast, consumer digital cameras and cell phones have filters built in and do all of the color processing within the camera itself.
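The process described above, building a color image from three filtered exposures, amounts to stacking three grayscale frames into the channels of one RGB array. A minimal sketch with synthetic arrays; the actual SSI filter wavelengths are not exactly the RGB primaries:

```python
import numpy as np

def compose_color(red_filter_img, green_filter_img, blue_filter_img):
    """Stack three grayscale filter exposures into one H x W x 3 RGB image."""
    return np.stack([red_filter_img, green_filter_img, blue_filter_img],
                    axis=-1)
```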

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  4. Stereo imaging and cytocompatibility of a model dental implant surface formed by direct laser fabrication.

    PubMed

    Mangano, Carlo; Raspanti, Mario; Traini, Tonino; Piattelli, Adriano; Sammons, Rachel

    2009-03-01

    Direct laser fabrication (DLF) allows solids with complex geometry to be produced by sintering metal powder particles in a focused laser beam. In this study, 10 Ti6Al4V alloy model dental root implants were obtained by DLF, and surface characterization was carried out using stereo scanning electron microscopy to produce 3D reconstructions. The surfaces were extremely irregular, with approximately 100 microm deep, narrow intercommunicating crevices, shallow depressions and deep, rounded pits of widely variable shape and size, showing ample scope for interlocking with the host bone. Roughness parameters were as follows: R(t), 360.8 microm; R(z), 358.4 microm; R(a), 67.4 microm; and R(q), 78.0 microm. Disc specimens produced by DLF with an identically prepared surface were used for biocompatibility studies with rat calvarial osteoblasts: after 9 days, cells had attached and spread on the DLF surface, spanning the crevices and voids. Cell density was similar to that on a commercial rough microtextured surface but lower than on commercial smooth machined and smooth-textured grit-blasted, acid-etched surfaces. Human fibrin clot extension on the DLF surface was slightly improved by inorganic acid etching to increase the microroughness. With further refinements, DLF could be an economical means of manufacturing implants from titanium alloys. (c) 2008 Wiley Periodicals, Inc.
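The amplitude roughness parameters quoted above have standard definitions over a height profile. A minimal sketch for R(a), R(q) and R(t); R(z), which is defined over sampling lengths, is omitted:

```python
import numpy as np

def roughness(profile):
    """Amplitude roughness parameters of a 1-D height profile:
    Ra (mean absolute deviation), Rq (RMS deviation), both about the
    mean line, and Rt (total peak-to-valley height)."""
    z = np.asarray(profile, dtype=float)
    dev = z - z.mean()
    return {
        "Ra": float(np.mean(np.abs(dev))),
        "Rq": float(np.sqrt(np.mean(dev ** 2))),
        "Rt": float(z.max() - z.min()),
    }
```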

  5. Phoenix Magnetic Properties Experiments Using the Surface Stereo Imager and the MECA Microscopy Station

    NASA Astrophysics Data System (ADS)

    Madsen, M. B.; Drube, L.; Falkenberg, T. V.; Haspang, M. P.; Ellehoj, M.; Leer, K.; Olsen, L. D.; Goetz, W.; Hviid, S. F.; Gunnlaugsson, H. P.; Hecht, M. H.; Parrat, D.; Lemmon, M. T.; Morris, R. V.; Pike, T.; Sykulska, H.; Vijendran, S.; Britt, D.; Staufer, U.; Marshall, J.; Smith, P. H.

    2008-12-01

    Phoenix carries as part of its scientific payload a series of magnetic properties experiments designed to utilize onboard instruments for the investigation of airborne dust, air-fall samples stirred by the retro-rockets of the lander, and sampled surface and sub-surface material from the northern plains of Mars. One of the aims of these experiments on Phoenix is to investigate any possible differences between airborne dust and soils found on the northern plains and similar samples from the equatorial region of Mars. The magnetic properties experiments are designed to control the pattern of dust attracted to or accumulated on the surfaces, to enable interpretation of these patterns in terms of certain magnetic properties of the dust forming the patterns. The Surface Stereo Imager (SSI) provides multi-spectral information about dust accumulated on three iSweep targets on the lander instrument deck. The iSweeps utilize built-in permanent magnets and six different background colors for the dust, compared to only one for the MER sweep magnet. Simultaneously, these iSweep targets are used as in-situ radiometric calibration targets for the SSI. The visible/near-infrared spectra acquired so far are similar to typical Martian dust and soil spectra. Because of the multiple background colors of the iSweeps, the effect of the translucence of thin dust layers can be estimated. High resolution images (4 micrometers/px) acquired by the Optical Microscope (OM) showed subtle differences between different soil samples in particle size distribution, color and morphology. Most samples contain large (typically 50 micrometers), subrounded particles that are substantially magnetic. The colors of these particles range from red and brown to (almost) black. Based on results from the Mars Exploration Rovers, these dark particles are believed to be enriched in magnetite. Occasionally, very bright, whitish particles were also found on the magnet substrates, likely held by cohesive forces to the magnet surface and/or to other (magnetic) particles.

  6. Generation of Digital Surface Models from satellite photogrammetry: the DSM-OPT service of the ESA Geohazards Exploitation Platform (GEP)

    NASA Astrophysics Data System (ADS)

    Stumpf, André; Michéa, David; Malet, Jean-Philippe

    2017-04-01

    The continuously increasing fleet of agile stereo-capable very-high resolution (VHR) optical satellites has facilitated the acquisition of multi-view images of the earth surface. Theoretical revisit times have been reduced to less than one day, and the highest spatial resolution which is commercially available now amounts to 30 cm/pixel. Digital Surface Models (DSM) and point clouds computed from such satellite stereo-acquisitions can provide valuable input for studies in geomorphology, tectonics, glaciology, hydrology and urban remote sensing. The photogrammetric processing, however, still requires significant expertise, computational resources and costly commercial software. To enable a large Earth Science community (researchers and end-users) to easily and rapidly process VHR multi-view images, this work targets the implementation of a fully automatic satellite-photogrammetry pipeline (i.e. DSM-OPT) on the ESA Geohazards Exploitation Platform (GEP). The implemented pipeline is based on the open-source photogrammetry library MicMac [1] and is designed for distributed processing on a cloud-based infrastructure. The service can be employed in pre-defined processing modes (i.e. urban, plain, hilly, and mountainous environments) or in an advanced processing mode (in which expert users have the possibility to adapt the processing parameters to their specific applications). Four representative use cases are presented to illustrate the accuracy of the resulting surface models and ortho-images as well as the overall processing time. These use cases consist of the construction of surface models from series of Pléiades images for four applications: urban analysis (Strasbourg, France), landslide detection in mountainous environments (South French Alps), co-seismic deformation in mountain environments (Central Italy earthquake sequence of 2016) and fault recognition for paleo-tectonic analysis (North-East India).
Comparisons of the satellite-derived topography to airborne LiDAR topography are discussed. [1] Rupnik, E., Pierrot Deseilligny, M., Delorme, A., and Klinger, Y.: Refined satellite image orientation in the free open-source photogrammetric tools APERO/MICMAC, ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-1, 83-90, doi:10.5194/isprs-annals-III-1-83-2016, 2016.

  7. Opportunity's View After Drive on Sol 1806 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11816 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11816

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 60.86 meters (200 feet) on the 1,806th Martian day, or sol, of Opportunity's surface mission (Feb. 21, 2009). North is at the center; south at both ends.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.
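Composing such a red-blue (red-cyan) anaglyph amounts to taking the red channel from the left-eye image and the remaining channels from the right-eye image. A minimal sketch, assuming two co-registered RGB arrays:

```python
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Red-cyan anaglyph: red channel from the left image,
    green and blue channels from the right image."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out
```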

    Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    Engineers designed the Sol 1806 drive to be performed backwards as a strategy for redistributing lubricant in the rover's wheels. The right-front wheel had been showing signs of increased friction.

    The rover's position after the Sol 1806 drive was about 2 kilometers (1.2 miles) south southwest of Victoria Crater. Cumulative odometry was 14.74 kilometers (9.16 miles) since landing in January 2004, including 2.96 kilometers (1.84 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008).

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  8. Opportunity's View After Long Drive on Sol 1770 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11791 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11791

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 104 meters (341 feet) on the 1,770th Martian day, or sol, of Opportunity's surface mission (January 15, 2009).

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    Prior to the Sol 1770 drive, Opportunity had driven less than a meter since Sol 1713 (November 17, 2008), while it used the tools on its robotic arm first to examine a meteorite called 'Santorini' during weeks of restricted communication while the sun was nearly in line between Mars and Earth, then to examine bedrock and soil targets near Santorini.

    The rover's position after the Sol 1770 drive was about 1.1 kilometers (two-thirds of a mile) south southwest of Victoria Crater. Cumulative odometry was 13.72 kilometers (8.53 miles) since landing in January 2004, including 1.94 kilometers (1.21 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008).

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  9. The planetary hydraulics analysis based on a multi-resolution stereo DTMs and LISFLOOD-FP model: Case study in Mars

    NASA Astrophysics Data System (ADS)

    Kim, J.; Schumann, G.; Neal, J. C.; Lin, S.

    2013-12-01

    Earth is the only planet possessing an active hydrological system based on H2O circulation. However, after Mariner 9 discovered fluvial channels on Mars with features similar to Earth's, it became clear that some solid planets and satellites once had water flows or pseudo hydrological systems of other liquids. After liquid water was identified as the agent of ancient martian fluvial activities, the valleys and channels on the martian surface were investigated by a number of remote sensing and in-situ measurements. Among all available data sets, the stereo DTMs and ortho-images from various successful orbital sensors, such as the High Resolution Stereo Camera (HRSC), Context Camera (CTX), and High Resolution Imaging Science Experiment (HiRISE), are most widely used to trace the origin and consequences of martian hydrological channels. However, geomorphological analysis with stereo DTMs and ortho-images over fluvial areas has some limitations, so a quantitative modeling method utilizing DTMs of various spatial resolutions is required. Thus in this study we tested the application of hydraulics analysis with multi-resolution martian DTMs, constructed in line with Kim and Muller's (2009) approach. An advanced LISFLOOD-FP model (Bates et al., 2010), which simulates in-channel dynamic wave behavior by solving 2D shallow water equations without advection, was introduced to conduct a high accuracy simulation together with 150-1.2 m resolution DTMs over test sites including Athabasca and Bahram valles. For application to the martian surface, the acceleration of gravity in LISFLOOD-FP was reduced to the martian value of 3.71 m s-2, and the Manning's n value (friction), the only free parameter in the model, was adjusted for martian gravity by scaling it. The approach employing multi-resolution stereo DTMs and LISFLOOD-FP was superior compared with other research cases using a single DTM source for hydraulics analysis. HRSC DTMs, covering 50-150 m resolutions, were used to trace the rough routes of water flows over extensive target areas. Refinements through hydraulics simulations with CTX DTMs (~12-18 m resolution) and HiRISE DTMs (~1-4 m resolution) were then conducted by employing the output of the HRSC simulations as the initial conditions. Thus even limited coverage by high and very high resolution stereo DTMs enabled a high-precision hydraulics analysis reconstructing a whole fluvial event. In this manner, useful information for identifying the characteristics of martian fluvial activities, such as water depth along the time line, flow direction, and travel time, was successfully retrieved for each target tributary. Together with these useful outputs of the hydraulics analysis, the local roughness and photogrammetric control of the stereo DTMs appeared to be crucial elements for accurate fluvial simulation. The potential of this study should be further explored for application to other extraterrestrial bodies where fluvial activity once existed, as well as to the major martian channels and valleys.
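The gravity adjustment of Manning's n described above can be sketched as follows. Since Chezy's coefficient scales with sqrt(g) and Manning's n varies inversely with it, a common scaling is n_mars = n_earth * sqrt(g_earth / g_mars); this relation is an assumption here, as the abstract does not spell out the exact formula used:

```python
import math

G_EARTH = 9.81  # m/s^2
G_MARS = 3.71   # m/s^2, the value quoted in the abstract

def manning_n_for_mars(n_earth, g_earth=G_EARTH, g_mars=G_MARS):
    """Scale a terrestrial Manning roughness coefficient for Martian
    gravity.  Chezy's C scales with sqrt(g) and n ~ 1/C, so n scales
    with sqrt(g_earth / g_mars): lower gravity gives a higher n."""
    return n_earth * math.sqrt(g_earth / g_mars)
```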

  10. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study.

    PubMed

    Barsingerhorn, A D; Boonstra, F N; Goossens, H H L M

    2017-02-01

    Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye model to study how these optical properties affect the accuracy of different stereo eye-tracking methods. We found that pupil size, gaze direction and head position all influence the reconstruction of gaze, with resulting errors of about ±1.0 degrees at best. This shows that stereo eye-tracking may be an option if reliable calibration is not possible, but the applied eye model should account for the actual optics of the cornea.
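Each refractive surface in such a ray-tracing model applies Snell's law. A minimal sketch; the refractive indices in the test (air 1.0, cornea about 1.376) are typical schematic eye-model values assumed here, not quoted from the paper:

```python
import math

def refract_angle(theta_incident, n1, n2):
    """Snell's law at a single refractive surface:
    n1 * sin(theta1) = n2 * sin(theta2).  Angles in radians."""
    s = n1 * math.sin(theta_incident) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection")
    return math.asin(s)
```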

  11. Simultaneous Boundary-Layer Transition, Tip Vortex, and Blade Deformation Measurements of a Rotor in Hover

    NASA Technical Reports Server (NTRS)

    Heineck, James; Schairer, Edward; Ramasamy, Manikandan; Roozeboom, Nettie

    2016-01-01

    This paper describes simultaneous optical measurements of a sub-scale helicopter rotor in the U.S. Army Hover Chamber at NASA Ames Research Center. The measurements included thermal imaging of the rotor blades to detect boundary layer transition; retro-reflective background-oriented schlieren (RBOS) to visualize vortices; and stereo photogrammetry to measure displacements of the rotor blades, to compute spatial coordinates of the vortices from the RBOS data, and to map the thermal imaging data to a three-dimensional surface grid. The test also included an exploratory effort to measure flow near the rotor tip by tomographic particle image velocimetry (tomo-PIV), an effort that yielded valuable experience but little data. The thermal imaging was accomplished using an image-derotation method that allowed long integration times without image blur. By mapping the thermal image data to a surface grid it was possible to accurately locate transition in spatial coordinates along the length of the rotor blade.

  12. NASA's Earth Science Use of Commercially Availiable Remote Sensing Datasets: Cover Image

    NASA Technical Reports Server (NTRS)

    Underwood, Lauren W.; Goward, Samuel N.; Fearon, Matthew G.; Fletcher, Rose; Garvin, Jim; Hurtt, George

    2008-01-01

    The cover image incorporates high resolution stereo pairs acquired from the DigitalGlobe(R) QuickBird sensor. It shows a digital elevation model of Meteor Crater, Arizona at approximately 1.3 meter point-spacing. Image analysts used the Leica Photogrammetry Suite to produce the DEM. The outside portion was computed from two QuickBird panchromatic scenes acquired October 2006, while an Optech laser scan dataset was used for the crater's interior elevations. The crater's terrain model and image drape were created in a NASA Constellation Program project focused on simulating lunar surface environments for prototyping and testing lunar surface mission analysis and planning tools. This work exemplifies NASA's Scientific Data Purchase legacy and commercial high resolution imagery applications, as scientists use commercial high resolution data to examine lunar analog Earth landscapes for advanced planning and trade studies for future lunar surface activities. Other applications include landscape dynamics related to volcanism, hydrologic events, climate change, and ice movement.

  13. An Evaluation of ALOS Data in Disaster Applications

    NASA Astrophysics Data System (ADS)

    Igarashi, Tamotsu; Furuta, Ryoich; Ono, Makoto

    ALOS is the Advanced Land Observing Satellite, providing image data from three onboard sensors: PRISM, AVNIR-2 and PALSAR. PRISM is a panchromatic, high-resolution three-line-scanner for stereo observation of the earth surface. The positional accuracy of its images and the height accuracy of its Digital Surface Model (DSM) are high, so geographic information extraction is improved in the field of disaster applications by providing images of the disaster area. In particular, the pan-sharpened 3D image composed from PRISM and data from the four-band visible near-infrared radiometer AVNIR-2 is expected to provide information for understanding geographic and topographic features. PALSAR is an advanced multi-functional synthetic aperture radar (SAR) operated in L-band, appropriate for characterizing land surface features. PALSAR has many improvements over JERS-1/SAR, such as high sensitivity, high resolution, and polarimetric and ScanSAR observation modes. PALSAR is also applicable to SAR interferometry processing. This paper describes the evaluation of ALOS data characteristics from the viewpoint of disaster applications, through some exercise applications.
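Pan-sharpening of the kind mentioned above, merging panchromatic detail with lower-resolution color bands, is often illustrated with the Brovey transform. This sketch shows that generic method under the assumption that the bands are already resampled to the pan grid; it is not necessarily the processing actually used for ALOS products:

```python
import numpy as np

def brovey_pansharpen(ms_bands, pan, eps=1e-9):
    """Brovey-transform pan-sharpening: each multispectral band is
    rescaled by the ratio of the panchromatic image to the band sum.

    ms_bands: (B, H, W) array resampled to the pan grid; pan: (H, W).
    """
    ratio = pan / (ms_bands.sum(axis=0) + eps)  # eps avoids divide-by-zero
    return ms_bands * ratio[None, :, :]
```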

  14. Martian Surface & Pathfinder Airbags

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This image of the Martian surface was taken in the afternoon of Mars Pathfinder's first day on Mars. Taken by the Imager for Mars Pathfinder (IMP camera), the image shows a diversity of rocks strewn in the foreground. A hill is visible in the distance (the notch within the hill is an image artifact). Airbags are seen at the lower right.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.
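The stated figure of two millimeters resolution at a range of two meters implies an instantaneous field of view of about one milliradian per pixel, so the ground-sample size scales linearly with range. A minimal sketch; the IFOV value is inferred from the text, not quoted from IMP documentation:

```python
def spatial_resolution(range_m, ifov_rad=0.001):
    """Ground-sample size of one pixel at a given range, for a camera
    with the stated IFOV (1 mrad inferred from 2 mm at 2 m)."""
    return range_m * ifov_rad
```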

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  15. Stereo matching algorithm based on double components model

    NASA Astrophysics Data System (ADS)

    Zhou, Xiao; Ou, Kejun; Zhao, Jianxin; Mou, Xingang

    2018-03-01

    Tiny wires are a great threat to the safety of UAV flight. Because they occupy only a few isolated pixels far from the background, while most existing stereo matching methods require a support region of a certain area to improve robustness, or assume depth dependence between neighboring pixels to meet the requirements of global or semi-global optimization, there will be false alarms or even failures when images contain tiny wires. A new stereo matching algorithm based on a double components model is proposed in this paper. According to the different texture types, the input image is decomposed into two independent component images: one contains only the sparse wire texture, and the other contains all remaining parts. Different matching schemes are adopted for each pair of component images. Experiments proved that the algorithm can effectively calculate the depth image of the complex scenes seen by a patrol UAV, detecting tiny wires as well as large objects. Compared with current mainstream methods it has obvious advantages.

  16. HOPIS: hybrid omnidirectional and perspective imaging system for mobile robots.

    PubMed

    Lin, Huei-Yung; Wang, Min-Liang

    2014-09-04

    In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach.

  17. HOPIS: Hybrid Omnidirectional and Perspective Imaging System for Mobile Robots

    PubMed Central

    Lin, Huei-Yung; Wang, Min-Liang

    2014-01-01

    In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach. PMID:25192317

  18. Volumetric Forest Change Detection Through Vhr Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Akca, Devrim; Stylianidis, Efstratios; Smagas, Konstantinos; Hofer, Martin; Poli, Daniela; Gruen, Armin; Sanchez Martin, Victor; Altan, Orhan; Walli, Andreas; Jimeno, Elisa; Garcia, Alejandro

    2016-06-01

    Quick and economical ways of detecting planimetric and volumetric changes of forest areas are in high demand. A research platform, called FORSAT (A satellite processing platform for high resolution forest assessment), was developed for the extraction of 3D geometric information from VHR (very-high resolution) imagery from satellite optical sensors and automatic change detection. This 3D forest information solution was developed during a Eurostars project. FORSAT includes two main units. The first one is dedicated to the geometric and radiometric processing of satellite optical imagery and 2D/3D information extraction. This includes: image radiometric pre-processing, image and ground point measurement, improvement of geometric sensor orientation, quasi-epipolar image generation for stereo measurements, digital surface model (DSM) extraction by using a precise and robust image matching approach specially designed for VHR satellite imagery, generation of orthoimages, and 3D measurements in single images using mono-plotting and in stereo images as well as triplets. FORSAT supports most of the VHR optical imagery commonly used for civil applications: IKONOS, OrbView-3, SPOT-5 HRS, SPOT-5 HRG, QuickBird, GeoEye-1, WorldView-1/2, Pléiades 1A/1B, SPOT 6/7, and sensors of similar type to be expected in the future. The second unit of FORSAT is dedicated to 3D surface comparison for change detection. It allows users to import digital elevation models (DEMs), align them using an advanced 3D surface matching approach and calculate the 3D differences and volume changes between epochs. To this end, our 3D surface matching method LS3D is used. FORSAT is a single-source and flexible forest information solution with a very competitive price/quality ratio, allowing expert and non-expert remote sensing users to monitor forests in three and four dimensions from VHR optical imagery for many forest information needs. 
The capacity and benefits of FORSAT have been tested in six case studies located in Austria, Cyprus, Spain, Switzerland and Turkey, using optical data from different sensors and with the purpose of monitoring forests with different geometric characteristics. The validation run on the Cyprus dataset is reported and commented on.

  19. Operation and Performance of the Mars Exploration Rover Imaging System on the Martian Surface

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Litwin, Todd; Herkenhoff, Ken

    2005-01-01

This slide presentation details the Mars Exploration Rover (MER) imaging system. Over 144,000 images have been gathered from all Mars missions, with 83.5% of them gathered by MER. Each rover has 9 cameras (Navcam, front and rear Hazcam, Pancam, Microscopic Imager, Descent Camera, Engineering Camera, Science Camera) and produces 1024 x 1024 (1 megapixel) images in the same format. All onboard image processing code is implemented in flight software and includes extensive processing capabilities such as autoexposure, flat field correction, image orientation, thumbnail generation, subframing, and image compression. Ground image processing is done at the Jet Propulsion Laboratory's Multimission Image Processing Laboratory using Video Image Communication and Retrieval (VICAR), while stereo processing (left/right pairs) is provided for raw images, radiometric correction, solar energy maps, triangulation (Cartesian 3-space), and slope maps.

  20. Stereo Imaging Miniature Endoscope

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; Manohara, Harish; White, Victor; Shcheglov, Kirill V.; Shahinian, Hrayr

    2011-01-01

Stereo imaging requires two different perspectives of the same object; traditionally, a pair of side-by-side cameras would be used, but that is not feasible for something as tiny as an endoscope less than 4 mm in diameter that could be used for minimally invasive surgeries or geoexploration through tiny fissures or bores. The solution proposed here is to employ a single lens and a pair of conjugated multiple-bandpass filters (CMBFs) to separate the stereo images. When a CMBF is placed in front of each of the stereo channels, only those wavelengths of the visible spectrum that fall within the passbands of the CMBF are transmitted at any one time under illumination. Because the passbands are conjugated, only one of the two channels will see a particular wavelength. These time-multiplexed images are then mixed and reconstructed for display as stereo images. The basic principle of stereo imaging involves an object that is illuminated at specific wavelengths, with a range of illumination wavelengths time multiplexed. The light reflected from the object selectively passes through one of the two CMBFs integrated with two pupils separated by a baseline distance, and is focused onto the imaging plane through an objective lens. The passband ranges of the CMBFs and the illumination wavelengths are synchronized such that each CMBF transmits only the alternate illumination wavelength bands, and the transmission bands of the CMBFs are complementary to each other, so that when one transmits, the other blocks. This can be clearly understood if the wavelength bands are divided broadly into red, green, and blue: the illumination wavelengths then contain two bands in red (R1, R2), two in green (G1, G2), and two in blue (B1, B2). Therefore, when the object is illuminated by R1, the reflected light enters through only the left CMBF, as the R1 band corresponds to the transmission window of the left CMBF at the left pupil; it is blocked by the right CMBF. 
The transmitted band is focused on the focal plane array (FPA).

  1. Dig Hazard Assessment Using a Stereo Pair of Cameras

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Trebi-Ollennu, Ashitey

    2012-01-01

This software evaluates the terrain within reach of a lander's robotic arm for dig hazards using a stereo pair of cameras that are part of the lander's sensor system. A relative level of risk is calculated for a set of dig sectors. There are two versions of this software: one is designed to run onboard a lander as part of the flight software, and the other runs on a PC under Linux as a ground tool that produces the same results generated on the lander, given stereo images acquired by the lander and downlinked to Earth. Onboard dig hazard assessment is accomplished by executing a workspace panorama command sequence. This sequence acquires a set of stereo pairs of images of the terrain the arm can reach, generates a set of candidate dig sectors, and assesses the dig hazard of each candidate dig sector. The 3D perimeter points of candidate dig sectors are generated using configurable parameters. A 3D reconstruction of the terrain in front of the lander is generated using a set of stereo images acquired from the mast cameras. The 3D reconstruction is used to evaluate the dig goodness of each candidate dig sector based on a set of eight metrics: 1. the maximum change in elevation in each sector, 2. the elevation standard deviation in each sector, 3. the forward tilt of each sector with respect to the payload frame, 4. the side tilt of each sector with respect to the payload frame, 5. the maximum size of missing data regions in each sector, 6. the percentage of a sector that has missing data, 7. the roughness of each sector, and 8. the monochrome intensity standard deviation of each sector. Each of the eight metrics forms a goodness image layer in which the goodness value of each sector ranges from 0 to 1. Goodness values of 0 and 1 correspond to high and low risk, respectively. For each dig sector, the eight goodness values are merged by selecting the lowest one. 
Including the merged goodness image layer, there are nine goodness image layers for each stereo pair of mast images.
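The merge rule described above (worst metric wins) can be sketched as follows; the metric values are hypothetical placeholders, and NumPy is assumed:

```python
import numpy as np

def merge_goodness(layers):
    """Merge per-sector goodness layers by taking the per-sector minimum,
    mirroring the abstract's rule that the lowest (highest-risk) metric
    dominates. `layers` is a list of arrays of goodness values in [0, 1]."""
    stacked = np.stack(layers)   # shape: (n_metrics, n_sectors)
    return stacked.min(axis=0)   # worst-case goodness per sector

# Hypothetical goodness values for three sectors under three of the metrics.
elevation_change = np.array([0.9, 0.4, 0.7])
roughness        = np.array([0.8, 0.6, 0.2])
missing_data     = np.array([1.0, 0.5, 0.9])

merged = merge_goodness([elevation_change, roughness, missing_data])
print(merged)  # [0.8 0.4 0.2]
```

The merged layer is then the ninth goodness image layer mentioned in the abstract.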

  2. Neural architectures for stereo vision.

    PubMed

    Parker, Andrew J; Smith, Jackson E T; Krug, Kristine

    2016-06-19

Stereoscopic vision delivers a sense of depth based on binocular information but additionally acts as a mechanism for achieving correspondence between patterns arriving at the left and right eyes. We analyse quantitatively the cortical architecture for stereoscopic vision in two areas of macaque visual cortex. For primary visual cortex V1, the result is consistent with a module that is isotropic in cortical space with a diameter of at least 3 mm in surface extent. This implies that the module for stereo is larger than the repeat distance between ocular dominance columns in V1. By contrast, in the extrastriate cortical area V5/MT, which has a specialized architecture for stereo depth, the module for representation of stereo is about 1 mm in surface extent, so the representation of stereo in V5/MT is more compressed than in V1 in terms of neural wiring of the neocortex. The surface extent estimated for stereo in V5/MT is consistent with measurements of its specialized domains for binocular disparity. Within V1, we suggest that long-range horizontal anatomical connections form functional modules that serve both binocular and monocular pattern recognition: this common function may explain the distortion and disruption of monocular pattern vision observed in amblyopia. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Authors.

  3. Visual homing with a pan-tilt based stereo camera

    NASA Astrophysics Data System (ADS)

    Nirmal, Paramesh; Lyons, Damian M.

    2013-01-01

Visual homing is a navigation method based on comparing a stored image of the goal location and the current image (current view) to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both methods aim at determining distance and direction to the goal location. Navigational algorithms using the Scale Invariant Feature Transform (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy developed a visual homing method using scale change information (Homing in Scale Space, HiSS) from SIFT. HiSS uses SIFT feature scale change information to estimate the distance between the robot and the goal location. Since the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images combined with stereo data obtained from the stereo camera to extend the keypoint vector to include a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera. We evaluate the performance of both methods using a set of performance measures described in this paper.
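To illustrate how depth-extended keypoints can yield a homing vector, here is a minimal planar sketch. The (bearing, depth) parameterization loosely mirrors the depth-extended keypoint vector described above, but the zero-rotation assumption and the simple averaging are illustrative choices, not the paper's algorithm:

```python
import math

def homing_vector(matches):
    """Average per-landmark offset between the current and goal poses.
    Each match is (bearing_cur, depth_cur, bearing_goal, depth_goal) for one
    landmark matched between the two views. Assumes (for illustration) that
    the robot has the same heading at both poses."""
    dx = dy = 0.0
    for theta_c, z_c, theta_g, z_g in matches:
        # landmark position relative to each pose (planar polar -> Cartesian)
        dx += z_c * math.cos(theta_c) - z_g * math.cos(theta_g)
        dy += z_c * math.sin(theta_c) - z_g * math.sin(theta_g)
    n = len(matches)
    dx, dy = dx / n, dy / n
    return math.hypot(dx, dy), math.atan2(dy, dx)  # distance, direction

# One landmark seen 5 m dead ahead now, but 3 m ahead from the goal pose:
dist, heading = homing_vector([(0.0, 5.0, 0.0, 3.0)])
print(dist, heading)  # 2.0 0.0  -> drive 2 m straight ahead
```

With depth available per keypoint, the distance estimate is continuous rather than quantized to discrete SIFT scale levels, which is the advantage the abstract claims over HiSS.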

  4. Specialized Computer Systems for Environment Visualization

    NASA Astrophysics Data System (ADS)

    Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.

    2018-06-01

The need for real-time image generation of landscapes arises in various fields as part of tasks solved by virtual and augmented reality systems, as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing and graphically visualizing geographic data. Algorithmic and hardware/software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path tracing algorithm with a two-level hierarchy of bounding volumes that finds intersections with axis-aligned bounding boxes. The proposed algorithm eliminates branching and is hence more suitable for implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used to solve the problem of qualitative visualization of reliefs and landscapes. The algorithm is implemented on parallel systems: clusters and Compute Unified Device Architecture (CUDA) networks. Results show that the implementation on MPI clusters is more efficient than on Graphics Processing Units/Graphics Processing Clusters and allows real-time synthesis. The organization and algorithms of a parallel GPU system for 3D pseudo-stereo image/video synthesis are proposed, based on an analysis of the feasibility of realizing each stage of 3D pseudo-stereo synthesis on a parallel GPU architecture. An experimental prototype of a specialized hardware-software system for 3D pseudo-stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo-stereo imaging to the architecture of GPU systems is efficient. It also accelerates the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame without performing optimization procedures; the acceleration is on average 11 and 54 times for the test GPUs.
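The ray/axis-aligned-bounding-box intersection mentioned above is commonly implemented as a "slab test", which can be written without per-axis sign branching, in the spirit of the branch elimination the abstract describes. This is an illustrative sketch, not the paper's implementation:

```python
def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab test: intersect the ray with the min/max planes of each axis and
    keep the running latest entry (tmin) and earliest exit (tmax). The ray
    hits the box iff tmin <= tmax. Assumes direction != 0 on every axis."""
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        inv = 1.0 / d
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        tmin = max(tmin, min(t1, t2))   # latest entry across all slabs
        tmax = min(tmax, max(t1, t2))   # earliest exit across all slabs
    return tmin <= tmax

unit_box = ((-1.0, -1.0, -1.0), (1.0, 1.0, 1.0))
print(ray_aabb_hit((-3.0, -3.0, -3.0), (1.0, 1.0, 1.0), *unit_box))  # True
print(ray_aabb_hit((5.0, -3.0, -3.0), (1.0, 1.0, 1.0), *unit_box))   # False
```

Because the body is pure min/max arithmetic with no data-dependent branches, the same structure maps well onto SIMD lanes and GPU threads.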

  5. A long baseline global stereo matching based upon short baseline estimation

    NASA Astrophysics Data System (ADS)

    Li, Jing; Zhao, Hong; Li, Zigang; Gu, Feifei; Zhao, Zixin; Ma, Yueyang; Fang, Meiqi

    2018-05-01

In global stereo vision, balancing matching efficiency and computing accuracy seems impossible because the two contradict each other. In the case of a long baseline, this contradiction becomes more prominent. To address this problem, this paper proposes a novel idea for improving both the efficiency and the accuracy of global stereo matching for a long baseline. Reference images located between the long-baseline image pair are first chosen to form new image pairs with short baselines. The relationship between the disparities of pixels in image pairs with different baselines is revealed by considering the quantization error, so that the disparity search range under the long baseline can be reduced by guidance of the short baseline to gain matching efficiency. The novel idea is then integrated into graph cuts (GCs) to form a multi-step GC algorithm based on short-baseline estimation, by which the disparity map under the long baseline can be calculated iteratively on the basis of the previous matching. Furthermore, the image information from pixels that are non-occluded under the short baseline but occluded under the long baseline can be employed to improve the matching accuracy. Although the time complexity of the proposed method depends on the locations of the chosen reference images, it is usually much lower for long-baseline stereo matching than that of the traditional GC algorithm. Finally, the validity of the proposed method is examined by experiments on benchmark datasets. The results show that the proposed method is superior to the traditional GC method in terms of efficiency and accuracy, and thus it is suitable for long-baseline stereo matching.
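The core relationship exploited above follows from rectified stereo geometry: disparity d = fB/Z is proportional to baseline B, so a short-baseline estimate scales linearly to the long baseline, and only the (scaled) quantization uncertainty needs to be searched. A minimal sketch, where the ±0.5-pixel quantization margin is an assumption rather than a value from the paper:

```python
def long_baseline_search_range(d_short, b_short, b_long, quant_err=0.5):
    """Narrow the long-baseline disparity search window using a
    short-baseline estimate. For rectified cameras, d = f*B/Z implies
    d_long = (b_long / b_short) * d_short; the quantization error of the
    short-baseline disparity scales by the same ratio."""
    ratio = b_long / b_short
    center = ratio * d_short
    margin = ratio * quant_err
    return center - margin, center + margin

lo, hi = long_baseline_search_range(d_short=12.0, b_short=1.0, b_long=4.0)
print(lo, hi)  # search only [46.0, 50.0] instead of the full disparity range
```

Restricting each pixel's label set to this window is what lets the multi-step graph-cut pass over the long baseline run far faster than a full-range global optimization.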

  6. The Relationship Between the Expansion Speed and Radial Speed of CMEs Confirmed Using Quadrature Observations of the 2011 February 15 CME

    NASA Astrophysics Data System (ADS)

    Gopalswamy, N.; Makela, P.; Yashiro, S.; Davila, J. M.

    2012-08-01

It is difficult to measure the true speed of Earth-directed CMEs from a coronagraph along the Sun-Earth line because of the occulting disk. However, the expansion speed (the speed with which the CME appears to spread in the sky plane) can be measured by such a coronagraph. In order to convert the expansion speed to radial speed (which is important for space weather applications) one can use an empirical relationship between the two that assumes an average width for all CMEs. If we have the width information from quadrature observations, we can confirm the relationship between expansion and radial speeds derived by Gopalswamy et al. (2009a). The STEREO spacecraft were in quadrature with SOHO (STEREO-A ahead of Earth by 87° and STEREO-B 94° behind Earth) on 2011 February 15, when a fast Earth-directed CME occurred. The CME was observed as a halo by the Large-Angle and Spectrometric Coronagraph (LASCO) on board SOHO. The sky-plane speed was measured by SOHO/LASCO as the expansion speed, while the radial speed was measured by STEREO-A and STEREO-B. In addition, STEREO-A and STEREO-B images measured the width of the CME, which is unknown from the Earth view. From the SOHO and STEREO measurements, we confirm the relationship between the expansion speed (Vexp) and radial speed (Vrad) derived previously from geometrical considerations (Gopalswamy et al. 2009a): Vrad = (1/2)(1 + cot w)Vexp, where w is the half width of the CME. From STEREO-B images of the CME, we found that the CME had a full width of 76°, so w = 38°. This gives the relation as Vrad = 1.14 Vexp. From LASCO observations, we measured Vexp = 897 km/s, so we get the radial speed as 1023 km/s. Direct measurement of the radial speed yields 945 km/s (STEREO-A) and 1058 km/s (STEREO-B). These numbers differ by only 7.6% and 3.4% (for STEREO-A and STEREO-B, respectively) from the computed value.
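The conversion can be checked numerically with the values quoted in the abstract:

```python
import math

def radial_speed(v_exp_kms, half_width_deg):
    """Radial speed from expansion speed via the geometrical relation of
    Gopalswamy et al. (2009a): Vrad = (1/2) * (1 + cot(w)) * Vexp,
    where w is the CME half width."""
    w = math.radians(half_width_deg)
    return 0.5 * (1.0 + 1.0 / math.tan(w)) * v_exp_kms

# Values from the 2011 February 15 event: Vexp = 897 km/s, full width 76 deg.
v_rad = radial_speed(897.0, 38.0)
print(round(v_rad))  # ~1023 km/s, vs. 945 (STEREO-A) and 1058 (STEREO-B)
```

For w = 38°, cot w ≈ 1.28, so the prefactor is ≈ 1.14, reproducing the Vrad = 1.14 Vexp relation quoted above.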

  7. Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline

    NASA Technical Reports Server (NTRS)

    Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor

    2010-01-01

A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit, improving both the accuracy and the robustness of the estimate. A stepwise regression method is applied to estimate the relaxed weight of each observation.
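The idea of relaxing the weight of poorly fitting observations can be illustrated with a per-pixel iteratively reweighted mean over a stack of overlapping DEMs. The Huber-style weighting below is a stand-in for the stepwise regression weighting named in the abstract, not the authors' method; NumPy is assumed:

```python
import numpy as np

def robust_combine(obs, iters=3, scale=1.0):
    """Illustrative robust fusion of multiple DEM observations per pixel:
    start from the median, then repeatedly downweight observations whose
    residual exceeds `scale` (meters). `obs` has shape (n_dems, rows, cols)."""
    est = np.median(obs, axis=0)
    for _ in range(iters):
        resid = np.abs(obs - est)
        w = scale / np.maximum(resid, scale)  # weight 1 near fit, <1 for outliers
        est = (w * obs).sum(axis=0) / w.sum(axis=0)
    return est

# Three 1x1 DEM tiles for the same footprint; the third is an outlier.
stack = np.array([[[10.0]], [[10.2]], [[25.0]]])
print(robust_combine(stack))  # stays near the two consistent observations
```

Unlike a plain mean (which would report ~15 m here), the reweighted estimate is pulled toward the consistent observations, which is the behavior the abstract targets for noisy, shadow-corrupted ASP DEMs.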

  8. X-ray and optical stereo-based 3D sensor fusion system for image-guided neurosurgery.

    PubMed

    Kim, Duk Nyeon; Chae, You Seong; Kim, Min Young

    2016-04-01

In neurosurgery, an image-guided operation is performed to confirm that the surgical instruments reach the exact lesion position. Among the multiple imaging modalities, an X-ray fluoroscope mounted on a C- or O-arm is widely used for monitoring the position of surgical instruments and the target position in the patient. However, frequent fluoroscopy can result in relatively high radiation doses, particularly for complex interventional procedures. The proposed system can reduce radiation exposure while providing accurate three-dimensional (3D) position information for both the surgical instruments and the target. X-ray and optical stereo vision systems are proposed for the C- or O-arm. The two subsystems share the same optical axis and are calibrated simultaneously. This allows easy augmentation of the camera image and the X-ray image; further, the 3D measurements of both systems can be defined in a common coordinate space. The proposed dual stereoscopic imaging system is designed and implemented for mounting on an O-arm. The calibration error of the 3D coordinates of the optical stereo and the X-ray stereo is within 0.1 mm in terms of both the mean and the standard deviation. Further, image augmentation of the camera image with the X-ray image using an artificial skull phantom is achieved. As the developed dual stereoscopic imaging system provides the 3D coordinates of a point of interest in both optical and fluoroscopic images, surgeons can use it to confirm the position of surgical instruments in 3D space with minimum radiation exposure and to verify whether the instruments have reached the surgical target observed in the fluoroscopic images.

  9. Automatic Rooftop Extraction in Stereo Imagery Using Distance and Building Shape Regularized Level Set Evolution

    NASA Astrophysics Data System (ADS)

    Tian, J.; Krauß, T.; d'Angelo, P.

    2017-05-01

Automatic rooftop extraction is one of the most challenging problems in remote sensing image analysis. Classical 2D image processing techniques are expensive due to the large number of features required to locate buildings. This problem can be avoided when 3D information is available. In this paper, we show how to fuse the spectral and height information of stereo imagery to achieve efficient and robust rooftop extraction. In the first step, the digital terrain model (DTM), and in turn the normalized digital surface model (nDSM), is generated using a new step-edge approach. In the second step, the initial building locations and rooftop boundaries are derived by removing the low-elevation pixels and those high-elevation pixels with a higher probability of being trees or shadows. This boundary then serves as the initial level set function, which is further refined to fit the best possible boundaries through distance-regularized level-set curve evolution. During the fitting procedure, an edge-based active contour model is adopted and implemented using edge indicators extracted from the panchromatic image. The performance of the proposed approach is tested using WorldView-2 satellite data captured over Munich.
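The first step above rests on a simple relation: the normalized DSM is the surface model minus the terrain model, and elevated pixels become initial building candidates. A minimal sketch (the 2.5 m threshold is an assumed value, not one from the paper; tree/shadow removal would follow as a separate step):

```python
import numpy as np

def initial_building_mask(dsm, dtm, min_height=2.5):
    """Initial rooftop candidates from stereo-derived heights: subtract the
    digital terrain model from the digital surface model to get the
    normalized DSM (nDSM), then keep pixels above a height threshold."""
    ndsm = dsm - dtm
    return ndsm >= min_height

dsm = np.array([[3.0, 9.5], [4.1, 3.2]])  # surface heights (m), hypothetical
dtm = np.array([[3.0, 3.0], [3.0, 3.0]])  # flat terrain at 3 m
print(initial_building_mask(dsm, dtm))
# [[False  True]
#  [False False]]
```

The resulting boolean mask is what would seed the distance-regularized level-set evolution described above.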

  10. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

Computer vision systems employ a sequence of vision algorithms in which the output of one algorithm is the input of the next algorithm in the sequence. The algorithms that constitute such systems exhibit vastly different computational characteristics and therefore require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge-based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondences between the involved images, which form a sequence of stereo image pairs. The researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which saves considerable computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  11. Persistent and automatic intraoperative 3D digitization of surfaces under dynamic magnifications of an operating microscope

    PubMed Central

    Kumar, Ankur N.; Miga, Michael I.; Pheiffer, Thomas S.; Chambless, Lola B.; Thompson, Reid C.; Dawant, Benoit M.

    2014-01-01

One of the major challenges impeding advancement in image-guided surgical (IGS) systems is the soft-tissue deformation during surgical procedures. These deformations reduce the utility of the patient’s preoperative images and may produce inaccuracies in the application of preoperative surgical plans. Solutions to compensate for the tissue deformations include the acquisition of intraoperative tomographic images of the whole organ for direct displacement measurement and techniques that combine intraoperative organ surface measurements with computational biomechanical models to predict subsurface displacements. The latter solution has the advantage of being less expensive and amenable to surgical workflow. Several modalities such as textured laser scanners, conoscopic holography, and stereo-pair cameras have been proposed for the intraoperative 3D estimation of organ surfaces to drive patient-specific biomechanical models for the intraoperative update of preoperative images. Though each modality has its respective advantages and disadvantages, stereo-pair camera approaches used within a standard operating microscope are the focus of this article. A new method that permits the automatic and near real-time estimation of 3D surfaces (at 1 Hz) under varying magnifications of the operating microscope is proposed. This method has been evaluated on a CAD phantom object and on full-length neurosurgery video sequences (~1 hour) acquired intraoperatively by the proposed stereovision system. To the best of our knowledge, this type of validation study on full-length brain tumor surgery videos has not been done before. The method for estimating the unknown magnification factor of the operating microscope achieves accuracy within 0.02 of the theoretical value on a CAD phantom and within 0.06 on 4 clinical videos of the entire brain tumor surgery. 
When compared to a laser range scanner, the proposed method for reconstructing 3D surfaces intraoperatively achieves root mean square errors (surface-to-surface distance) in the 0.28-0.81 mm range on the phantom object and in the 0.54-1.35 mm range on 4 clinical cases. The digitization accuracy of the presented stereovision methods indicates that the operating microscope can be used to deliver the persistent intraoperative input required by computational biomechanical models to update the patient’s preoperative images and facilitate active surgical guidance. PMID:25189364

  12. Persistent and automatic intraoperative 3D digitization of surfaces under dynamic magnifications of an operating microscope.

    PubMed

    Kumar, Ankur N; Miga, Michael I; Pheiffer, Thomas S; Chambless, Lola B; Thompson, Reid C; Dawant, Benoit M

    2015-01-01

One of the major challenges impeding advancement in image-guided surgical (IGS) systems is the soft-tissue deformation during surgical procedures. These deformations reduce the utility of the patient's preoperative images and may produce inaccuracies in the application of preoperative surgical plans. Solutions to compensate for the tissue deformations include the acquisition of intraoperative tomographic images of the whole organ for direct displacement measurement and techniques that combine intraoperative organ surface measurements with computational biomechanical models to predict subsurface displacements. The latter solution has the advantage of being less expensive and amenable to surgical workflow. Several modalities such as textured laser scanners, conoscopic holography, and stereo-pair cameras have been proposed for the intraoperative 3D estimation of organ surfaces to drive patient-specific biomechanical models for the intraoperative update of preoperative images. Though each modality has its respective advantages and disadvantages, stereo-pair camera approaches used within a standard operating microscope are the focus of this article. A new method that permits the automatic and near real-time estimation of 3D surfaces (at 1 Hz) under varying magnifications of the operating microscope is proposed. This method has been evaluated on a CAD phantom object and on full-length neurosurgery video sequences (∼1 h) acquired intraoperatively by the proposed stereovision system. To the best of our knowledge, this type of validation study on full-length brain tumor surgery videos has not been done before. The method for estimating the unknown magnification factor of the operating microscope achieves accuracy within 0.02 of the theoretical value on a CAD phantom and within 0.06 on 4 clinical videos of the entire brain tumor surgery. 
When compared to a laser range scanner, the proposed method for reconstructing 3D surfaces intraoperatively achieves root mean square errors (surface-to-surface distance) in the 0.28-0.81 mm range on the phantom object and in the 0.54-1.35 mm range on 4 clinical cases. The digitization accuracy of the presented stereovision methods indicates that the operating microscope can be used to deliver the persistent intraoperative input required by computational biomechanical models to update the patient's preoperative images and facilitate active surgical guidance. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Stereoscopic observations from meteorological satellites

    NASA Astrophysics Data System (ADS)

    Hasler, A. F.; Mack, R.; Negri, A.

    The capability of making stereoscopic observations of clouds from meteorological satellites is a new basic analysis tool with a broad spectrum of applications. Stereoscopic observations from satellites were first made using the early vidicon tube weather satellites (e.g., Ondrejka and Conover [1]). However, the only high quality meteorological stereoscopy from low orbit has been done from Apollo and Skylab (e.g., Shenk et al. [2] and Black [3], [4]). Stereoscopy from geosynchronous satellites was proposed by Shenk [5] and Bristor and Pichel [6] in 1974, which allowed Minzner et al. [7] to demonstrate the first quantitative cloud height analysis. In 1978 Bryson [8] and desJardins [9] independently developed digital processing techniques to remap stereo images, which made possible precision height measurement and spectacular display of stereograms (Hasler et al. [10], and Hasler [11]). In 1980 the Japanese Geosynchronous Satellite (GMS) and the U.S. GOES-West satellite were synchronized to obtain stereo over the central Pacific, as described by Fujita and Dodge [12] and in this paper. Recently the authors have remapped images from a Low Earth Orbiter (LEO) to the coordinate system of a Geosynchronous Earth Orbiter (GEO) and obtained stereoscopic cloud height measurements which promise to have quality comparable to previous all-GEO stereo. It has also been determined that the north-south imaging scan rate of some GEOs can be slowed or reversed. Therefore, the feasibility of obtaining stereoscopic observations worldwide from combinations of operational GEO and LEO satellites has been demonstrated. Stereoscopy from satellites has many advantages over infrared techniques for the observation of cloud structure because it depends only on basic geometric relationships. 
Digital remapping of GEO and LEO satellite images is imperative for precision stereo height measurement and high quality displays because of the curvature of the earth and the large angular separation of the two satellites. A general solution for accurate height computation depends on precise navigation of the two satellites. Validation of the geosynchronous satellite stereo using high altitude mountain lakes and vertically pointing aircraft lidar leads to a height accuracy estimate of +/- 500 m for typical clouds which have been studied. Applications of the satellite stereo include: 1) cloud top and base height measurements, 2) cloud-wind height assignment, 3) vertical motion estimates for convective clouds (Mack et al. [13], [14]), 4) temperature vs. height measurements when stereo is used together with infrared observations and 5) cloud emissivity measurements when stereo, infrared and temperature sounding are used together (see Szejwach et al. [15]). When true satellite stereo image pairs are not available, synthetic stereo may be generated. The combination of multispectral satellite data using computer produced stereo image pairs is a dramatic example of synthetic stereoscopic display. The classic case uses the combination of infrared and visible data as first demonstrated by Pichel et al. [16]. Hasler et al. [17], Mosher and Young [18] and Lorenz [19] have expanded this concept to display many channels of data from various radiometers as well as real and simulated data fields. A future system of stereoscopic satellites would be comprised of both low orbiters (as suggested by Lorenz and Schmidt [20], [19]) and a global system of geosynchronous satellites. The low earth orbiters would provide stereo coverage day and night and include the poles. 
An optimum global system of stereoscopic geosynchronous satellites would require international standardization of scan rate, scan direction, scan times (synchronization), and resolution of at least 1 km in all imaging channels. A stereoscopic satellite system as suggested here would make an extremely important contribution to the understanding and prediction of the atmosphere.

  14. Panoramic 3d Vision on the ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Paar, G.; Griffiths, A. D.; Barnes, D. P.; Coates, A. J.; Jaumann, R.; Oberst, J.; Gao, Y.; Ellery, A.; Li, R.

    The Pasteur payload on the ESA ExoMars Rover 2011/2013 is designed to search for evidence of extant or extinct life either on or up to ˜2 m below the surface of Mars. The rover will be equipped with a panoramic imaging system, to be developed by a UK, German, Austrian, Swiss, Italian and French team, for visual characterization of the rover's surroundings and (in conjunction with an infrared imaging spectrometer) remote detection of potential sample sites. The Panoramic Camera system consists of a wide angle multispectral stereo pair with 65° field-of-view (WAC; 1.1 mrad/pixel) and a high resolution monoscopic camera (HRC; current design having 59.7 µrad/pixel with 3.5° field-of-view). Its scientific goals and operational requirements can be summarized as follows: • Determination of objects to be investigated in situ by other instruments for operations planning • Backup and support for the rover visual navigation system (path planning, determination of subsequent rover positions and orientation/tilt within the 3d environment), and localization of the landing site (by stellar navigation or by combination of orbiter and ground panoramic images) • Geological characterization (using narrow band geology filters) and cartography of the local environments (local Digital Terrain Model or DTM) • Study of atmospheric properties and variable phenomena near the Martian surface (e.g. aerosol opacity, water vapour column density, clouds, dust devils, meteors, surface frosts) • Geodetic studies (observations of Sun, bright stars, Phobos/Deimos). The performance of 3d data processing is a key element of mission planning and scientific data analysis. The 3d Vision Team within the Panoramic Camera development Consortium reports on the current status of development, consisting of the following items: • Hardware Layout & Engineering: The geometric setup of the system (location on the mast & viewing angles, mutual mounting between WAC and HRC) needs to be optimized w.r.t. 
fields of view, ranging capability (distance measurement capability), data rate, necessity of calibration targets, hardware & data interfaces to other subsystems (e.g. navigation), as well as accuracy impacts of sensor design and compression ratio. • Geometric Calibration: The geometric properties of the individual cameras including various spectral filters, their mutual relations, and the dynamic geometrical relation between rover frame and cameras - with the mast in between - are precisely described by a calibration process. During surface operations these relations will be continuously checked and updated by photogrammetric means; environmental influences such as temperature, pressure and the Mars gravity will be taken into account. • Surface Mapping: Stereo imaging using the WAC stereo pair is used for the 3d reconstruction of the rover vicinity to identify, locate and characterize potentially interesting spots (3-10 for an experimental cycle to be performed within approx. 10-30 sols). The HRC is used for high-resolution imagery of these regions of interest, to be overlaid on the 3d reconstruction and potentially refined by shape-from-shading techniques. A quick processing result is crucial for time-critical operations planning; therefore emphasis is placed on automatic operation and intrinsic error-detection mechanisms. The mapping results will be continuously fused, updated and synchronized with the map used by the navigation system. The surface representation needs to take into account the different resolutions of HRC and WAC, as well as uncommon or even unexpected image acquisition modes such as long-range, wide-baseline stereo from different rover positions or escape strategies in the case of loss of one of the stereo camera heads. • Panorama Mosaicking: The production of a high-resolution stereoscopic panorama is nowadays state of the art in computer vision. 
However, certain challenges such as the need for access to accurate spherical coordinates, maintenance of radiometric & spectral response in various spectral bands, fusion between HRC and WAC, super resolution, and again the requirement of quick yet robust processing will add some complexity to the ground processing system. • Visualization for Operations Planning: Efficient operations planning is directly related to an ergonomic and well-performing visualization. It is intended to adapt existing tools to an integrated visualization solution for the purpose of scientific site characterization, view planning and reachability mapping/instrument placement of pointing sensors (including the panoramic imaging system itself), and selection of regions of interest. The main interfaces between the individual components, as well as the first version of a user requirement document, are currently under definition. Besides the support for sensor layout and calibration, the 3d vision system will consist of 2-3 main modules to be used during ground processing & utilization of the ExoMars Rover panoramic imaging system.
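The ranging (distance measurement) capability mentioned for the WAC stereo pair depends directly on the stereo geometry. The sketch below is illustrative only, not the consortium's error budget: for an idealized parallel-axis stereo pair, depth uncertainty grows with the square of distance. The 1.1 mrad/pixel IFOV comes from the abstract; the 0.5 m baseline and half-pixel matching uncertainty are assumed round numbers.

```python
def range_error(z_m, baseline_m, ifov_rad, disparity_err_px=0.5):
    """Approximate depth uncertainty of a parallel-axis stereo pair.

    dz ~= z^2 * (ifov * disparity_error) / baseline, i.e. ranging error
    grows quadratically with distance and shrinks with a wider baseline.
    """
    return z_m ** 2 * ifov_rad * disparity_err_px / baseline_m

# WAC IFOV of 1.1 mrad/pixel (from the abstract); 0.5 m baseline assumed.
err_near = range_error(5.0, 0.5, 1.1e-3)   # ~3 cm at 5 m
err_far = range_error(20.0, 0.5, 1.1e-3)   # ~44 cm at 20 m
```

Under these assumptions, centimetre-level ranging is available only in the rover's immediate vicinity, which is one reason the abstract emphasizes wide-baseline stereo from different rover positions for long-range mapping.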

  15. Lunar Cartography: Progress in the 2000s and Prospects for the 2010s

    NASA Astrophysics Data System (ADS)

    Kirk, R. L.; Archinal, B. A.; Gaddis, L. R.; Rosiek, M. R.

    2012-08-01

    The first decade of the 21st century has seen a new golden age of lunar exploration, with more missions than in any decade since the 1960s and many more nations participating than at any time in the past. We have previously summarized the history of lunar mapping and described the lunar missions planned for the 2000's (Kirk et al., 2006, 2007, 2008). Here we report on the outcome of lunar missions of this decade, the data gathered, the cartographic work accomplished and what remains to be done, and what is known about mission plans for the coming decade. Four missions of lunar orbital reconnaissance were launched and completed in the decade 2001-2010: SMART-1 (European Space Agency), SELENE/Kaguya (Japan), Chang'e-1 (China), and Chandrayaan-1 (India). In addition, the Lunar Reconnaissance Orbiter or LRO (USA) is in an extended mission, and Chang'e-2 (China) operated in lunar orbit in 2010-2011. All these spacecraft have incorporated cameras capable of providing basic data for lunar mapping, and all but SMART-1 carried laser altimeters. Chang'e-1, Chang'e-2, Kaguya, and Chandrayaan-1 carried pushbroom stereo cameras intended for stereo mapping at scales of 120, 10, 10, and 5 m/pixel respectively, and LRO is obtaining global stereo imaging at 100 m/pixel with its Wide Angle Camera (WAC) and hundreds of targeted stereo observations at 0.5 m/pixel with its Narrow Angle Camera (NAC). Chandrayaan-1 and LRO carried polarimetric synthetic aperture radars capable of 75 m/pixel and (LRO only) 7.5 m/pixel imaging even in shadowed areas, and most missions carried spectrometers and imaging spectrometers whose lower resolution data are urgently in need of coregistration with other datasets and correction for topographic and illumination effects. The volume of data obtained is staggering. 
As one example, the LRO laser altimeter, LOLA, has so far made more than 5.5 billion elevation measurements, and the LRO Camera (LROC) system has returned more than 1.3 million archived image products comprising over 220 Terabytes of image data. The processing of controlled map products from these data is as yet relatively limited. A substantial portion of the LOLA altimetry data have been subjected to a global crossover analysis, and local crossover analyses of Chang'e-1 LAM altimetry have also been performed. LRO NAC stereo digital topographic models (DTMs) and orthomosaics of numerous sites of interest have been prepared based on control to LOLA data, and production of controlled mosaics and DTMs from Mini-RF radar images has begun. Many useful datasets (e.g., DTMs from LRO WAC images and Kaguya Terrain Camera images) are currently uncontrolled. Making controlled, orthorectified map products is obviously a high priority for lunar cartography, and scientific use of the vast multinational set of lunar data now available will be most productive if all observations can be integrated into a single reference frame. To achieve this goal, the key steps required are (a) joint registration and reconciliation of the laser altimeter data from multiple missions, in order to provide the best current reference frame for other products; (b) registration of image datasets (including spectral images and radar, as well as monoscopic and stereo optical images) to one another and the topographic surface from altimetry by bundle adjustment; (c) derivation of higher density topographic models than the altimetry provides, based on the stereo images registered to the altimetric data; and (d) orthorectification and mosaicking of the various datasets based on the dense and consistent topographic model resulting from the previous steps. 
In the final step, the dense and consistent topographic data will be especially useful for correcting spectrophotometric observations to facilitate mapping of geologic and mineralogic features. We emphasize that, as desirable as short term progress may seem, making mosaics before controlling observations, and controlling observations before a single coordinate reference frame is agreed upon by all participants, are counterproductive and will result in a collection of map products that do not align with one another and thus will not be fully usable for correlative scientific studies. Only a few lunar orbital missions performing remote sensing are projected for the decade 2011-2020. These include the possible further extension of the LRO mission; NASA's GRAIL mission, which is making precise measurements of the lunar gravity field that will likely improve the cartographic accuracy of data from other missions, and the Chandrayaan-2/Luna Resurs mission planned by India and Russia, which includes an orbital remote sensing component. A larger number of surface missions are being discussed for the current decade, including the lander/rover component of Chandrayaan-2/Luna Resurs, Chang'e-3 (China), SELENE-2 (Japan), and privately funded missions inspired by the Google Lunar X-Prize. The US Lunar Precursor Robotic Program was discontinued in 2010, leaving NASA with no immediate plans for robotic or human exploration of the lunar surface, though the MoonRise sample return mission might be reproposed in the future. If the cadence of missions cannot be continued, the desired sequel to the decade of lunar mapping missions 2001-2010 should be a decade of detailed and increasingly multinational analysis of lunar data from 2011 onward.

  16. Development Of A Flash X-Ray Scanner For Stereoradiography And CT

    NASA Astrophysics Data System (ADS)

    Endorf, Robert J.; DiBianca, Frank A.; Fritsch, Daniel S.; Liu, Wen-Ching; Burns, Charles B.

    1989-05-01

    We are developing a flash x-ray scanner for stereoradiography and CT which will be able to produce a stereoradiograph in 30 to 70 ns and a complete CT scan in one microsecond. This type of imaging device will be valuable in studying high-speed processes, high acceleration, and traumatic events. We have built a two-channel flash x-ray system capable of producing stereoradiographs with stereo angles from 15 to 165 degrees. The dynamic and static MTFs for the flash x-ray system were measured and compared with similar MTFs measured for a conventional medical x-ray system. We have written and tested a stereo reconstruction algorithm to determine three-dimensional space points from corresponding points in the two stereo images. To demonstrate the ability of the system to image traumatic events, a radiograph was obtained of a bone undergoing a fracture. The effects of accelerations of up to 600 g were examined on radiographs taken of human kidney tissue samples in a rapidly rotating centrifuge. Feasibility studies of CT reconstruction have been performed by making simulated CT images of various phantoms for larger flash x-ray systems of 8 to 29 flash x-ray tubes.

  17. Quantitative evaluation of three advanced laparoscopic viewing technologies: a stereo endoscope, an image projection display, and a TFT display.

    PubMed

    Wentink, M; Jakimowicz, J J; Vos, L M; Meijer, D W; Wieringa, P A

    2002-08-01

    Compared to open surgery, minimally invasive surgery (MIS) relies heavily on advanced technology, such as endoscopic viewing systems and innovative instruments. The aim of the study was to objectively compare three technologically advanced laparoscopic viewing systems with the standard viewing system currently used in most Dutch hospitals. We evaluated the following advanced laparoscopic viewing systems: a Thin Film Transistor (TFT) display, a stereo endoscope, and an image projection display. The standard viewing system comprised a monocular endoscope and a high-resolution monitor. Task completion time served as the measure of performance. Eight surgeons with laparoscopic experience participated in the experiment. The average task time was significantly greater (p < 0.05) with the stereo viewing system than with the standard viewing system. The average task times with the TFT display and the image projection display did not differ significantly from the standard viewing system. Although the stereo viewing system promises improved depth perception and the TFT and image projection displays are supposed to improve hand-eye coordination, none of these systems provided better task performance than the standard viewing system in this pelvi-trainer experiment.

  18. Intelligent person identification system using stereo camera-based height and stride estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo

    2005-05-01

    In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by using a threshold value in the YCbCr color model. By correlating the face area segmented with this threshold with the right input image, the location coordinates of the target face are acquired; these values are then used to control the pan/tilt system through a modified PID-based recursive controller. Also, by using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through triangulation. Using this calculated vertical distance and the pan and tilt angles, the target's real position in world space can be acquired, and from it the target's height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person can be identified with these extracted height and stride parameters.
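The triangulation and height-extraction steps described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `stereo_depth` assumes an idealized parallel-axis stereo rig, and `target_height` assumes hypothetical tilt angles measured from the horizontal at the camera to the top and bottom of the target.

```python
import math

def stereo_depth(baseline_m, focal_px, disparity_px):
    """Classic parallel-axis stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def target_height(camera_height_m, distance_m, tilt_top_rad, tilt_bottom_rad):
    """Height of a target from its horizontal distance and two tilt angles.

    Angles are measured from the horizontal at the camera (positive up,
    negative down); the difference of the two sight-line intersections
    with the target plane gives the height.
    """
    top = camera_height_m + distance_m * math.tan(tilt_top_rad)
    bottom = camera_height_m + distance_m * math.tan(tilt_bottom_rad)
    return top - bottom
```

For example, a rig with a 0.1 m baseline and 800-pixel focal length observing a 16-pixel disparity places the target 5 m away; stride would follow analogously from two successive foot positions in world coordinates.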

  19. CMEs in the Heliosphere: I. A Statistical Analysis of the Observational Properties of CMEs Detected in the Heliosphere from 2007 to 2017 by STEREO/HI-1

    NASA Astrophysics Data System (ADS)

    Harrison, R. A.; Davies, J. A.; Barnes, D.; Byrne, J. P.; Perry, C. H.; Bothmer, V.; Eastwood, J. P.; Gallagher, P. T.; Kilpua, E. K. J.; Möstl, C.; Rodriguez, L.; Rouillard, A. P.; Odstrčil, D.

    2018-05-01

    We present a statistical analysis of coronal mass ejections (CMEs) imaged by the Heliospheric Imager (HI) instruments on board NASA's twin-spacecraft STEREO mission between April 2007 and August 2017 for STEREO-A and between April 2007 and September 2014 for STEREO-B. The analysis exploits a catalogue that was generated within the FP7 HELCATS project. Here, we focus on the observational characteristics of CMEs imaged in the heliosphere by the inner (HI-1) cameras, while following papers will present analyses of CME propagation through the entire HI fields of view. More specifically, in this paper we present distributions of the basic observational parameters - namely occurrence frequency, central position angle (PA) and PA span - derived from nearly 2000 detections of CMEs in the heliosphere by HI-1 on STEREO-A or STEREO-B from the minimum between Solar Cycles 23 and 24 to the maximum of Cycle 24; STEREO-A analysis includes a further 158 CME detections from the descending phase of Cycle 24, by which time communication with STEREO-B had been lost. We compare heliospheric CME characteristics with properties of CMEs observed at coronal altitudes, and with sunspot number. As expected, heliospheric CME rates correlate with sunspot number, and are not inconsistent with coronal rates once instrumental factors/differences in cataloguing philosophy are considered. As well as being more abundant, heliospheric CMEs, like their coronal counterparts, tend to be wider during solar maximum. Our results confirm previous coronagraph analyses suggesting that CME launch sites do not simply migrate to higher latitudes with increasing solar activity. At solar minimum, CMEs tend to be launched from equatorial latitudes, while at maximum, CMEs appear to be launched over a much wider latitude range; this has implications for understanding the CME/solar source association. 
Our analysis provides some supporting evidence for the systematic dragging of CMEs to lower latitude as they propagate outwards.

  20. Investigation of Terrain Analysis and Classification Methods for Ground Vehicles

    DTIC Science & Technology

    2012-08-27

    exteroceptive terrain classifier takes exteroceptive sensor data (here, color stereo images of the terrain) as its input and returns terrain class...Mishkin & Laubach, 2006), the rover cannot safely travel beyond the distance it can image with its cameras, which has been as little as 15 meters or...field of view roughly 44°×30°, capturing pairs of color images at 640×480 pixels each (Videre Design, 2001). Range data were extracted from the stereo

  1. Anthropometric body measurements based on multi-view stereo image reconstruction.

    PubMed

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home based imaging system capable of conducting anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.
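The coarse body-shape model built from segmented silhouettes can be illustrated with a minimal visual-hull (silhouette carving) sketch. This is an assumption-laden simplification, not the paper's method: it uses just two orthogonal, axis-aligned views and omits the energy-based inter-image consistency refinement the abstract describes.

```python
import numpy as np

def visual_hull(front, side):
    """Carve a boolean occupancy grid from two orthogonal silhouettes.

    front[x, z] is the silhouette seen along the y axis and side[y, z]
    the silhouette seen along the x axis (boolean masks). A voxel
    (x, y, z) survives only if it projects inside both silhouettes.
    """
    return front[:, None, :] & side[None, :, :]
```

Circumference-type measurements would then be estimated slice by slice from the horizontal cross-sections of the resulting occupancy grid, after the refinement step the paper applies.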

  2. Anthropometric Body Measurements Based on Multi-View Stereo Image Reconstruction*

    PubMed Central

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home-based imaging system capable of conducting automatic anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system. PMID:24109700

  3. Imaging Asteroid 4 Vesta Using the Framing Camera

    NASA Technical Reports Server (NTRS)

    Keller, H. Uwe; Nathues, Andreas; Coradini, Angioletta; Jaumann, Ralf; Jorda, Laurent; Li, Jian-Yang; Mittlefehldt, David W.; Mottola, Stefano; Raymond, C. A.; Schroeder, Stefan E.

    2011-01-01

    The Framing Camera (FC) onboard the Dawn spacecraft serves a dual purpose. Next to its central role as a prime science instrument it is also used for the complex navigation of the ion-drive spacecraft. The CCD detector with 1024 by 1024 pixels provides the stability required for a multiyear mission and the high photometric accuracy needed over the wavelength band from 400 to 1000 nm, which is covered by 7 band-pass filters. Vesta will be observed from 3 orbit stages with image scales of 227, 63, and 17 m/px, respectively. The mapping of Vesta's surface with medium resolution will only be completed during the exit phase, when the north pole will be illuminated. A detailed pointing strategy will cover the surface at least twice at similar phase angles to provide stereo views for reconstruction of the topography. During approach the phase function of Vesta was determined over a range of angles not accessible from Earth. This is the first step in deriving the photometric function of the surface. Combining the topography based on stereo tie points with the photometry in an iterative procedure will disclose details of the surface morphology at considerably smaller scales than the pixel scale. The 7 color filters are well positioned to provide information on the spectral slope in the visible, the depth of the strong pyroxene absorption band, and their variability over the surface. Cross-calibration with the VIR spectrometer that extends into the near IR will provide detailed maps of Vesta's surface mineralogy and physical properties. Georeferencing all these observations will result in a coherent and unique data set. During Dawn's approach and capture FC has already demonstrated its performance. The strong variation observed by the Hubble Space Telescope can now be correlated with surface units and features. We will report on results obtained from images taken during survey mode covering the whole illuminated surface. 
Vesta is a planet-like differentiated body, but its surface gravity and escape velocity are comparable to those of other asteroids and hence much smaller than those of the inner planets or

  4. Stereo imaging velocimetry for microgravity applications

    NASA Technical Reports Server (NTRS)

    Miller, Brian B.; Meyer, Maryjo B.; Bethea, Mark D.

    1994-01-01

    Stereo imaging velocimetry is the quantitative measurement of three-dimensional flow fields using two sensors recording data from different vantage points. The system described in this paper, under development at NASA Lewis Research Center in Cleveland, Ohio, uses two CCD cameras placed perpendicular to one another, laser disk recorders, an image processing substation, and a 586-based computer to record data at standard NTSC video rates (30 Hertz) and reduce it offline. The flow itself is marked with seed particles, hence the fluid must be transparent. The velocimeter tracks the motion of the particles, and from these we deduce a multipoint (500 or more), quantitative map of the flow. Conceptually, the software portion of the velocimeter can be divided into distinct modules. These modules are: camera calibration, particle finding (image segmentation) and centroid location, particle overlap decomposition, particle tracking, and stereo matching. We discuss our approach to each module, and give our currently achieved speed and accuracy for each where available.
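The particle finding (image segmentation) and centroid location modules listed above might look roughly like the following sketch, assuming a simple global threshold and 4-connected blobs; the function name and intensity weighting are illustrative choices, not taken from the paper.

```python
import numpy as np

def particle_centroids(frame, threshold):
    """Find bright blobs in a 2-D frame and return intensity-weighted centroids.

    Pixels above `threshold` are grouped into 4-connected blobs by an
    iterative flood fill; each blob yields one (row, col) centroid.
    """
    mask = frame > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = frame.shape
    centroids = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                stack, pix = [(y, x)], []
                seen[y, x] = True
                while stack:                      # flood-fill one blob
                    cy, cx = stack.pop()
                    pix.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny_, nx_ = cy + dy, cx + dx
                        if 0 <= ny_ < h and 0 <= nx_ < w \
                                and mask[ny_, nx_] and not seen[ny_, nx_]:
                            seen[ny_, nx_] = True
                            stack.append((ny_, nx_))
                wts = np.array([frame[p] for p in pix], dtype=float)
                ys = np.array([p[0] for p in pix], dtype=float)
                xs = np.array([p[1] for p in pix], dtype=float)
                centroids.append((float((ys * wts).sum() / wts.sum()),
                                  float((xs * wts).sum() / wts.sum())))
    return centroids
```

Overlap decomposition, tracking, and stereo matching would then operate on these sub-pixel centroid lists from the two camera views.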

  5. Topomapping of Mars with HRSC images, ISIS, and a commercial stereo workstation

    NASA Astrophysics Data System (ADS)

    Kirk, R. L.; Howington-Kraus, E.; Galuszka, D.; Redding, B.; Hare, T. M.

    HRSC on Mars Express [1] is the first camera designed specifically for stereo imaging to be used in mapping a planet other than the Earth. Nine detectors view the planet through a single lens to obtain four-band color coverage and stereo images at 3 to 5 distinct angles in a single pass over the target. The short interval between acquisition of the images ensures that changes that could interfere with stereo matching are minimized. The resolution of the nadir channel is 12.5 m at periapsis, poorer at higher points in the elliptical orbit. The stereo channels are typically operated at 2x coarser resolution and the color channels at 4x or 8x. Since the commencement of operations in January 2004, approximately 58% of Mars has been imaged at nadir resolutions better than 50 m/pixel. This coverage is expected to increase significantly during the recently approved extended mission of Mars Express, giving the HRSC dataset enormous potential for regional and even global mapping. Systematic processing of the HRSC images is carried out at the German Aerospace Center (DLR) in Berlin. Preliminary digital topographic models (DTMs) at 200 m/post resolution and orthorectified image products are produced in near-realtime for all orbits, by using the VICAR software system [2]. The tradeoff of universal coverage but limited DTM resolution makes these products optimal for many but not all research studies. Experiments on adaptive processing with the same software, for a limited number of orbits, have allowed DTMs of higher resolution (down to 50 m/post) to be produced [3]. In addition, numerous Co-Investigators on the HRSC team (including ourselves) are actively researching techniques to improve on the standard products, by such methods as bundle adjustment, alternate approaches to stereo DTM generation, and refinement of DTMs by photoclinometry (shape-from-shading) [4]. 
The HRSC team is conducting a systematic comparison of these alternative processing approaches by arranging for team members to produce DTMs in a consistent coordinate system from a carefully chosen suite of test images [5]. Here, we describe our own approach to HRSC processing and the results we obtained with the test images. We have developed an independent capability for processing of HRSC images at the USGS, based on the approach previously taken with Mars Global Surveyor Mars Orbiter Camera (MGS MOC) images [6]. The chosen approach uses both the USGS digital cartographic system ISIS and the commercial photogrammetric software SOCET SET (BAE Systems) and exploits the strengths of each. This capability provides an independent point of comparison for the standard processing, as described here. It also prepares us for systematic mapping with HRSC data, if desired, and makes some useful processing tools (including relatively powerful photometric normalization and photoclinometry software) available to a wide community of ISIS users. ISIS [7] provides an end-to-end system for the analysis of digital images and production of maps from them that is readily extended to new missions. Its stereo capabilities are, however, limited. SOCET SET [8] is tailored to aerial and Earth-orbital imagery but provides a complete workflow with modules for bundle adjustment (MST), automatic stereomatching (ATE), and interactive quality control/editing of DTMs with stereo viewing (ITE). Our processing approach for MOC and other stereo datasets has been to use ISIS to ingest images in an archival format, decompress them as necessary, and perform instrument-specific radiometric calibration. Software written in ISIS is used to translate the image and, more importantly, orientation parameters and other metadata, to the formats understood by SOCET SET. 
The commercial system is then used for "three-dimensional" processing: bundle adjustment (including measurement of needed control points), DTM generation, and DTM editing. Final steps such as orthorectification and mosaicking of images can be performed either in SOCET SET or in ISIS after exporting the DTM data back to it. This workflow was modified slightly for HRSC to take advantage of the standard processing performed at the DLR. As the first step in DTM production, we import VICAR Level 2 files (radiometrically calibrated but still in the raw camera geometry) into ISIS where they can immediately be used or exported to SOCET SET. HRSC Level 3 and 4 products (DTMs and orthorectified images) can also be imported and used as map-projected data (e.g., Level 4 DTMs from DLR can be compared with those produced in SOCET SET). Our results for images from orbit h1235 (covering western Candor Chasma) and the adjacent orbits h0894, h0905, h0927 (Nanedi Valles) are encouraging even though we were unable to take full advantage of the multiple-line design of HRSC in the analysis. The version of SOCET SET used (5.2) does not allow for the introduction of constraints in the bundle adjustment to ensure that the images from a single HRSC orbit share the same trajectory and pointing history. We therefore computed offsets to the trajectory and pointing angles for each image of the set as if they were fully independent. Furthermore, a limitation of the existing SOCET (and ISIS) pushbroom scanner sensor models is that the exposure time per line is taken as constant for each image. HRSC is generally operated so that the line time changes multiple times per orbit, requiring us to split each VICAR image into multiple files for processing. Because the segments of each image could not be constrained to have consistent adjustments, the DTM of Nanedi Valles produced from these image segments contained small discontinuities at the segment boundaries. 
This problem did not arise for Candor Chasma because the entire study area was covered without changes in the time per image line. The latest release of SOCET SET (5.3) incorporates the ability to do constrained bundle adjustment and should solve these problems. In addition, we are modifying the ISIS and SOCET sensor models to allow changes of line time within an image. This will greatly reduce the effort needed to work with HRSC image sets with frequent line time changes (i.e., the vast majority), because we will no longer have to split them into short segments that must be controlled and processed individually. In addition, a bug in recent and current versions of SOCET SET prevents the capability for multi-way image matching from being used with sets of scanner images. We therefore collected separate DTMs by pairwise matching of each combination of images (nadir-stereo1, nadir-stereo2, stereo1-stereo2) within an orbit and merged the results. The bug will be corrected in a future release of SOCET SET, making multi-way matching possible. This is expected to improve the robustness of DTM generation and reduce the need for interactive editing. The Candor Chasma bundle adjustment yielded RMS two-dimensional residuals of 0.5 to 0.7 pixels in most bands, 1.4 pixels in the blue. RMS residuals to the ground control provided by Mars Orbiter Laser Altimeter (MOLA) data were ˜180 m horizontally but only 15 m vertically. Adjustments to the spacecraft orientation were surprisingly large, and may be correlated: 0.1 to 2.4 km in position, ≤0.3° in omega, ≤0.8° in the other two angles. Placement of the (manually selected) control points was found to be critical; matching MOLA to the images to constrain horizontal coordinates is easiest at slope breaks such as the canyon edges, but vertical constraints are best obtained in areas of low slope. As a result, it is preferable to choose separate points for horizontal and vertical control. 
It is also useful to import the MOLA ground tracks into SOCET SET in order to be sure of picking control points on or near altimetry profiles rather than in gaps where the MOLA DTM has been filled by interpolation. We collected DTMs at 75 m/post in the interior of Candor Chasma and 300 m on the walls and surrounding plateau, and merged the results from both spacings and all 3 image combinations at 75 m/post. For Nanedi Valles, which lacks the extremely steep or flat areas encountered in Candor, DTMs at both spacings were collected over the full study area. A small amount of interactive editing was performed to remove areas of obvious matcher errors from the individual DTMs before they were merged. In most cases, this resulted in the combined DTM being based on the other, more successful matching results. Parts of the plateau around Candor Chasma, which has very little image texture, could not be matched successfully and were filled with MOLA data. As would be expected, the resulting DTM appears sharper than either MOLA at 463 m/post or the preliminary HRSC DTM at 200 m/post. The added detail is subjectively well correlated with the image but is not as sharp at the 75 m (˜3 pixel) grid spacing. With the DTM and orthorectified images translated back into ISIS format, a variety of useful additional processing steps could be demonstrated, such as generation of pan-sharpened true and false color images, color-albedo maps, and band-ratio images with correction for surface and atmospheric photometric effects. Similar processing of the nadir and stereo panchromatic images, which have phase angles ranging from 17° to 48°, reveals a surprising diversity of surface photometric behavior. 
Maps of phase-dependence of scattering will not only be useful for empirical classification of surface units and quantitative modeling of microtexture and other photometric parameters; they are also likely to be essential for the rigorous comparison of the color images, which span a comparable range of phase angles. Finally, by dividing the nadir image by a smoothed version of the albedo map, we were able to obtain an image in which all but the most localized albedo variations had been removed. The albedo-corrected image was then analyzed by two-dimensional photoclinometry [9] to generate a DTM that contains real geomorphic detail at the limit of image resolution while retaining consistency with the stereo and MOLA data over longer distances. Because photoclinometry serves merely as a form of "smart interpolation" to fill in local details in the stereo DTM, the complications that can arise in the general case [10] do not occur, and this processing can be carried out unsupervised. We note in conclusion that orthorectification of the images, photometric normalization and modeling, and photoclinometry are all performed with the free software system ISIS. At the moment, the commercial software SOCET SET is required for both bundle adjustment and stereo DTM production. The USGS is currently developing its own bundle adjustment software for HRSC and other line scanners, which, when available, will make it possible for ISIS users to control HRSC images to MOLA and therefore to use the altimetric topography in subsequent processing and analysis steps similar to those described here. Acknowledgement: For this study, the HRSC Experiment Team of the German Aerospace Center (DLR) in Berlin has provided HRSC Preliminary 200m DTM(s). References: [1] Neukum, G., et al. (2004) Nature, 432, 971. [2] Scholten, F., et al. (2005) PE&RS, 71, 1143. [3] Gwinner, K., et al. (2005) PFG, 5, 387. [4] Albertz, J., et al. (2005) PE&RS, 71, 1153. [5] Heipke, C., et al. (2006) IAPRS, submitted. 
[6] Kirk, R.L., et al. (2003) JGR, 108, 8088. [7] Eliason, E. (1997) LPS XXVIII, 331; Gaddis et al. (1997) LPS XXVIII, 387; Torson, J., and K. Becker (1997) LPS XXVIII, 1443. [8] Miller, S.B., and A.S. Walker (1993) ACSM/ASPRS Annual Conv., 3, 256; Miller, S.B., and A.S. Walker (1995) Z. Phot. Fern. 63, 4. [9] Kirk, R.L. (1987) Ph.D. Thesis, Caltech, Part III. [10] Kirk, R.L., et al. (2003) ISPRS-ET Workshop, http://astrogeology.usgs.gov/Projects/ISPRS/Meetings/Houston2003/abstracts/Kirk_isprs_mar03.pdf.
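    The albedo-correction step described above, dividing the nadir image by a smoothed version of the albedo map so that photoclinometry sees only local shading, can be sketched as follows. This is a generic illustration with an assumed box-filter width and synthetic data, not the ISIS processing itself:

```python
import numpy as np

def box_blur(img, k):
    # separable box filter of odd width k, edge-padded so output matches input shape
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def albedo_correct(nadir_img, smoothing_width=15):
    # dividing by a smoothed copy removes long-wavelength albedo variations
    # while retaining localized shading detail for photoclinometry
    return nadir_img / box_blur(nadir_img, smoothing_width)
```

    In this sketch the smoothed image itself stands in for the albedo map; in the pipeline described above the albedo map comes from the photometric modeling step.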

  6. Large Prominence Eruption [video]

    NASA Image and Video Library

    2014-10-07

    The STEREO (Behind) spacecraft captured this large prominence and coronal mass ejection as they erupted into space (Sept. 26, 2014). By combining images from three instruments, scientists can see the eruption itself (in extreme UV light) as well as follow its progression over a period of about 13 hours with its two coronagraphs. Credit: NASA/Goddard/STEREO

  7. Large Prominence Eruption (October 3, 2014)

    NASA Image and Video Library

    2017-12-08

    The STEREO (Behind) spacecraft captured this large prominence and coronal mass ejection as they erupted into space (Sept. 26, 2014). By combining images from three instruments, scientists can see the eruption itself (in extreme UV light) as well as follow its progression over a period of about 13 hours with its two coronagraphs. Credit: NASA/Goddard/STEREO

  8. Automatic analysis of stereoscopic satellite image pairs for determination of cloud-top height and structure

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Strong, J.; Woodward, R. H.; Pierce, H.

    1991-01-01

    Results are presented from an automatic stereo analysis of cloud-top heights using nearly simultaneous satellite image pairs from the GOES and NOAA satellites, processed on a massively parallel processor computer. Comparisons of computer-derived height fields with manually analyzed fields indicate that the automatic technique shows promise for performing routine stereo analysis in a real-time environment, providing a useful forecasting tool by augmenting observational data sets of severe thunderstorms and hurricanes. Simulations using synthetic stereo data show that it is possible to automatically resolve small-scale features such as 4000-m-diameter clouds to about 1500 m in the vertical.
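    The geometry behind such stereo height retrieval can be illustrated with a textbook parallax relation: for two views with zenith view angles θ1 and θ2 on opposite sides of nadir, a cloud top at height h is displaced by a parallax p ≈ h·(tan θ1 + tan θ2). The function below is a hedged sketch of that relation, not the GOES/NOAA processing code:

```python
import math

def cloud_top_height(parallax_m, view1_deg, view2_deg):
    # invert p = h * (tan(theta1) + tan(theta2)) for opposite-side views;
    # parallax_m is the measured ground displacement between the two views
    denom = math.tan(math.radians(view1_deg)) + math.tan(math.radians(view2_deg))
    return parallax_m / denom
```

    With one view at 45° and the other at nadir, a 1000 m parallax corresponds to a 1000 m cloud-top height.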

  9. Time-to-impact sensors in robot vision applications based on the near-sensor image processing concept

    NASA Astrophysics Data System (ADS)

    Åström, Anders; Forchheimer, Robert

    2012-03-01

    Based on the Near-Sensor Image Processing (NSIP) concept and recent results concerning optical flow and Time-to-Impact (TTI) computation with this architecture, we show how these results can be used and extended for robot vision applications. The first case involves estimation of the tilt of an approaching planar surface. The second case concerns the use of two NSIP cameras to estimate absolute distance and speed, similar to a stereo-matching system but without the need to perform image correlations. Returning to a one-camera system, the third case deals with the problem of estimating the shape of the approaching surface. It is shown that the previously developed TTI method not only yields a very compact solution with respect to hardware complexity, but also offers surprisingly high performance.
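    The time-to-impact idea exploited by NSIP can be reduced to a looming computation: the image size s of an approaching object scales as 1/Z, so at constant closing speed the TTI follows from two size measurements. A minimal sketch (not the NSIP hardware algorithm, which operates on on-chip image features):

```python
def time_to_impact(s1, s2, dt):
    # image size s = f*W/Z; with distance Z shrinking at constant speed v,
    # TTI at the second observation is Z2/v = dt * s1 / (s2 - s1)
    if s2 <= s1:
        raise ValueError("object must be looming (s2 > s1)")
    return dt * s1 / (s2 - s1)
```

    For example, an object whose image size doubles in one second is one second from impact.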

  10. Animation of 'Dodo' and 'Goldilocks' Trenches

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site]

    A pan and zoom animation of the informally named 'Dodo' (on left) and 'Goldilocks' (on right) trenches as seen by the Surface Stereo Imager (SSI) aboard NASA's Phoenix Mars Lander. This animation was based on conditions on the Martian surface on Sol 17 (June 11, 2008), the 17th Martian day of the mission. 'Baby Bear' is the name of the sample taken from 'Goldilocks' and delivered to the Thermal and Evolved-Gas Analyzer (TEGA) instrument.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  11. The Particle Habit Imaging and Polar Scattering probe PHIPS: First Stereo-Imaging and Polar Scattering Function Measurements of Ice Particles

    NASA Astrophysics Data System (ADS)

    Abdelmonem, A.; Schnaiter, M.; Schön, R.; Leisner, T.

    2009-04-01

    Cirrus clouds impact climate through their influence on the water vapour distribution in the upper troposphere. Moreover, they directly affect the radiative balance of the Earth's atmosphere by scattering incoming solar radiation and absorbing outgoing thermal emission. The link between the microphysical properties of ice cloud particles and the radiative forcing of the clouds is not yet well understood, and the influence of the shapes of ice crystals on the radiative budget of cirrus clouds is currently under debate. PHIPS is a new experimental device for the stereo-imaging of individual cloud particles and the simultaneous measurement of the polar scattering function of the same particle. PHIPS uses an automated particle event triggering system that ensures that only those particles are captured which are located in the field-of-view/depth-of-field volume of the microscope unit. Efforts were made to improve the resolving power of the microscope unit down to about 3 µm and to facilitate a 3D morphology impression of the ice crystals. This is realised by a stereo-imaging setup composed of two identical microscopes which image the same particle under an angular separation of 30°. The scattering part of PHIPS enables the measurement of the polar light scattering function of cloud particles with an angular resolution of 1° for forward scattering directions (from 1° to 10°) and 8° for side- and backscattering directions (from 18° to 170°). For each particle the light scattering pulse per channel is stored either as an integrated intensity or as a time-resolved intensity function, which opens a new category of data analysis concerning details of the particle movement. PHIPS is the first step towards PHIPS-HALO, one of the in situ ice particle and water vapour instruments currently under development for the new German research aircraft HALO.
The instrument was tested in the ice cloud characterisation campaign HALO-02, which was conducted in December 2008 at the AIDA cloud chamber in the temperature range from -5°C to -70°C. In a series of experiments, small externally generated seed ice crystals were grown in AIDA at distinct temperature and saturation-ratio conditions. For these experiments the long-known ice morphology diagram, with its temperature-dependent morphology changes and supersaturation-dependent structural complexity, could clearly be reproduced by PHIPS. Structural details like hollow crystals, crystals with inclusions, and crystals with stepped surfaces (hopper crystals) could be resolved by PHIPS. Moreover, the advantage of stereo-imaging in terms of habit classification and particle orientation deduction could be demonstrated. The scattering function measurement reveals ice particle orientation-dependent specular reflection peaks which might contain information about the surface roughness. The presentation will describe the instrument setup in detail and highlight some preliminary results.

  12. Considerations for the Use of STEREO-HI Data for Astronomical Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tappin, S. J., E-mail: james.tappin@stfc.ac.uk

    Recent refinements to the photometric calibrations of the Heliospheric Imagers (HI) on board the Solar TErrestrial RElations Observatory (STEREO) have revealed a number of subtle effects in the measurement of stellar signals with those instruments. These effects need to be considered in the interpretation of STEREO-HI data for astronomy. In this paper we present an analysis of these effects and how to compensate for them when using STEREO-HI data for astronomical studies. We determine how saturation of the HI CCD detectors affects the apparent count rates of stars after the on-board summing of pixels and exposures. Single-exposure calibration images are analyzed and compared with binned and summed science images to determine the influence of saturation on the science images. We also analyze how the on-board cosmic-ray scrubbing algorithm affects stellar images. We determine how this interacts with the variations of instrument pointing to affect measurements of stars. We find that saturation is a significant effect only for the brightest stars, and that its onset is gradual. We also find that degraded pointing stability, whether of the entire spacecraft or of the imagers, leads to reduced stellar count rates and also increased variation thereof through interaction with the on-board cosmic-ray scrubbing algorithm. We suggest ways in which these effects can be mitigated for astronomical studies and also suggest how the situation can be improved for future imagers.

  13. Multi-Scale Measures of Rugosity, Slope and Aspect from Benthic Stereo Image Reconstructions

    PubMed Central

    Friedman, Ariell; Pizarro, Oscar; Williams, Stefan B.; Johnson-Roberson, Matthew

    2012-01-01

    This paper demonstrates how multi-scale measures of rugosity, slope and aspect can be derived from fine-scale bathymetric reconstructions created from geo-referenced stereo imagery. We generate three-dimensional reconstructions over large spatial scales using data collected by Autonomous Underwater Vehicles (AUVs), Remotely Operated Vehicles (ROVs), manned submersibles and diver-held imaging systems. We propose a new method for calculating rugosity in a Delaunay triangulated surface mesh by projecting areas onto the plane of best fit using Principal Component Analysis (PCA). Slope and aspect can be calculated with very little extra effort, and fitting a plane serves to decouple rugosity from slope. We compare the results of the virtual terrain complexity calculations with experimental results obtained using conventional in-situ measurement methods. We show that performing calculations over a digital terrain reconstruction is more flexible, robust and easily repeatable. In addition, the method is non-contact and has much less environmental impact than traditional survey techniques. For diver-based surveys, the time underwater needed to collect rugosity data is significantly reduced and, being an image-based technique, it is possible to use robotic platforms that can operate beyond diver depths. Measurements can be calculated exhaustively at multiple scales for surveys with tens of thousands of images covering thousands of square metres. The technique is demonstrated on data gathered by a diver-rig and an AUV, on small single-transect surveys and on a larger, dense survey. Stereo images provide 3D structure as well as visual appearance, which could potentially feed into automated classification techniques. Our multi-scale rugosity, slope and aspect measures have already been adopted in a number of marine science studies.
This paper presents a detailed description of the method and thoroughly validates it against traditional in-situ measurements. PMID:23251370
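    The PCA plane-fit rugosity described above can be sketched as the ratio of true 3D mesh area to the area projected onto the plane of best fit. The snippet below is a simplified illustration with an explicit triangle list rather than the authors' full Delaunay pipeline:

```python
import numpy as np

def tri_area(pts):
    # area of a triangle in 3D from its three vertices
    a, b, c = pts
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def rugosity(verts, tris):
    # plane of best fit via PCA: normal = eigenvector of smallest eigenvalue
    centered = verts - verts.mean(axis=0)
    w, v = np.linalg.eigh(centered.T @ centered)
    normal = v[:, 0]
    # project vertices onto the plane by removing the normal component
    proj = verts - np.outer(verts @ normal, normal)
    area_3d = sum(tri_area(verts[t]) for t in tris)
    area_2d = sum(tri_area(proj[t]) for t in tris)
    return area_3d / area_2d  # 1.0 for a flat surface, > 1 for rough terrain
```

    Fitting the plane (rather than projecting onto the horizontal) is what decouples rugosity from slope: a tilted but flat patch still scores 1.0.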

  14. Magma rheology from 3D geometry of martian lava flows

    NASA Astrophysics Data System (ADS)

    Allemand, P.; Deschamps, A.; Lesaout, M.; Delacourt, C.; Quantin, C.; Clenet, H.

    2012-04-01

    Volcanism is an important geologic agent that has been recently active at the surface of Mars. The composition of individual lava flows is difficult to infer from spectroscopic data because of the absence of crystallized minerals and the possible cover of the flows by dust. The 3D geometry of lava flows provides an interesting alternative for inferring the chemical composition of lavas and effusion rates. Indeed, chemical composition exerts a strong control on the viscosity and yield strength of the magma, and the global geometry of a lava flow reflects its emplacement rate. Until recently, these studies were carried out from 2D data. The third dimension, which is a key parameter, was deduced or assumed from local shadow measurements on THEMIS IR images with an uncertainty of more than 500%. Recent CTX data (MRO mission) make it possible to compute Digital Elevation Models at a resolution of 1 or 2 pixels (5 to 10 m) with the help of ISIS and the Ames Stereo Pipeline. The CTX images are first converted into a format readable by ISIS. The external geometric parameters of the CTX camera are computed and added to the image header with ISIS. During a correlation phase, the homologous pixels are searched for on the pair of stereo images. Finally, the DEM is computed from the positions of the homologous pixels and the geometrical parameters of the CTX camera. Twenty DEMs have been computed from stereo images showing lava flows of various ages in the regions of Cerberus, Elysium, Daedalia and Amazonis Planitia. The 3D parameters of the lava flows have been measured on the DEMs and tested against shadow measurements. These 3D parameters have been inverted to estimate the viscosity and the yield strength of the flows. The effusion rate has also been estimated. These parameters have been compared to those of similar lava flows of the East Pacific Rise.

  15. Stereoscopic Height and Wind Retrievals for Aerosol Plumes with the MISR INteractive eXplorer (MINX)

    NASA Technical Reports Server (NTRS)

    Nelson, D.L.; Garay, M.J.; Kahn, Ralph A.; Dunst, Ben A.

    2013-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard the Terra satellite acquires imagery at 275-m resolution at nine angles ranging from 0° (nadir) to 70° off-nadir. This multi-angle capability facilitates the stereoscopic retrieval of heights and motion vectors for clouds and aerosol plumes. MISR's operational stereo product uses this capability to retrieve cloud heights and winds for every satellite orbit, yielding global coverage every nine days. The MISR INteractive eXplorer (MINX) visualization and analysis tool complements the operational stereo product by providing users the ability to retrieve heights and winds locally for detailed studies of smoke, dust and volcanic ash plumes, as well as clouds, at higher spatial resolution and with greater precision than is possible with the operational product or with other space-based, passive, remote sensing instruments. This ability to investigate plume geometry and dynamics is becoming increasingly important as climate and air quality studies require greater knowledge about the injection of aerosols and the location of clouds within the atmosphere. MINX incorporates features that allow users to customize their stereo retrievals for optimum results under varying aerosol and underlying surface conditions. This paper discusses the stereo retrieval algorithms and retrieval options in MINX, and provides appropriate examples to explain how the program can be used to achieve the best results.

  16. Determination of Cloud Base Height, Wind Velocity, and Short-Range Cloud Structure Using Multiple Sky Imagers Field Campaign Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Schwartz, Stephen E.; Yu, Dantong

    Clouds are a central focus of the U.S. Department of Energy (DOE)’s Atmospheric System Research (ASR) program and Atmospheric Radiation Measurement (ARM) Climate Research Facility, and more broadly are the subject of much investigation because of their important effects on atmospheric radiation and, through feedbacks, on climate sensitivity. Significant progress has been made by moving from a vertically pointing (“soda-straw”) to a three-dimensional (3D) view of clouds by investing in scanning cloud radars through the American Recovery and Reinvestment Act of 2009. Yet, because of the physical nature of radars, there are key gaps in ARM's cloud observational capabilities. For example, cloud radars often fail to detect small shallow cumulus and thin cirrus clouds that are nonetheless radiatively important. Furthermore, it takes five to twenty minutes for a cloud radar to complete a 3D volume scan, and clouds can evolve substantially during this period. Ground-based stereo-imaging is a promising technique to complement existing ARM cloud observation capabilities. It enables the estimation of cloud coverage, height, horizontal motion, morphology, and spatial arrangement over an extended area of up to 30 by 30 km at refresh rates greater than 1 Hz (Peng et al. 2015). With the fine spatial and temporal resolution of modern sky cameras, the stereo-imaging technique allows for the tracking of a small cumulus cloud or a thin cirrus cloud that cannot be detected by a cloud radar. With support from the DOE SunShot Initiative, the Principal Investigator (PI)’s team at Brookhaven National Laboratory (BNL) has developed some initial capability for cloud tracking using multiple distinctly located hemispheric cameras (Peng et al. 2015). To validate the ground-based cloud stereo-imaging technique, the cloud stereo-imaging field campaign was conducted at the ARM Facility’s Southern Great Plains (SGP) site in Oklahoma from July 15 to December 24.
As shown in Figure 1, the cloud stereo-imaging system consisted of two inexpensive high-definition (HD) hemispheric cameras (each costing less than $1,500) and ARM’s Total Sky Imager (TSI). Together with other co-located ARM instrumentation, the campaign provides a promising opportunity to validate stereo-imaging-based cloud base height and, more importantly, to examine the feasibility of cloud thickness retrieval for low-view-angle clouds.

  17. Effects of thermal deformation on optical instruments for space application

    NASA Astrophysics Data System (ADS)

    Segato, E.; Da Deppo, V.; Debei, S.; Cremonese, G.

    2017-11-01

    Optical instruments for space missions operate in a hostile environment; it is thus necessary to accurately study the effects of ambient parameter variations on the equipment. Optical instruments are particularly sensitive to ambient conditions, especially temperature. This variable can cause dilatations and misalignments of the optical elements, and can also give rise to dangerous stresses in the optics. The resulting displacements and deformations degrade the quality of the sampled images. In this work a method for studying the effects of temperature variations on the performance of an imaging instrument is presented. The optics and their mountings are modeled and processed by a thermo-mechanical Finite Element Model (FEM) analysis; the output data, which describe the deformations of the optical element surfaces, are then elaborated using an ad hoc MATLAB routine: a non-linear least-squares optimization algorithm is adopted to determine the surface equations (plane, spherical, nth-order polynomial) which best fit the data. The obtained mathematical surface representations are then directly imported into ZEMAX for sequential ray-tracing analysis. The results are the variations of the spot diagrams, the MTF curves and the diffraction ensquared energy due to simulated thermal loads. This method has been successfully applied to the Stereo Camera for the BepiColombo mission, reproducing expected operative conditions. The results help to design and compare different optical housing systems for a feasible solution and show that it is preferable to use kinematic constraints on prisms and lenses to minimize the variation of the optical performance of the Stereo Camera.
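    The surface-fitting step (plane, spherical, nth-order polynomial) can be illustrated for the spherical case, where a change of variables turns the fit into a linear least-squares problem: |p|² = 2c·p + (R² − |c|²) is linear in the center c and the combined constant. This is a generic sketch, not the authors' MATLAB routine:

```python
import numpy as np

def fit_sphere(pts):
    # linear least-squares sphere fit to an (N, 3) array of surface points:
    # |p|^2 = 2 c.p + (R^2 - |c|^2), solved for c and the constant term
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

    In a thermo-elastic workflow like the one described, the input points would be the FEM-deformed node positions of one optical surface, and the fitted radius feeds the ray-tracing model.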

  18. Quantifying ice cliff contribution to debris-covered glacier mass balance from multiple sensors

    NASA Astrophysics Data System (ADS)

    Brun, Fanny; Wagnon, Patrick; Berthier, Etienne; Kraaijenbrink, Philip; Immerzeel, Walter; Shea, Joseph; Vincent, Christian

    2017-04-01

    Ice cliffs on debris-covered glaciers have been recognized as hot spots for glacier melt. Ice cliffs are steep (sometimes even overhanging) and fast-evolving surface features, which makes them challenging to monitor. We surveyed the topography of Changri Nup Glacier (Nepalese Himalayas, Everest region) in November 2015 and 2016 using multiple sensors: terrestrial photogrammetry, Unmanned Aerial Vehicle (UAV) photogrammetry, Pléiades stereo images and ASTER stereo images. We derived 3D point clouds and digital elevation models (DEMs) following a Structure-from-Motion (SfM) workflow for the first two data sets to monitor surface elevation changes and calculate the associated volume loss. We derived only DEMs for the last two data sets. The derived DEMs had resolutions ranging from < 5 cm to 30 m. The derived point clouds and DEMs are used to quantify the ice melt of the cliffs at different scales. The very high resolution SfM point clouds, together with the surface velocity field, will be used to calculate the volume losses of 14 individual cliffs, depending on their size, aspect or the presence of a supraglacial lake. We will then extend this analysis to the whole glacier to quantify the contribution of ice cliff melt to the overall glacier mass balance, calculated with the UAV and Pléiades DEMs. This research will provide important tools to evaluate the role of ice cliffs in regional mass loss.
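    The core of such multi-temporal comparisons, differencing co-registered DEMs to obtain volume change, reduces to a simple grid operation. The cell size and arrays below are illustrative assumptions, not the survey's actual products:

```python
import numpy as np

def volume_change(dem_t0, dem_t1, cell_size_m):
    # elevation difference (later minus earlier) summed over valid cells,
    # multiplied by the cell area; NaN cells (data gaps) are ignored
    dh = dem_t1 - dem_t0
    return np.nansum(dh) * cell_size_m ** 2
```

    A negative result indicates net volume loss; converting it to mass balance would additionally require an assumed ice density.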

  19. Developing stereo image based robot control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suprijadi,; Pambudi, I. R.; Woran, M.

    Application of image processing has been developed in various fields and for various purposes. In the last decade, image-based systems have increased rapidly with the improvement of hardware and microprocessor performance. Many fields of science and technology have used these methods, especially medicine and instrumentation. New techniques in stereovision that give a 3-dimensional image or movie are very interesting, but there are not many applications in control systems. A stereo image has pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The result shows the robot automatically moves based on stereovision captures.
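    The pixel disparity information mentioned above can be computed with a brute-force sum-of-squared-differences (SSD) block matcher along the epipolar (row) direction; real stereovision systems use far more elaborate pipelines, so this is only a minimal sketch:

```python
import numpy as np

def disparity_ssd(left, right, patch=5, max_d=16):
    # for each pixel in the left image, search max_d candidate shifts in the
    # right image and keep the one with the lowest SSD window cost
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r + max_d, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [((ref - right[y - r:y + r + 1, x - d - r:x - d + r + 1]) ** 2).sum()
                     for d in range(max_d)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

    Larger disparity means a closer object, which is the signal a robot controller can steer or brake on.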

  20. Learning about the Dynamic Sun through Sounds

    NASA Astrophysics Data System (ADS)

    Quinn, M.; Peticolas, L. M.; Luhmann, J.; MacCallum, J.

    2008-06-01

    Can we hear the Sun or its solar wind? Not in the sense that they make sound. But we can take the particle, magnetic field, electric field, and image data and turn them into sound to demonstrate what the data tell us. We present work on turning data from the two-satellite NASA mission called STEREO (Solar TErrestrial RElations Observatory) into sounds and music (sonification). STEREO has two satellites orbiting the Sun near Earth's orbit to study coronal mass ejections (CMEs) from the corona. One sonification project aims to inspire musicians, museum patrons, and the public to learn more about CMEs by downloading STEREO data and using it to make music. We demonstrate the software and discuss the way in which it was developed. A second project aims to produce a museum exhibit using STEREO imagery and sounds from STEREO data. We demonstrate a "walk across the Sun" created for this exhibit so people can hear the features on solar images. We show how pixel intensity translates into pitches from selectable scales with selectable musical scale size and octave locations. We also share our successes and lessons learned.
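    One possible form of the pixel-intensity-to-pitch mapping is sketched below; the pentatonic scale, base note, and octave range are hypothetical choices for illustration, since the abstract does not specify them:

```python
import numpy as np

PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees in semitones above the base note

def intensity_to_midi(row, base=60, octaves=2):
    # map 0..255 pixel intensities onto the notes of a selectable scale:
    # brighter pixels get higher notes, spanning the requested octave range
    steps = len(PENTATONIC) * octaves
    idx = (row.astype(float) / 256 * steps).astype(int).clip(0, steps - 1)
    return [base + 12 * (i // len(PENTATONIC)) + PENTATONIC[i % len(PENTATONIC)]
            for i in idx]
```

    Swapping in a different degree list changes the scale, and changing `base` shifts the octave location, mirroring the selectable options the exhibit describes.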

  1. Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission

    NASA Technical Reports Server (NTRS)

    Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.

    2004-01-01

    In January 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a down-looking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.

  2. Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging

    NASA Astrophysics Data System (ADS)

    Lin, Bingxiong; Sun, Yu; Qian, Xiaoning

    2013-03-01

    Robust feature point matching for images with large view angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that can improve feature matching performance by exploiting the inherent geometric properties of organ surfaces. Recently, intensity-based template image tracking using a Thin Plate Spline (TPS) model has been extended to 3D surface tracking with stereo cameras. Intensity-based tracking is also used here for 3D reconstruction of internal organ surfaces. To overcome the small-displacement requirement of intensity-based tracking, feature point correspondences are first used for proper initialization of the nonlinear optimization in the intensity-based method. Second, we generate simulated images from the reconstructed 3D surfaces under all potential view positions and orientations, and then extract feature points from these simulated images. The obtained feature points are then filtered and re-projected to the common reference image. The descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicone phantoms and in vivo images. The experimental results show that our method is much more robust with respect to view angle changes than other state-of-the-art methods.
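    A Thin Plate Spline interpolates scattered correspondences with the radial basis U(r) = r² log r plus an affine term, found by solving one linear system over the control points. The minimal 2D scalar-field sketch below shows that standard TPS system; it is not the authors' tracking implementation:

```python
import numpy as np

def _U(d):
    # TPS radial basis U(r) = r^2 log r, with U(0) = 0
    return np.where(d > 0, d ** 2 * np.log(d + (d == 0)), 0.0)

def tps_fit(ctrl, vals, reg=0.0):
    # solve [[K, P], [P^T, 0]] [w, a] = [vals, 0] for RBF weights w
    # and affine coefficients a = (a0, ax, ay); reg adds smoothing
    n = len(ctrl)
    K = _U(np.linalg.norm(ctrl[:, None] - ctrl[None], axis=-1)) + reg * np.eye(n)
    P = np.hstack([np.ones((n, 1)), ctrl])
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.concatenate([vals, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]

def tps_eval(pts, ctrl, w, a):
    # f(p) = a0 + a.p + sum_i w_i U(|p - c_i|)
    U = _U(np.linalg.norm(pts[:, None] - ctrl[None], axis=-1))
    return U @ w + a[0] + pts @ a[1:]
```

    In a warp model each coordinate of the displacement field gets its own spline of this form; with `reg = 0` the surface interpolates the control values exactly.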

  3. Surface modeling method for aircraft engine blades by using speckle patterns based on the virtual stereo vision system

    NASA Astrophysics Data System (ADS)

    Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang

    2018-03-01

    A blade is one of the most important components of an aircraft engine. Due to its high manufacturing costs, it is indispensable to develop methods for repairing damaged blades. In order to obtain a surface model of the blades, this paper proposes a modeling method using speckle patterns based on a virtual stereo vision system. Firstly, blades are sprayed evenly to create random speckle patterns, and point clouds of the blade surfaces can be calculated from the speckle patterns using the virtual stereo vision system. Secondly, boundary points are obtained with step lengths varied according to curvature and are fitted with a cubic B-spline curve to obtain a blade surface envelope. Finally, the surface model of the blades is established from the envelope curves and the point clouds. Experimental results show that the resulting surface model of aircraft engine blades is fair and accurate.
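    The cubic B-spline representation used for the boundary envelope can be illustrated with the uniform cubic B-spline basis in matrix form; evaluating one curve segment from four control points (a generic sketch of the curve model, not the authors' fitting procedure):

```python
import numpy as np

# uniform cubic B-spline basis matrix (the 1/6 factor is applied at evaluation)
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  0,  3, 0],
              [ 1,  4,  1, 0]], dtype=float)

def bspline_segment(p0, p1, p2, p3, t):
    # point on the segment controlled by p0..p3 at parameter t in [0, 1];
    # the basis weights sum to 1, so the curve lies in the control points' hull
    T = np.array([t ** 3, t ** 2, t, 1.0])
    return (T @ M / 6.0) @ np.array([p0, p1, p2, p3])
```

    Sliding a window of four control points along the fitted boundary traces out the whole envelope curve segment by segment.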

  4. High Resolution Stereo Camera (HRSC) on Mars Express - a decade of PR/EO activities at Freie Universität Berlin

    NASA Astrophysics Data System (ADS)

    Balthasar, Heike; Dumke, Alexander; van Gasselt, Stephan; Gross, Christoph; Michael, Gregory; Musiol, Stefanie; Neu, Dominik; Platz, Thomas; Rosenberg, Heike; Schreiner, Björn; Walter, Sebastian

    2014-05-01

    Since 2003 the High Resolution Stereo Camera (HRSC) experiment on the Mars Express mission has been in orbit around Mars. The first images were sent to Earth on January 14th, 2004. Goal-oriented HRSC data dissemination and transparent presentation of the associated work and results are the main aspects that have contributed to the success of the experiment in the public perception. The Planetary Sciences and Remote Sensing Group at Freie Universität Berlin (FUB) offers both interactive web-based data access and browse/download options for HRSC press products [www.fu-berlin.de/planets]. Close collaboration with exhibitors as well as print and digital media representatives allows for regular and targeted dissemination of, e.g., conventional imagery, orbital/synthetic surface epipolar images, video footage, and high-resolution displays. On a monthly basis we prepare press releases in close collaboration with the European Space Agency (ESA) and the German Aerospace Center (DLR) [http://www.geo.fu-berlin.de/en/geol/fachrichtungen/planet/press/index.html]. A release comprises panchromatic, colour, anaglyph, and perspective views of a scene taken from an HRSC image of the Martian surface. In addition, a context map and descriptive texts in English and German are provided. More sophisticated press releases include elaborate animations and simulated flights over the Martian surface, perspective views of stereo data combined with colour and high resolution, mosaics, and perspective views of data mosaics. Altogether 970 high-quality PR products and 15 movies were created at FUB during the last decade and published via FUB/DLR/ESA platforms. We support educational outreach events, as well as permanent and special exhibitions.
Examples include the yearly "Science Fair", where special programs for kids are offered, and the exhibition "Mars Mission and Vision", which is on tour through 20 German towns until 2015, showing 3-D movies, surface models, and images of the HRSC camera experiment. Press and media appearances by group members, and talks to school classes and interested communities, also contribute to the public outreach. For HRSC data dissemination we use digital platforms. Since 2007 HRSC image data can be viewed and accessed via the online interface HRSCview [http://hrscview.fu-berlin.de], which was built in cooperation with the DLR Institute for Planetary Research. Additionally, HRSC ortho images (level 4) have been presented in a modern MapServer setup in GIS-ready format since 2013 [http://www.geo.fu-berlin.de/en/geol/fachrichtungen/planet/projects/marsexpress/level4downloads/index.html]. All of these offerings have ensured the accessibility of HRSC data and products to the science community as well as to the general public for the last ten years and will continue to do so in the future, taking advantage of modern and user-optimized applications and networks.

  5. Radiometric and geometric evaluation of GeoEye-1, WorldView-2 and Pléiades-1A stereo images for 3D information extraction

    NASA Astrophysics Data System (ADS)

    Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G.

    2015-02-01

    Today the use of spaceborne Very High Resolution (VHR) optical sensors for automatic 3D information extraction is increasing in the scientific and civil communities. The 3D Optical Metrology (3DOM) unit of the Bruno Kessler Foundation (FBK) in Trento (Italy) has collected VHR satellite imagery, as well as aerial and terrestrial data, over Trento to create a complete testfield for investigations on image radiometry, geometric accuracy, automatic digital surface model (DSM) generation, 2D/3D feature extraction, city modelling and data fusion. This paper addresses the radiometric and geometric aspects of the VHR spaceborne imagery included in the Trento testfield and its potential for 3D information extraction. The dataset consists of two stereo pairs acquired by WorldView-2 and by GeoEye-1 in panchromatic and multispectral mode, and a triplet from Pléiades-1A. For reference and validation, a DSM from an airborne LiDAR acquisition is used. The paper gives details on the project, the dataset characteristics and the achieved results.

  6. False Color Terrain Model of Phoenix Workspace

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This is a terrain model of Phoenix's Robotic Arm workspace, color-coded by depth, with a lander model added for context. The model was derived from stereo images taken by Phoenix's Surface Stereo Imager (SSI), which provides depth perception. Red indicates low-lying areas that appear to be troughs. Blue indicates higher areas that appear to be polygons.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  7. Terrain at Landing Site

    NASA Image and Video Library

    1997-07-05

    This view, showing portions of Mars Pathfinder's deflated airbags in the foreground, a large rock in the mid-field, and a hill in the background, was taken by the Imager for Mars Pathfinder (IMP) during the spacecraft's first day on the Red Planet. Pathfinder successfully landed on Mars at 10:07 a.m. PDT earlier today. The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per "eye." It stands 1.8 meters above the Martian surface and has a resolution of two millimeters at a range of two meters. http://photojournal.jpl.nasa.gov/catalog/PIA00615
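    The quoted figure of two millimeters at a range of two meters corresponds to a fixed angular resolution of about one milliradian, so the spatial resolution of the camera degrades linearly with distance. A minimal worked sketch (the function name and defaults are illustrative, not from the source):

```python
def resolution_at(range_m, res_m=0.002, ref_range_m=2.0):
    """Spatial resolution at a given range for a fixed angular resolution.

    IMP's quoted 2 mm at 2 m implies an angular resolution of about
    0.002 / 2.0 = 0.001 rad (1 mrad); resolution then scales with range.
    """
    angular_res = res_m / ref_range_m  # radians
    return angular_res * range_m
```

    For example, at 10 m range the same camera resolves features of about 1 cm.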

  8. Consequences of Incorrect Focus Cues in Stereo Displays

    PubMed Central

    Banks, Martin S.; Akeley, Kurt; Hoffman, David M.; Girshick, Ahna R.

    2010-01-01

    Conventional stereo displays produce images in which focus cues – blur and accommodation – are inconsistent with the simulated depth. We have developed new display techniques that allow the presentation of nearly correct focus. Using these techniques, we find that stereo vision is faster and more accurate when focus cues are mostly consistent with simulated depth; furthermore, viewers experience less fatigue when focus cues are correct or nearly correct. PMID:20523910

  9. Digital fundus image grading with the non-mydriatic Visucam(PRO NM) versus the FF450(plus) camera in diabetic retinopathy.

    PubMed

    Neubauer, Aljoscha S; Rothschuh, Antje; Ulbig, Michael W; Blum, Marcus

    2008-03-01

    Grading diabetic retinopathy in clinical trials is frequently based on 7-field stereo photography of the fundus in diagnostic mydriasis. In terms of image quality, the FF450(plus) camera (Carl Zeiss Meditec AG, Jena, Germany) defines a high-quality reference. The aim of the study was to investigate whether the fully digital fundus camera Visucam(PRO NM) could serve as an alternative in clinical trials requiring 7-field stereo photography. A total of 128 eyes of diabetes patients were enrolled in the randomized, controlled, prospective trial. Seven-field stereo photography was performed with the Visucam(PRO NM) and the FF450(plus) camera, in random order, both in diagnostic mydriasis. The resulting 256 image sets from the two camera systems were graded for retinopathy levels and image quality (on a scale of 1-5); both were anonymized and blinded to the image source. On FF450(plus) stereoscopic imaging, 20% of the patients had no or mild diabetic retinopathy (ETDRS level < or = 20) and 29% had no macular oedema. No patient had to be excluded as a result of image quality. Retinopathy level did not influence the quality of grading or of images. Excellent overall correspondence was obtained between the two fundus cameras regarding retinopathy levels (kappa 0.87) and macular oedema (kappa 0.80). In diagnostic mydriasis the image quality of the Visucam was graded as slightly better than that of the FF450(plus) (2.20 versus 2.41; p < 0.001), especially for pupils < 7 mm in mydriasis. The non-mydriatic Visucam(PRO NM) offers good image quality and is suitable as a more cost-efficient and easy-to-operate camera for applications and clinical trials requiring 7-field stereo photography.
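    The agreement figures quoted above (kappa 0.87 for retinopathy levels, 0.80 for macular oedema) are Cohen's kappa, which corrects the raw agreement between two graders (here, the two camera systems' gradings) for agreement expected by chance. A minimal sketch of the computation; the example confusion matrix in the usage note is made up:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix of two graders' labels.

    confusion[i][j] = number of cases grader A labelled i and grader B labelled j.
    kappa = (p_observed - p_expected) / (1 - p_expected).
    """
    total = sum(sum(row) for row in confusion)
    n = len(confusion)
    # Observed agreement: fraction of cases on the diagonal.
    observed = sum(confusion[i][i] for i in range(n)) / total
    # Chance agreement: product of the two graders' marginal frequencies.
    row_marg = [sum(row) for row in confusion]
    col_marg = [sum(confusion[i][j] for i in range(n)) for j in range(n)]
    expected = sum(row_marg[i] * col_marg[i] for i in range(n)) / total ** 2
    return (observed - expected) / (1 - expected)
```

    For instance, a two-level grading with 90/100 cases on the diagonal and balanced marginals gives kappa = (0.9 - 0.5) / 0.5 = 0.8.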

  10. An experimental comparison of standard stereo matching algorithms applied to cloud top height estimation from satellite IR images

    NASA Astrophysics Data System (ADS)

    Anzalone, Anna; Isgrò, Francesco

    2016-10-01

    The JEM-EUSO (Japanese Experiment Module-Extreme Universe Space Observatory) telescope will measure Ultra High Energy Cosmic Ray properties by detecting the UV fluorescent light generated in the interaction between cosmic rays and the atmosphere. Cloud information is crucial for a proper interpretation of these data. The problem of recovering cloud-top height from infrared satellite images has attracted attention over the last few decades as a valuable tool for atmospheric monitoring. A number of radiative methods exist, such as CO2-slicing and split-window algorithms, using one or more infrared bands. A different way to tackle the problem is, when possible, to exploit the availability of multiple views and recover the cloud-top height through stereo imaging and triangulation. A crucial step in the 3D reconstruction is the process of matching characteristic points or features selected in one image with those detected in the second image. In this article, the performance of a group of matching algorithms, including both area-based and global techniques, is tested. They are applied to stereo pairs of satellite IR images with the final aim of estimating cloud-top height. Cloudy images from SEVIRI on the geostationary Meteosat Second Generation satellites 9 and 10 (MSG-2, MSG-3) were selected. After applying the stereo matching algorithms to the cloudy scenes, the resulting disparity maps are converted into depth maps according to the geometry of the reference data system. As ground truth we used the height maps provided by the database of MODIS (Moderate Resolution Imaging Spectroradiometer) on board the Terra/Aqua polar satellites, which contains images quasi-synchronous with the MSG imagery.
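    The disparity-to-height step above depends on the actual satellite viewing geometry, but in the canonical rectified two-view case it reduces to the standard triangulation relation Z = f·B/d. A minimal sketch of that reduction (the parameter values in the usage note are illustrative, not from the paper):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole triangulation for a rectified stereo pair: Z = f * B / d.

    disparity_px: horizontal pixel shift of a matched feature,
    focal_px: focal length expressed in pixels,
    baseline_m: separation of the two viewpoints in meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

    E.g., a 10-pixel disparity with a 1000-pixel focal length and 0.5 m baseline triangulates to 50 m; larger disparities mean nearer (here, lower) surfaces.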

  11. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking

    PubMed Central

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-01-01

    This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483

  12. Image-size differences worsen stereopsis independent of eye position

    PubMed Central

    Vlaskamp, Björn N. S.; Filippini, Heather R.; Banks, Martin S.

    2010-01-01

    With the eyes in forward gaze, stereo performance worsens when one eye’s image is larger than the other’s. Near, eccentric objects naturally create retinal images of different sizes. Does this mean that stereopsis exhibits deficits for such stimuli? Or does the visual system compensate for the predictable image-size differences? To answer this, we measured discrimination of a disparity-defined shape for different relative image sizes. We did so for different gaze directions, some compatible with the image-size difference and some not. Magnifications of 10–15% caused a clear worsening of stereo performance. The worsening was determined only by relative image size and not by eye position. This shows that no neural compensation for image-size differences accompanies eye-position changes, at least prior to disparity estimation. We also found that a local cross-correlation model for disparity estimation performs like humans in the same task, suggesting that the decrease in stereo performance due to image-size differences is a byproduct of the disparity-estimation method. Finally, we looked for compensation in an observer who has constantly different image sizes due to differing eye lengths. She performed best when the presented images were roughly the same size, indicating that she has compensated for the persistent image-size difference. PMID:19271927
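    The local cross-correlation model referred to above matches a window around each pixel of one eye's image against horizontally shifted windows in the other eye's image and takes the best-scoring shift as the disparity estimate. A minimal sketch of that window-matching step (window size and search range here are arbitrary choices, not the paper's):

```python
import numpy as np

def ncc(a, b):
    # Zero-mean normalized cross-correlation of two equal-size patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def disparity_at(left, right, row, col, half, max_disp):
    """Best horizontal shift (in pixels) of a (2*half+1)^2 window, left -> right."""
    patch = left[row - half:row + half + 1, col - half:col + half + 1]
    scores = []
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:                      # candidate window off the image
            scores.append(-np.inf)
            continue
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        scores.append(ncc(patch, cand))
    return int(np.argmax(scores))
```

    An image-size difference between the two eyes, as studied above, misaligns these windows vertically and horizontally by an amount that grows with eccentricity, which is one way such a correlator degrades.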

  13. Segmentation via fusion of edge and needle map

    NASA Astrophysics Data System (ADS)

    Ahn, Hong-Young; Tou, Julius T.

    1991-03-01

    This paper presents an integrated image segmentation method using edge and needle-map information, which compensates for the deficiencies of using either an edge-based or a region-based approach alone. Segmentation is the first and most difficult step toward symbolic transformation of a raw image, which is essential in image understanding. In industrial applications, the task is further complicated by the ubiquitous presence of specularity on most industrial parts. Three images taken under three different illumination directions were used to separate the specular and Lambertian components of the images. A needle map is generated from the Lambertian component images using the photometric stereo technique. In one channel, edges are extracted and linked from the averaged Lambertian images, providing one source of segmentation. In the other channel, Gaussian and mean curvature values are estimated at each pixel from a least-squares local surface fit of the needle map. A labeled surface-type image is then generated using the signs of the Gaussian and mean curvatures, with one of ten surface types assigned to each pixel. Connected regions of identical surface-type pixels provide the first-level grouping, a rough initial segmentation. The edge information and the initial surface-type segmentation are fed to an integration module, which interprets the edges and regions in a consistent way. During interpretation, regions are merged or split and edges are discarded or generated depending on the global surface-fit error and consistency with neighboring regions. The output of the integrated segmentation is an explicit description of the surface type and contours of each region, which facilitates recognition, localization and attitude determination of objects in the image.
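    The surface-type labeling step above is a lookup on the signs of the Gaussian (K) and mean (H) curvature. The classic Besl-Jain table sketched here has eight realizable types out of nine sign combinations; the paper's ten-type scheme may partition the sign space somewhat differently, so this is an illustrative variant rather than the authors' exact table:

```python
def surface_type(K, H, eps=1e-6):
    """Label a surface point by the signs of its Gaussian and mean curvature.

    Classic Besl-Jain style lookup; (K > 0, H = 0) cannot occur because
    H^2 >= K always holds for a real surface.
    """
    sk = 0 if abs(K) < eps else (1 if K > 0 else -1)
    sh = 0 if abs(H) < eps else (1 if H > 0 else -1)
    table = {
        (1, -1): "peak",       (0, -1): "ridge",  (-1, -1): "saddle ridge",
        (1, 0): "(impossible)", (0, 0): "flat",    (-1, 0): "minimal surface",
        (1, 1): "pit",         (0, 1): "valley",  (-1, 1): "saddle valley",
    }
    return table[(sk, sh)]
```

    Connected components of pixels sharing a label then give the rough initial segmentation described in the abstract.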

  14. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques: One year on with a focus on auto-DTM, auto-coregistration and citizen science.

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Sidiropoulos, Panagiotis; Yershov, Vladimir; Gwinner, Klaus; van Gasselt, Stephan; Walter, Sebastian; Ivanov, Anton; Morley, Jeremy; Sprinks, James; Houghton, Robert; Bamford, Stephen; Kim, Jung-Rack

    2015-04-01

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 8 years, especially in 3D imaging of surface shape (down to resolutions of 10 cm) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the ability to overlay different epochs back to the mid-1970s and examine time-varying changes (such as impact craters, RSLs, CO2 geysers, gullies, boulder movements and a host of ice-related phenomena). Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004 the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25 m nadir images), with 98% coverage at ≤100 m and more than 70% useful for stereo mapping (i.e., atmosphere sufficiently clear). It has been demonstrated [Gwinner et al., 2010] that HRSC has the highest possible planimetric accuracy of ≤25 m and is well co-registered with MOLA, which represents the global 3D reference frame. HRSC 3D and terrain-corrected image products therefore represent the best available 3D reference data for Mars. Recently, [Gwinner et al., 2015] have shown the ability to generate mosaiced DTMs and BRDF-corrected surface reflectance maps. NASA began imaging the surface of Mars with flybys in the 1960s; the first orbital images at ≤100 m came in the late 1970s from the Viking Orbiters. The most recent orbiter, NASA's MRO, began imaging in November 2006 and has acquired stereo surface imagery of around 1% of the Martian surface from HiRISE (at ≈25 cm) and ≈5% from CTX (≈6 m). Unfortunately, for most of these NASA images, especially those from MGS, MO, VO and HiRISE, the georeferencing accuracy is often worse than that of the Mars reference data from HRSC. This reduces their value for analysing changes in time series. 
Within the iMars project (http://i-Mars.eu), a fully automated large-scale processing ("Big Data") solution has been developed to generate the best possible multi-resolution DTM of Mars, co-registering the DLR HRSC products (50-100 m grid) with those from CTX (6-20 m grid, loc. cit.) and HiRISE (1-3 m grids), on a large-scale Linux cluster based at MSSL with 224 cores and 0.25 PB of storage. The HRSC products are employed to provide a geographic reference for all current, future and historical NASA products using automated co-registration based on feature points. Results of this automated co-registration and of the subsequent automated DTM generation will be shown. The metadata already available for all orbital imagery acquired to date, even where the georeferencing information is poor, has been employed to determine the "sweet spots" that have long time series of measurements across different spatial resolution ranges over the last ≈50 years of observations, and these will be shown. Starting in July 2015, as much as possible of the entire NASA and ESA record of orbital images will be co-registered, and the updated georeferencing information will be employed to generate a time series of terrain-relief-corrected orthorectified images (ORIs) back to 1977. A web GIS using OGC protocols will be employed to allow visual exploration of surface changes. An example of this will be shown for the latest DLR HRSC DTMs at 100 m and BRDF-corrected surface reflectance at 1 km. Data-mining algorithms are being developed to search for changes in the Martian surface from 1971-2015, and their output will be compared against the results of citizen scientists' measurements in a specialised Zooniverse implementation. The results of an analysis of existing citizen science projects, and lessons learnt for iMars, will be shown. Final co-registered datasets will be distributed through both European and US channels in a manner to be decided towards the end of the project. 
The resultant co-registered image datasets will represent the best possible record of changes and evolution of the Martian surface. A workshop is planned during the EPSC time period to demonstrate the first science results on these different types of changes. Acknowledgements: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement n˚ 607379. Partial support is also provided from the STFC "MSSL Consolidated Grant" ST/K000977/1. References: Gwinner, K., et al. (2010) Topography of Mars from global mapping by HRSC high-resolution digital terrain models and orthoimages: characteristics and performance. Earth and Planetary Science Letters 294, 506-519, doi:10.1016/j.epsl.2009.11.007, 2010; Gwinner, K., et al. (2015) Mars Express High Resolution Stereo Camera (HRSC) Multi-orbit Data Products: Methodology, Mapping Concepts and Performance for the first Quadrangle (MC-11E). This conference.
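    The automated co-registration described above, based on matched feature points, typically reduces to a least-squares fit of a low-order geometric transform between the two images' coordinates. The iMars pipeline itself is considerably more elaborate; this is only a minimal 2-D affine sketch under that simplifying assumption:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points onto dst points.

    src, dst: (N, 2) arrays of matched feature coordinates (N >= 3).
    Returns a 3x2 parameter matrix M such that [x y 1] @ M ~= [x' y'].
    """
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])          # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ M = dst
    return M

def apply_affine(M, pts):
    """Transform an (N, 2) array of points with a fitted 3x2 affine matrix."""
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M
```

    In practice a robust estimator (e.g. RANSAC over the matches) would wrap this fit to reject mismatched feature points before the least-squares step.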

  15. Three-dimensional displays and stereo vision

    PubMed Central

    Westheimer, Gerald

    2011-01-01

    Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes. PMID:21490023

  16. Phoenix Animation Looking North

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site] Click on image for animation

    This animation is a series of images, taken by NASA's Phoenix Mars Lander's Surface Stereo Imager, combined into a panoramic view looking north from the lander. The area depicted is beyond the immediate workspace of the lander and shows a system of polygons and troughs that connect with the ones Phoenix will be investigating in depth.

    The images were taken on sol 14 (June 8, 2008) or the 14th Martian day after landing.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  17. Phoenix Telltale Movement

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site] Click on image for animation

    This is an animation of images taken by NASA's Phoenix Mars Lander's Surface Stereo Imager (SSI). It concludes with a set of SSI images of the telltale taken on the first, second, and third days of the mission, or sols 1, 2, and 3 (May 26, 27, and 28, 2008). The last set of images was taken one minute apart and shows the telltale moving in the wind.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  18. Phoenix Again Carries Soil to Wet Chemistry Lab

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image taken by the Surface Stereo Imager on NASA's Phoenix Mars Lander shows the lander's Robotic Arm scoop positioned over the Wet Chemistry Lab Cell 1 delivery funnel on Sol 41, the 42nd Martian day after landing, or July 6, 2008, after a soil sample was delivered to the instrument.

    The instrument's Cell 1 is the second one from the foreground of the image. The first cell, Cell 0, received a soil sample two weeks earlier.

    This image has been enhanced to brighten the scene.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  19. Ranging through Gabor logons-a consistent, hierarchical approach.

    PubMed

    Chang, C; Chatterjee, S

    1993-01-01

    In this work, the correspondence problem in stereo vision is handled by matching two sets of dense feature vectors. Inspired by biological evidence, these feature vectors are generated by correlating a bank of Gabor sensors with the intensity image. The sensors consist of two-dimensional Gabor filters at various scales (spatial frequencies) and orientations, which bear a close resemblance to the receptive-field profiles of simple cells in the primary visual cortex (V1). A hierarchical, stochastic relaxation method is then used to obtain the dense stereo disparities. Unlike traditional hierarchical methods for stereo, feature-based hierarchical processing yields consistent disparities. To avoid false matches due to static occlusion, a dual matching scheme based on the imaging geometry is used.
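    A bank of 2-D Gabor filters at several scales and orientations, as described above, can be sketched as follows; the per-pixel feature vector is the set of windowed dot products with each kernel. Kernel sizes and parameters here are arbitrary illustrative choices, not the paper's:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (cosine-phase) 2-D Gabor: an oriented sinusoid under a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # coordinate along theta
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def feature_vector(image, r, c, bank):
    """Responses of every kernel in the bank to the patch centred at (r, c)."""
    half = bank[0].shape[0] // 2
    patch = image[r - half:r + half + 1, c - half:c + half + 1]
    return np.array([(patch * k).sum() for k in bank])
```

    A vertical grating, for instance, responds far more strongly to the kernel whose orientation matches it than to the orthogonal one, which is what makes these vectors discriminative for matching.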

  20. Investigation of small scale roughness properties of Martian terrains using Mars Reconnaissance Orbiter data.

    NASA Astrophysics Data System (ADS)

    Ivanov, A. B.; Rossi, A.

    2009-04-01

    Studies of layered terrains in polar regions, as well as inside craters and other areas on Mars, often require knowledge of local topography at much finer resolution than the global MOLA topography allows. For example, in the polar layered deposits, spatial relationships are important for understanding the unconformities observed at the edges of the layered terrains [15,3]. Their formation process is not understood at this point, yet fine-scale topography, together with ground-penetrating radars such as SHARAD and MARSIS, may shed light on their 3D structure. Landing-site analysis also requires knowledge of local slopes and roughness at scales from 1 to 10 m [1,2]. The Mars Orbiter Camera [10] took stereo images at these scales; however, interpretation was difficult due to the unstable behavior of the Mars Global Surveyor spacecraft during image acquisition (a wobbling effect). The Mars Reconnaissance Orbiter (MRO) is much better stabilized, since stability is required for optimal operation of its high-resolution camera. In this work we have utilized data from the MRO sensors (the CTX camera [11] and the HiRISE camera [12]) to derive digital elevation models (DEMs) from images targeted as stereo pairs. We employed methods and approaches developed for Mars Orbiter Camera (MOC) stereo data [4,5]. CTX data vary in resolution; DEMs from the stereo pairs analyzed in this work can be derived at approximately 10 m scale. HiRISE images allow DEM post spacing of around 1 meter. The latter are very large images, and our computing infrastructure was only able to process either reduced-resolution images covering a larger surface area or smaller patches at the original resolution. We employed the stereo matching technique described in [5,9], in conjunction with radiometric and geometric image processing in ISIS3 [16]. This technique is capable of deriving tiepoint co-registration at subpixel precision and has proven itself in Pathfinder and MER operations [8]. 
A considerable part of this work was accommodating CTX and HiRISE image processing in the existing data-processing pipeline while improving it at the same time. Currently the workflow is not finished: the DEM units are relative and not yet in absolute elevation. We have been able to derive successful DEMs from CTX data for the Becquerel [14] and Crommelin craters, as well as for some areas in the North Polar Layered Terrain. Owing to its tremendous resolution, HiRISE data show great surface detail, allowing better correlation than the other sensors considered in this work. In all cases the DEMs showed considerable potential for exploration of terrain characteristics. Next steps include cross-validating our results with DEMs produced by other teams and sensors (HRSC [6], HiRISE [7]) and providing elevation as absolute height above the MOLA areoid. MRO imaging data allow us an unprecedented look at the Martian terrain. This work provides a step toward the derivation of DEMs from HiRISE and CTX datasets and is currently undergoing validation against other existing datasets. We will present our latest results for layering structures in both the North and South Polar Layered Deposits, as well as layered structures inside the Becquerel and Crommelin craters. Digital elevation models derived from the CTX sensor can also be utilized effectively as an input for clutter-reduction models, which are in turn used for the ground-penetrating SHARAD instrument [13]. References. [1] R. Arvidson, et al. Mars Exploration Program 2007 Phoenix landing site selection and characteristics. Journal of Geophysical Research-Planets, 113, Jun 2008. [2] M. Golombek, et al. Assessment of Mars Exploration Rover landing site predictions. Nature, 436(7047):44-48, Jul 2005. [3] K. E. Herkenhoff, et al. Meter-scale morphology of the north polar region of Mars. Science, 317(5845):1711-1715, Sep 2007. [4] A. B. Ivanov. Ten-Meter Scale Topography and Roughness of Mars Exploration Rovers Landing Sites and Martian Polar Regions. 
volume 34 of Lunar and Planetary Inst. Technical Report, pages 2084-+, Mar. 2003. [5] A. B. Ivanov and J. J. Lorre. Analysis of Mars Orbiter Camera Stereo Pairs. In Lunar and Planetary Institute Conference Abstracts, volume 33 of Lunar and Planetary Inst. Technical Report, pages 1845-+, Mar. 2002. [6] R. Jaumann, et al. The High Resolution Stereo Camera (HRSC) experiment on Mars Express: Instrument aspects and experiment conduct from interplanetary cruise through the nominal mission. Planetary and Space Science, 55(7-8):928-952, May 2007. [7] R. L. Kirk, et al. Ultrahigh resolution topographic mapping of Mars with MRO HiRISE stereo images: Meter-scale slopes of candidate Phoenix landing sites. Journal of Geophysical Research-Planets, 113, Nov 2008. [8] S. Lavoie, et al. Processing and analysis of Mars Pathfinder science data at the Jet Propulsion Laboratory's Science Data Processing Systems Section. Journal of Geophysical Research-Planets, 104(E4):8831-8852, Apr 1999. [9] J. J. Lorre, et al. Recent developments at JPL in the application of image processing to astronomy. In D. L. Crawford, editor, Instrumentation in Astronomy III, volume 172 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, pages 394-402, 1979. [10] M. Malin, et al. Early views of the Martian surface from the Mars Orbiter Camera of Mars Global Surveyor. Science, 279(5357):1681-1685, Mar 1998. [11] M. C. Malin, et al. Context Camera investigation on board the Mars Reconnaissance Orbiter. Journal of Geophysical Research-Planets, 112(E5), May 2007. [12] A. S. McEwen, et al. Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment (HiRISE). Journal of Geophysical Research-Planets, 112(E5), May 2007. [13] A. Rossi, et al. Multi-spacecraft synergy with MEX HRSC and MRO SHARAD: Light-Toned Deposits in crater bulges. AGU Fall Meeting Abstracts, pages B1371+, Dec. 2008. [14] A. P. Rossi, et al. 
Stratigraphic architecture and structural control on sediment emplacement in Becquerel crater (Mars). Volume 40, Lunar and Planetary Science Institute, 2009. [15] K. L. Tanaka, et al. North polar region of Mars: Advances in stratigraphy, structure, and erosional modification. Icarus, Aug 2008. [16] USGS. Planetary image processing software: ISIS3. http://isis.astrogeology.usgs.gov/
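    The subpixel tiepoint precision mentioned for the stereo matcher above is commonly obtained by interpolating the discrete correlation scores around the integer peak. A minimal parabolic-fit sketch of that refinement step (illustrative, not the specific JPL implementation):

```python
def subpixel_peak(scores):
    """Refine an integer argmax to subpixel precision.

    Fits a parabola through the best score and its two neighbours and
    returns the parabola's vertex position (a float index).
    """
    # Restrict to interior indices so both neighbours exist.
    i = max(range(1, len(scores) - 1), key=lambda j: scores[j])
    l, c, r = scores[i - 1], scores[i], scores[i + 1]
    denom = l - 2 * c + r
    return i if denom == 0 else i + 0.5 * (l - r) / denom
```

    Applied along both image axes at each tiepoint, this turns pixel-quantised match positions into the fractional offsets needed for meter-scale DEM post spacing.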

  1. Automated determination of cup-to-disc ratio for classification of glaucomatous and normal eyes on stereo retinal fundus images

    NASA Astrophysics Data System (ADS)

    Muramatsu, Chisako; Nakagawa, Toshiaki; Sawada, Akira; Hatanaka, Yuji; Yamamoto, Tetsuya; Fujita, Hiroshi

    2011-09-01

    Early diagnosis of glaucoma, which is the second leading cause of blindness in the world, can halt or slow the progression of the disease. We propose an automated method for analyzing the optic disc and measuring the cup-to-disc ratio (CDR) on stereo retinal fundus images to improve ophthalmologists' diagnostic efficiency and potentially reduce variation in CDR measurement. The method was developed using 80 retinal fundus image pairs, including 25 glaucomatous and 55 nonglaucomatous eyes, obtained at our institution. The disc region was segmented using the active contour method with brightness and edge information. Segmentation of the cup region was performed using a depth map of the optic disc, reconstructed on the basis of the stereo disparity. The CDRs were measured and compared with those determined from the manual segmentation results of an expert ophthalmologist. The method was then applied to a new database consisting of 98 stereo image pairs, including 60 and 30 pairs with and without signs of glaucoma, respectively. Using the CDRs, an area under the receiver operating characteristic curve of 0.90 was obtained for classification of the glaucomatous and nonglaucomatous eyes. The result indicates the potential usefulness of automated determination of CDRs for the diagnosis of glaucoma.
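    The reported area under the ROC curve (0.90) for a single score such as the CDR can be computed directly as a rank statistic: the probability that a randomly chosen glaucomatous eye has a larger CDR than a randomly chosen normal eye (the Mann-Whitney formulation). A minimal sketch; the scores in the usage note are made up:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as P(score of a positive case > score of a negative case).

    Ties count half; equivalent to the normalized Mann-Whitney U statistic
    and to the area under the empirical ROC curve.
    """
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

    For example, glaucomatous CDRs [0.9, 0.8, 0.4] against normal CDRs [0.3, 0.5] win 5 of the 6 pairwise comparisons, giving AUC = 5/6.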

  2. Real-time handling of existing content sources on a multi-layer display

    NASA Astrophysics Data System (ADS)

    Singh, Darryl S. K.; Shin, Jung

    2013-03-01

    A Multi-Layer Display (MLD) consists of two or more imaging planes separated by physical depth, where the depth is a key component in creating a glasses-free 3D effect. Its core benefits include being viewable from multiple angles and having full panel resolution for 3D effects, with no side effects of nausea or eye strain. However, content must typically be designed for its optical configuration as foreground and background image pairs. A process was designed to give a consistent 3D effect on a 2-layer MLD from existing stereo video content in real time. Optimizations to stereo matching algorithms that generate depth maps in real time were specifically tailored to the optical characteristics and image processing algorithms of an MLD. The end-to-end process included improvements to the Hierarchical Belief Propagation (HBP) stereo matching algorithm, to optical flow and to temporal consistency. Imaging algorithms designed for the optical characteristics of an MLD provided some visual compensation for depth-map inaccuracies. The result can be demonstrated in a PC environment, displayed on a 22" MLD with 8 mm of panel separation, as used in the casino slot market. Prior to this development, stereo content had not been used to achieve a depth-based 3D effect on an MLD in real time.

  3. Sounds of space: listening to the Sun-Earth connection

    NASA Astrophysics Data System (ADS)

    Craig, N.; Mendez, B.; Luhmann, J.; Sircar, I.

    2003-04-01

    NASA's STEREO/IMPACT Mission includes an Education and Public Outreach component that seeks to offer national programs for broad audiences highlighting the mission's solar and geo-space research. In an effort to make observations of the Sun more accessible and exciting for a general audience, we look for alternative ways to represent the data. Scientists most often represent data visually, in images, graphs, and movies. However, any data can also be represented as sound audible to the human ear, a process known as sonification. We will present our plans for a prototype program that converts solar energetic particle data into sound. We plan to make the sounds, imagery, and data available to the public through the World Wide Web, where visitors may create their own sonifications, and to adapt this effort to a science-museum kiosk format. The kiosk station would include information on the STEREO mission and monitors showing images of the Sun from each of STEREO's two satellites. Our goal is to incorporate 3D goggles and a headset into the kiosk, allowing visitors to see current or archived images in 3D and hear stereo sounds resulting from sonification of the corresponding data. Ultimately, we hope to collaborate with composers and create musical works inspired by these sounds and related solar images.
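    Sonification of a data series, as described above, can be as simple as mapping each sample to a pitch and rendering a sequence of tones. A minimal sketch; the sample rate, note length and frequency range are illustrative choices, not those of the STEREO/IMPACT program:

```python
import math

def sonify(values, sample_rate=8000, note_s=0.25, f_lo=220.0, f_hi=880.0):
    """Map each data value to a pitch between f_lo and f_hi (Hz) and
    render one sine tone per value, returning raw audio samples in [-1, 1]."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0                 # avoid divide-by-zero
    samples = []
    for v in values:
        freq = f_lo + (f_hi - f_lo) * (v - vmin) / span
        n = int(sample_rate * note_s)           # samples per note
        samples.extend(math.sin(2 * math.pi * freq * t / sample_rate)
                       for t in range(n))
    return samples
```

    The resulting sample list can be written to a WAV file (e.g. with the standard-library `wave` module) for playback; stereo output, as planned for the kiosk, would render one such channel per spacecraft.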

  4. A high-resolution three-dimensional far-infrared thermal and true-color imaging system for medical applications.

    PubMed

    Cheng, Victor S; Bai, Jinfen; Chen, Yazhu

    2009-11-01

    As the needs for various kinds of body surface information are wide-ranging, we developed an integrated imaging-sensor system that can synchronously acquire high-resolution three-dimensional (3D) far-infrared (FIR) thermal and true-color images of the body surface. The proposed system integrates one FIR camera and one color camera with a 3D structured-light binocular profilometer. To eliminate the discomfort caused by intense light projected directly into the subject's eyes from the LCD projector, we developed a gray encoding strategy based on an optimum fringe projection layout. A self-heated checkerboard has been employed to perform the calibration of the different types of cameras. We then calibrated the structured light emitted by the LCD projector, based on the stereo-vision principle and a least-squares quadric surface-fitting algorithm. Afterwards, the precise 3D surface can be fused with the undistorted thermal and color images. To enhance medical applications, the region of interest (ROI) in the temperature or color image representing the surface area of clinical interest can be located in the corresponding position in the other images through a coordinate-system transformation. System evaluation demonstrated a mapping error between FIR and visual images of three pixels or less. Experiments show that this work is significantly useful in certain disease diagnoses.
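    The ROI mapping between the thermal and color images amounts to a coordinate-system transformation. A minimal sketch of applying a 3x3 homogeneous transform to a pixel is shown below; the matrix values are illustrative, not the paper's calibration.

```python
# Map a pixel from one camera's image coordinates to another's via a
# 3x3 homogeneous transform (here a pure translation, for illustration).

def transform_point(H, pt):
    """Apply a 3x3 homogeneous transform H to a 2D point (x, y)."""
    x, y = pt
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

# Pure translation by (5, -3): identity rotation, offset in last column.
H = [[1, 0, 5],
     [0, 1, -3],
     [0, 0, 1]]
mapped = transform_point(H, (10, 10))
```

    In the system described, the transform would come from the checkerboard calibration of the camera pair rather than being specified by hand.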

  5. A Heuristic Approach to Remove the Background Intensity on White-light Solar Images. I. STEREO /HI-1 Heliospheric Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stenborg, Guillermo; Howard, Russell A.

    White-light coronal and heliospheric imagers observe scattering of photospheric light from both dust particles (the F-corona) and free electrons in the corona (the K-corona). The separation of the two coronae is thus vitally important to reveal the faint K-coronal structures (e.g., streamers, co-rotating interaction regions, coronal mass ejections, etc.). However, the separation of the two coronae is very difficult, so we are content in defining a background corona that contains the F- and as little K- as possible. For both the LASCO-C2 and LASCO-C3 coronagraphs aboard the Solar and Heliospheric Observatory (SOHO) and the white-light imagers of the SECCHI suite aboard the Solar Terrestrial Relations Observatory (STEREO), a time-dependent model of the background corona is generated from about a month of similar images. The creation of such models is possible because the missions carrying these instruments are orbiting the Sun at about 1 au. However, the orbit profiles for the upcoming Solar Orbiter and Solar Probe Plus missions are very different. These missions will have elliptic orbits with a rapidly changing radial distance, hence invalidating the techniques in use for the SOHO/LASCO and STEREO/SECCHI instruments. We have been investigating techniques to generate background models out of just single images that could be used for the Solar Orbiter Heliospheric Imager and the Wide-field Imager for the Solar Probe Plus packages on board the respective spacecraft. In this paper, we introduce a state-of-the-art, heuristic technique to create the background intensity models of STEREO/HI-1 data based solely on individual images, report on new results derived from its application, and discuss its relevance to instrumental and operational issues.

  6. First Dodo Trench with White Layer Visible in Dig Area

    NASA Technical Reports Server (NTRS)

    2008-01-01

    These color images were taken by the Surface Stereo Imager on NASA's Phoenix Mars Lander on the ninth Martian day of the mission, or Sol 9 (June 3, 2008). The images of the trench show a white layer that was uncovered by the Robotic Arm (RA) scoop and is now visible in the wall of the trench. This trench was the first one dug by the RA to understand the Martian soil and to plan the digging strategy.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  7. Rigorous Photogrammetric Processing of CHANG'E-1 and CHANG'E-2 Stereo Imagery for Lunar Topographic Mapping

    NASA Astrophysics Data System (ADS)

    Di, K.; Liu, Y.; Liu, B.; Peng, M.

    2012-07-01

    Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points differ from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining the EOPs by correcting the attitude angle bias, and 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1, and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high-precision DEMs (Digital Elevation Models) and DOMs (Digital Ortho Maps) are automatically generated.
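    Space intersection from two image rays can be sketched as finding the midpoint of closest approach between the two viewing rays. The code below is a generic two-ray triangulation under assumed names, not the CE-1/CE-2 sensor model itself.

```python
# Midpoint triangulation: intersect two (possibly skew) viewing rays
# p + t*d by solving for the closest-approach parameters in closed form.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(p1, d1, p2, d2):
    """Midpoint of closest approach between rays p1+t1*d1 and p2+t2*d2."""
    r = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = [p + t1 * v for p, v in zip(p1, d1)]
    q2 = [p + t2 * v for p, v in zip(p2, d2)]
    return [(u + v) / 2 for u, v in zip(q1, q2)]

# Two cameras one unit apart on x; both rays pass through (0.5, 0, 1):
point = triangulate([0, 0, 0], [0.5, 0, 1], [1, 0, 0], [-0.5, 0, 1])
```

    The back-projection residuals mentioned in the abstract are the image-plane distances between measured points and the projections of such triangulated ground points.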

  8. Arctic Landscape Within Reach

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image, one of the first captured by NASA's Phoenix Mars Lander, shows flat ground strewn with tiny pebbles and marked by small-scale polygonal cracking, a pattern seen widely in Martian high latitudes and also observed in permafrost terrains on Earth. The polygonal cracking is believed to have resulted from seasonal contraction and expansion of surface ice.

    Phoenix touched down on the Red Planet at 4:53 p.m. Pacific Time (7:53 p.m. Eastern Time), May 25, 2008, in an arctic region called Vastitas Borealis, at 68 degrees north latitude, 234 degrees east longitude.

    This image was acquired at the Phoenix landing site by the Surface Stereo Imager on day 1 of the mission on the surface of Mars, or Sol 0, after the May 25, 2008, landing.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  9. The lawful imprecision of human surface tilt estimation in natural scenes

    PubMed Central

    2018-01-01

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. PMID:29384477

  10. The lawful imprecision of human surface tilt estimation in natural scenes.

    PubMed

    Kim, Seha; Burge, Johannes

    2018-01-31

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. © 2018, Kim et al.

  11. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    DTIC Science & Technology

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process. Many researchers have successfully dealt with the problem of camera calibration by taking images of a 2D calibration pattern.

  12. MARS PATHFINDER CAMERA TEST IN SAEF-2

    NASA Technical Reports Server (NTRS)

    1996-01-01

    In the Spacecraft Assembly and Encapsulation Facility-2 (SAEF-2), workers from the Jet Propulsion Laboratory (JPL) are conducting a systems test of the imager for the Mars Pathfinder. Mounted on the Pathfinder lander, the imager (the white cylindrical element the worker is touching) is a specially designed camera featuring a stereo-imaging system with color capability provided by a set of selectable filters. It is mounted on an extendable mast that will pop up after the lander touches down on the Martian surface. The imager will transmit images of the terrain, allowing engineers back on Earth to survey the landing site before the Pathfinder rover is deployed to explore the area. The Mars Pathfinder is scheduled for launch aboard a Delta II expendable launch vehicle on Dec. 2. JPL manages the Pathfinder project for NASA.

  13. Improved stereo matching applied to digitization of greenhouse plants

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Xu, Lihong; Li, Dawei; Gu, Xiaomeng

    2015-03-01

    The digitization of greenhouse plants is an important aspect of digital agriculture. Its ultimate aim is to reconstruct a visible and interoperable virtual plant model on the computer by using state-of-the-art image processing and computer graphics technologies. The most prominent difficulties of the digitization of greenhouse plants include how to acquire the three-dimensional shape data of greenhouse plants and how to carry out its realistic stereo reconstruction. Concerning these issues, an effective method for the digitization of greenhouse plants using a binocular stereo vision system is proposed in this paper. Stereo vision is a technique aiming at inferring depth information from two or more cameras; it consists of four parts: calibration of the cameras, stereo rectification, search of stereo correspondence, and triangulation. Through the final triangulation procedure, the 3D point cloud of the plant can be obtained. The proposed stereo vision system can facilitate further segmentation of plant organs such as stems and leaves; moreover, it can provide reliable digital samples for the visualization of greenhouse tomato plants.
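    The correspondence-search step of the pipeline above can be illustrated with a tiny sum-of-absolute-differences (SAD) matcher on one rectified scanline; depth then follows from disparity as Z = f·B/d. This is a toy sketch under assumed names and toy camera parameters, not the paper's matching algorithm.

```python
# Toy block matching on a single rectified scanline: for a pixel in the
# left image, find the disparity that minimizes the SAD cost against the
# right image, then convert disparity to depth with Z = f * B / d.

def best_disparity(left, right, x, window=1, max_disp=4):
    """Return the disparity minimizing SAD around column x of a scanline."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d - window < 0:     # window would fall off the right image
            break
        cost = sum(abs(left[x + k] - right[x - d + k])
                   for k in range(-window, window + 1))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

left  = [0, 0, 9, 8, 7, 0, 0, 0]
right = [9, 8, 7, 0, 0, 0, 0, 0]   # same pattern shifted left by 2
d = best_disparity(left, right, x=3, max_disp=3)
depth = 1000.0 * 0.1 / d           # Z = f*B/d with toy f=1000 px, B=0.1 m
```

    Real systems apply this search with 2D windows over every pixel after calibration and rectification, which is what makes the scanline-only search valid.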

  14. Development of a stereo 3-D pictorial primary flight display

    NASA Technical Reports Server (NTRS)

    Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille

    1989-01-01

    Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth perceptions which otherwise might be lacking. In addition, the third dimension could also be used as an additional dimension along which information can be encoded. Historically, stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste. In the last example, the source of the stereo images has generally been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described. The applicability of stereo 3-D displays for aerospace crew stations to meet the anticipated needs of the 2000 to 2020 time frame is investigated. Although the actual equipment that could be used in an aerospace vehicle is not currently available, the lab research is necessary to determine where stereo 3-D enhances the display of information and how the displays should be formatted.

  15. 3D Visualization for Phoenix Mars Lander Science Operations

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol

    2012-01-01

    Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped, 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk, productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.

  16. Longest time series of glacier mass changes in the Himalaya based on stereo imagery

    NASA Astrophysics Data System (ADS)

    Bolch, T.; Pieczonka, T.; Benn, D. I.

    2010-12-01

    Mass loss of Himalayan glaciers has wide-ranging consequences such as declining water resources, sea level rise and an increasing risk of glacial lake outburst floods (GLOFs). The assessment of the regional and global impact of glacier changes in the Himalaya is, however, hampered by a lack of mass balance data for most of the range. Multi-temporal digital terrain models (DTMs) allow glacier mass balance to be calculated for the period since stereo imagery became available. Here we present the longest time series of mass changes in the Himalaya and show the high value of early stereo spy imagery such as Corona (years 1962 and 1970), aerial images and recent high-resolution satellite data (Cartosat-1) for calculating a time series of glacier changes south of Mt. Everest, Nepal. We reveal that the glaciers have been losing mass at an increasing rate since at least ~1970, despite thick debris cover. The specific mass loss is 0.32 ± 0.08 m w.e. a-1, which is, however, not higher than the global average. The spatial patterns of surface lowering can be explained by variations in debris-cover thickness, glacier velocity, and ice melt due to exposed ice cliffs and ponds.
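    The specific mass loss rate follows from the mean surface-elevation change between two DTMs, the observation period, and an ice-to-water density conversion. A minimal sketch with illustrative numbers; the density ratio of 0.85 and all names are assumptions for illustration, not the study's values.

```python
# Convert a DTM-derived mean elevation change into a specific mass
# balance rate in metres of water equivalent per year (m w.e. a-1).

def mass_balance_rate(dh_mean_m, years, density_ratio=0.85):
    """Mean elevation change (m) over a period -> m w.e. per year."""
    return dh_mean_m / years * density_ratio

# e.g. 15 m of mean thinning over 40 years:
rate = mass_balance_rate(dh_mean_m=-15.0, years=40.0)
```

    In practice the elevation differences are averaged over the glacier area, and the uncertainty of the density conversion contributes to the quoted ± error.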

  17. Surface topography characterization using 3D stereoscopic reconstruction of SEM images

    NASA Astrophysics Data System (ADS)

    Vedantha Krishna, Amogh; Flys, Olena; Reddy, Vijeth V.; Rosén, B. G.

    2018-06-01

    A major drawback of the optical microscope is its limited ability to resolve fine details. Many microscopes have been developed to overcome the limits set by the diffraction of visible light. The scanning electron microscope (SEM) is one such alternative: it uses electrons for imaging, which have much smaller wavelengths than photons. As a result, high magnification with superior image resolution can be achieved. However, SEM generates 2D images, which provide limited data for surface measurements and analysis. Many research areas require knowledge of 3D structures, as they contribute to a comprehensive understanding of microstructure by allowing effective measurements and qualitative visualization of the samples under study. For this reason, a stereo photogrammetry technique is employed to convert SEM images into 3D measurable data. This paper aims to utilize a stereoscopic reconstruction technique as a reliable method for characterization of surface topography. Reconstructed results from SEM images are compared with coherence scanning interferometry (CSI) results obtained by measuring a roughness reference standard sample. This paper presents a method to select the most robust and consistent surface texture parameters that are insensitive to the uncertainties involved in the reconstruction technique itself. Results from the two stereoscopic reconstruction algorithms are also documented in this paper.
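    Height reconstruction from an SEM stereo pair is commonly based on the parallax between two images taken at different tilt angles; one widely used small-tilt relation is z = p / (2·sin(Δθ/2)), with p the measured parallax in specimen units. The sketch below implements that relation under assumed names; it is a simplification, not the paper's reconstruction algorithm.

```python
# Relative height from eucentric-tilt SEM stereo parallax:
# z = p / (2 * sin(tilt_difference / 2)).

import math

def height_from_parallax(parallax_um, tilt_diff_deg):
    """Relative height (same units as parallax) from eucentric tilt."""
    return parallax_um / (2.0 * math.sin(math.radians(tilt_diff_deg) / 2.0))

# 1 um of parallax with a 10-degree tilt difference:
z = height_from_parallax(parallax_um=1.0, tilt_diff_deg=10.0)
```

    Note how small tilt differences amplify the height estimate and hence its sensitivity to parallax measurement error, one source of the reconstruction uncertainty the paper analyzes.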

  18. Deepest Trenching at Phoenix Site on Mars

    NASA Technical Reports Server (NTRS)

    2008-01-01

    NASA's Phoenix Mars Lander widened the deepest trench it has excavated, dubbed 'Stone Soup,' (in the lower half of this image) to collect a sample from about 18 centimeters (7 inches) below the surface for analysis by the lander's wet chemistry laboratory.

    Phoenix's Surface Stereo Imager took this image on Sol 95 (Aug. 30, 2008), the 95th Martian day since landing. For scale, the rock to the right of the Stone Soup trench is about 15 centimeters (6 inches) across. The lander's robotic arm scooped up a sample from the left half of the trench for delivery the following sol to the wet chemistry laboratory.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  19. 'Snow White' Trench

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image was acquired by NASA's Phoenix Mars Lander's Surface Stereo Imager on Sol 43, the 43rd Martian day after landing (July 8, 2008). This image shows the trench informally called 'Snow White.'

    Two samples were delivered to the Wet Chemistry Laboratory, which is part of Phoenix's Microscopy, Electrochemistry, and Conductivity Analyzer (MECA). The first sample was taken from the surface area just left of the trench and informally named 'Rosy Red.' It was delivered to the Wet Chemistry Laboratory on Sol 30 (June 25, 2008). The second sample, informally named 'Sorceress,' was taken from the center of the 'Snow White' trench and delivered to the Wet Chemistry Laboratory on Sol 41 (July 6, 2008).

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  20. Reconstructing White Walls: Multi-View Multi-Shot 3d Reconstruction of Textureless Surfaces

    NASA Astrophysics Data System (ADS)

    Ley, Andreas; Hänsch, Ronny; Hellwich, Olaf

    2016-06-01

    The reconstruction of the 3D geometry of a scene based on image sequences has been a very active field of research for decades. Nevertheless, challenges remain, in particular for homogeneous parts of objects. This paper proposes a solution to enhance the 3D reconstruction of weakly-textured surfaces by using standard cameras as well as a standard multi-view stereo pipeline. The underlying idea of the proposed method is to improve the signal-to-noise ratio in weakly-textured regions while adaptively amplifying the local contrast to make better use of the limited numerical range of 8-bit images. Based on this premise, multiple shots per viewpoint are used to suppress statistically uncorrelated noise and enhance low-contrast texture. By only changing the image acquisition and adding a preprocessing step, a tremendous increase of up to 300% in the completeness of the 3D reconstruction is achieved.
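    The multi-shot idea rests on averaging several exposures per viewpoint so that uncorrelated noise cancels (its standard deviation shrinks by √N) while the weak texture signal is preserved. A minimal per-pixel averaging sketch, with illustrative values:

```python
# Average N shots of the same viewpoint pixel by pixel to suppress
# statistically uncorrelated noise while keeping the texture signal.

def average_shots(shots):
    """Average a list of equally sized scanlines pixel by pixel."""
    n = len(shots)
    return [sum(px) / n for px in zip(*shots)]

# Three noisy shots of the same weak 3-pixel texture [10, 11, 10]:
shots = [[11, 10, 10],
         [ 9, 12, 10],
         [10, 11, 10]]
mean = average_shots(shots)
```

    The paper additionally amplifies local contrast before quantization; the averaging step alone is what makes that amplification safe, since it would otherwise amplify the noise too.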

  1. Comparison of different "along the track" high resolution satellite stereo-pair for DSM extraction

    NASA Astrophysics Data System (ADS)

    Nikolakopoulos, Konstantinos G.

    2013-10-01

    The possibility of creating DEMs from stereo pairs is based on the Pythagorean theorem and on the principles of photogrammetry that have been applied to aerial photograph stereo pairs for the last seventy years. The application of these principles to digital satellite stereo data was inherent in the first satellite missions. During the last decades, satellite stereo-pairs were acquired across the track on different days (SPOT, ERS, etc.). More recently, same-date along-the-track stereo-data acquisition seems to prevail (Terra ASTER, SPOT5 HRS, Cartosat, ALOS PRISM), as it reduces the radiometric image variations (refractive effects, sun illumination, temporal changes) and thus increases the correlation success rate in any image matching. Two of the newest satellite sensors with stereo collection capability are Cartosat and ALOS PRISM. Both of them acquire stereo-pairs along the track with a 2.5 m spatial resolution, covering areas of 30 × 30 km. In this study we compare two different satellite stereo-pairs collected along the track for DSM creation. The first one is created from a Cartosat stereo-pair and the second one from an ALOS PRISM triplet. The area of study is situated in the Chalkidiki Peninsula, Greece. Both DSMs were created using the same ground control points collected with a differential GPS. After a first check for random or systematic errors, a statistical analysis was done. Points of certified elevation have been used to estimate the accuracy of these two DSMs. The elevation difference between the different DEMs was calculated. 2D RMSE, correlation and the percentile value were also computed and the results are presented.
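    The DSM accuracy assessment boils down to statistics on elevation differences at check points; the RMSE computation can be sketched as follows (the values are illustrative, not the study's data):

```python
# Root-mean-square error of DSM elevations against certified check points.

def rmse(measured, reference):
    """RMSE between two equally sized elevation samples."""
    n = len(measured)
    return (sum((m - r) ** 2 for m, r in zip(measured, reference)) / n) ** 0.5

err = rmse([101.0, 99.0, 102.0, 98.0], [100.0, 100.0, 100.0, 100.0])
```

    Correlation and percentile statistics would be computed from the same per-point difference sample.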

  2. Cloud photogrammetry with dense stereo for fisheye cameras

    NASA Astrophysics Data System (ADS)

    Beekmans, Christoph; Schneider, Johannes; Läbe, Thomas; Lennefer, Martin; Stachniss, Cyrill; Simmer, Clemens

    2016-11-01

    We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km² using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows the recovery of a detailed and more complete cloud morphology than previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived and used, for example, for radiation closure under cloudy conditions. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view in a single image. However, the computation of dense 3-D information is more complicated and standard implementations for dense 3-D stereo reconstruction cannot be easily applied. Together with an appropriate camera calibration, which includes internal camera geometry and the global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of clouds located around the cameras. We implement and evaluate the proposed approach using real-world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated by a lidar ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.

  3. Topography and geomorphology of the Huygens landing site on Titan

    USGS Publications Warehouse

    Soderblom, L.A.; Tomasko, M.G.; Archinal, B.A.; Becker, T.L.; Bushroe, M.W.; Cook, D.A.; Doose, L.R.; Galuszka, D.M.; Hare, T.M.; Howington-Kraus, E.; Karkoschka, E.; Kirk, R.L.; Lunine, J.I.; McFarlane, E.A.; Redding, B.L.; Rizk, B.; Rosiek, M.R.; See, C.; Smith, P.H.

    2007-01-01

    The Descent Imager/Spectral Radiometer (DISR) aboard the Huygens Probe took several hundred visible-light images with its three cameras on approach to the surface of Titan. Several sets of stereo image pairs were collected during the descent. The digital terrain models constructed from those images show rugged topography, in places approaching the angle of repose, adjacent to flatter darker plains. Brighter regions north of the landing site display two styles of drainage patterns: (1) bright highlands with rough topography and deeply incised branching dendritic drainage networks (up to fourth order) with dark-floored valleys that are suggestive of erosion by methane rainfall and (2) short, stubby low-order drainages that follow linear fault patterns forming canyon-like features suggestive of methane spring-sapping. The topographic data show that the bright highland terrains are extremely rugged; slopes of the order of 30° appear common. These systems drain into adjacent relatively flat, dark lowland terrains. A stereo model for part of the dark plains region to the east of the landing site suggests surface scour across this plain flowing from west to east, leaving ~100-m-high bright ridges. Tectonic patterns are evident in (1) controlling the rectilinear, low-order, stubby drainages and (2) the "coastline" at the highland-lowland boundary with numerous straight and angular margins. In addition to flow from the highlands drainages, the lowland area shows evidence for more prolific flow parallel to the highland-lowland boundary, leaving bright outliers resembling terrestrial sandbars. This implies major west-to-east floods across the plains where the probe landed, with flow parallel to the highland-lowland boundary; the primary source of these flows is evidently not the dendritic channels in the bright highlands to the north. © 2007 Elsevier Ltd. All rights reserved.

  4. New Insights on Subsurface Imaging of Carbon Nanotubes in Polymer Composites via Scanning Electron Microscopy

    NASA Technical Reports Server (NTRS)

    Zhao, Minhua; Ming, Bin; Kim, Jae-Woo; Gibbons, Luke J.; Gu, Xiaohong; Nguyen, Tinh; Park, Cheol; Lillehei, Peter T.; Villarrubia, J. S.; Vladar, Andras E.

    2015-01-01

    Despite many studies of subsurface imaging of carbon nanotube (CNT)-polymer composites via scanning electron microscopy (SEM), significant controversy exists concerning the imaging depth and contrast mechanisms. We studied CNT-polyimide composites and, by three-dimensional reconstructions of captured stereo-pair images, determined that the maximum SEM imaging depth was typically hundreds of nanometers. The contrast mechanisms were investigated over a broad range of beam accelerating voltages from 0.3 to 30 kV, and ascribed to modulation by embedded CNTs of the effective secondary electron (SE) emission yield at the polymer surface. This modulation of the SE yield is due to a non-uniform surface potential distribution resulting from current flows due to leakage and electron-beam-induced current. The importance of an external electric field on SEM subsurface imaging was also demonstrated. The insights gained from this study can be generally applied to SEM nondestructive subsurface imaging of conducting nanostructures embedded in dielectric matrices, such as graphene-polymer composites, silicon-based single electron transistors, high-resolution SEM overlay metrology or e-beam lithography, and have significant implications in nanotechnology.

  5. Orbit Determination and Navigation of the Solar Terrestrial Relations Observatory (STEREO)

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael A.; Robertson, Mika; Ottenstein, Neil; Nicholson, Ann; Nicholson, Mark; Ward, Douglas T.; Cosgrove, Jennifer; German, Darla; Hendry, Stephen; Shaw, James

    2007-01-01

    This paper provides an overview of the required upgrades necessary for navigation of NASA's twin heliocentric science missions, Solar TErestrial RElations Observatory (STEREO) Ahead and Behind. The orbit determination of the STEREO spacecraft was provided by the NASA Goddard Space Flight Center's (GSFC) Flight Dynamics Facility (FDF) in support of the mission operations activities performed by the Johns Hopkins University Applied Physics Laboratory (APL). The changes to FDF's orbit determination software included modeling upgrades as well as modifications required to process the Deep Space Network X-band tracking data used for STEREO. Orbit results as well as comparisons to independently computed solutions are also included. The successful orbit determination support aided in maneuvering the STEREO spacecraft, launched on October 26, 2006 (00:52 Z), to target the lunar gravity assists required to place the spacecraft into their final heliocentric drift-away orbits where they are providing stereo imaging of the Sun.

  7. Reachability Maps for In Situ Operations

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Leger, Patrick C.; Robinson, Matthew L.; Bonitz, Robert G.

    2013-01-01

    This work covers two programs that accomplish the same goal: creation of a "reachability map" from stereo imagery that tells where operators of a robotic arm can reach or touch the surface, and with which instruments. The programs are "marsreach" (for MER) and "phxreach." These programs make use of the planetary image geometry (PIG) library. However, unlike the other programs, they are not multi-mission. Because of the complexity of arm kinematics, the programs are specific to each mission.

  8. CHAMP - Camera, Handlens, and Microscope Probe

    NASA Technical Reports Server (NTRS)

    Mungas, G. S.; Beegle, L. W.; Boynton, J.; Sepulveda, C. A.; Balzer, M. A.; Sobel, H. R.; Fisher, T. A.; Deans, M.; Lee, P.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As an arm-mounted imager, CHAMP supports stereo-imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision range-finding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP is currently designed with a 4-position filter wheel, so that color and black-and-white images can be obtained over the entire field of view; future designs will increase the number of filter positions to eight. Finally, CHAMP incorporates controlled white and UV illumination so that images can be obtained regardless of sun position and any potentially fluorescent species can be detected, so that the most astrobiologically interesting samples can be identified.

  9. Collaborative learning using Internet2 and remote collections of stereo dissection images.

    PubMed

    Dev, Parvati; Srivastava, Sakti; Senger, Steven

    2006-04-01

    We have investigated collaborative learning of anatomy over Internet2, using an application called remote stereo viewer (RSV). This application offers a unique method of teaching anatomy, using high-resolution stereoscopic images, in a client-server architecture. Rotated sequences of stereo image pairs were produced by volumetric rendering of the Visible Female and by dissecting and photographing a cadaveric hand. A client-server application (RSV) was created to provide access to these image sets, using a highly interactive interface. The RSV system was used to provide a "virtual anatomy" session for students in the Stanford Medical School Gross Anatomy course. The RSV application allows both independent and collaborative modes of viewing. The most appealing aspects of the RSV application were the capacity for stereoscopic viewing and the potential to access the content remotely within a flexible temporal framework. The RSV technology, used over Internet2, thus serves as an effective complement to traditional methods of teaching gross anatomy. (c) 2006 Wiley-Liss, Inc.

  10. Low-cost telepresence for collaborative virtual environments.

    PubMed

    Rhee, Seon-Min; Ziegler, Remo; Park, Jiyoung; Naef, Martin; Gross, Markus; Kim, Myoung-Hee

    2007-01-01

    We present a novel low-cost method for visual communication and telepresence in a CAVE-like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. The system was designed to extend existing projection systems with acquisition capabilities requiring minimal hardware modifications and cost. The system uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background. The system consists of two color cameras and two additional b/w cameras used for segmentation in the near-IR spectrum. There is no need for special optics as the mask and color image are merged using image-warping based on a depth estimation. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.

  11. The STEREO Mission: A New Approach to Space Weather Research

    NASA Technical Reports Server (NTRS)

    Kaiser, Michael L.

    2006-01-01

    With the launch of the twin STEREO spacecraft in July 2006, a new capability will exist for both real-time space weather predictions and for advances in space weather research. Whereas previous spacecraft monitors of the sun such as ACE and SOHO have been essentially on the sun-Earth line, the STEREO spacecraft will be in 1 AU orbits around the sun on either side of Earth and will be viewing the solar activity from distinctly different vantage points. As seen from the sun, the two spacecraft will separate at a rate of 45 degrees per year, with Earth bisecting the angle. The instrument complement on the two spacecraft will consist of a package of optical instruments capable of imaging the sun in the visible and ultraviolet from essentially the surface to 1 AU and beyond, a radio burst receiver capable of tracking solar eruptive events from an altitude of 2-3 Rs to 1 AU, and a comprehensive set of fields and particles instruments capable of measuring in situ solar events such as interplanetary magnetic clouds. In addition to normal daily recorded data transmissions, each spacecraft is equipped with a real-time beacon that will provide 1- to 5-minute snapshots or averages of the data from the various instruments. This beacon data will be received by NOAA and NASA tracking stations and then relayed to the STEREO Science Center located at Goddard Space Flight Center in Maryland, where the data will be processed and made available within a goal of 5 minutes of receipt on the ground. With STEREO's instrumentation and unique viewing geometry, we believe considerable improvement can be made in space weather prediction capability as well as in understanding of the three-dimensional structure of solar transient events.

  12. The NASA 2003 Mars Exploration Rover Panoramic Camera (Pancam) Investigation

    NASA Astrophysics Data System (ADS)

    Bell, J. F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.; Schwochert, M.; Morris, R. V.; Athena Team

    2002-12-01

    The Panoramic Camera System (Pancam) is part of the Athena science payload to be launched to Mars in 2003 on NASA's twin Mars Exploration Rover missions. The Pancam imaging system on each rover consists of two major components: a pair of digital CCD cameras, and the Pancam Mast Assembly (PMA), which provides the azimuth and elevation actuation for the cameras as well as a 1.5 meter high vantage point from which to image. Pancam is a multispectral, stereoscopic, panoramic imaging system, with a field of regard provided by the PMA that extends across 360° of azimuth and from zenith to nadir, providing a complete view of the scene around the rover. Pancam utilizes two 1024x2048 Mitel frame transfer CCD detector arrays, each having a 1024x1024 active imaging area and 32 optional additional reference pixels per row for offset monitoring. Each array is combined with optics and a small filter wheel to become one "eye" of a multispectral, stereoscopic imaging system. The optics for both cameras consist of identical 3-element symmetrical lenses with an effective focal length of 42 mm and a focal ratio of f/20, yielding an IFOV of 0.28 mrad/pixel or a rectangular FOV of 16° × 16° per eye. The two eyes are separated by 30 cm horizontally and have a 1° toe-in to provide adequate parallax for stereo imaging. The cameras are boresighted with adjacent wide-field stereo Navigation Cameras, as well as with the Mini-TES instrument. The Pancam optical design is optimized for best focus at 3 meters range, and allows Pancam to maintain acceptable focus from infinity to within 1.5 meters of the rover, with a graceful degradation (defocus) at closer ranges. Each eye also contains a small 8-position filter wheel to allow multispectral sky imaging, direct Sun imaging, and surface mineralogic studies in the 400-1100 nm wavelength region. Pancam has been designed and calibrated to operate within specifications from -55°C to +5°C.
An onboard calibration target and fiducial marks provide the ability to validate the radiometric and geometric calibration on Mars. Pancam relies heavily on use of the JPL ICER wavelet compression algorithm to maximize data return within stringent mission downlink limits. The scientific goals of the Pancam investigation are to: (a) obtain monoscopic and stereoscopic image mosaics to assess the morphology, topography, and geologic context of each MER landing site; (b) obtain multispectral visible to short-wave near-IR images of selected regions to determine surface color and mineralogic properties; (c) obtain multispectral images over a range of viewing geometries to constrain surface photometric and physical properties; and (d) obtain images of the Martian sky, including direct images of the Sun, to determine dust and aerosol opacity and physical properties. In addition, Pancam also serves a variety of operational functions on the MER mission, including (e) serving as the primary Sun-finding camera for rover navigation; (f) resolving objects on the scale of the rover wheels to distances of ~100 m to help guide navigation decisions; (g) providing stereo coverage adequate for the generation of digital terrain models to help guide and refine rover traverse decisions; (h) providing high resolution images and other context information to guide the selection of the most interesting in situ sampling targets; and (i) supporting acquisition and release of exciting E/PO products.
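    The quoted optical parameters are mutually consistent and can be cross-checked with a few lines of arithmetic. A minimal sketch (the pixel pitch and aperture diameter are derived here, not stated in the abstract):

```python
import math

focal_length_mm = 42.0     # Pancam effective focal length
f_number = 20.0            # focal ratio f/20
ifov_mrad = 0.28           # stated instantaneous field of view per pixel
pixels = 1024              # active pixels across one eye's array

# implied detector pixel pitch: IFOV (rad) x focal length
pixel_pitch_um = ifov_mrad * 1e-3 * focal_length_mm * 1e3

# full field of view subtended by the 1024-pixel array
fov_deg = math.degrees(pixels * ifov_mrad * 1e-3)

# entrance-aperture diameter implied by the focal ratio
aperture_mm = focal_length_mm / f_number

print(round(pixel_pitch_um, 2), round(fov_deg, 1), round(aperture_mm, 2))
# 11.76 16.4 2.1
```

    The 16.4° figure reproduces the stated 16° × 16° field of view to within rounding, and the 11.76 µm pitch is what a 0.28 mrad IFOV requires at 42 mm focal length.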

  13. Topographic map of the Parana Valles region of Mars MTM 500k -25/347E OMKT

    USGS Publications Warehouse

    ,

    2003-01-01

    This map, compiled photogrammetrically from Viking Orbiter stereo image pairs, is part of a series of topographic maps of areas of special scientific interest on Mars. MTM 500k –25/347E OMKT: Abbreviation for Mars Transverse Mercator; 1:500,000 series; center of sheet latitude 25° S., longitude 347.5° E. in planetocentric coordinate system (this corresponds to –25/012; latitude 25° S., longitude 12.5° W. in planetographic coordinate system); orthophotomosaic (OM) with color coded (K) topographic contours and nomenclature (T) [Greeley and Batson, 1990]. The figure of Mars used for the computation of the map projection is an oblate spheroid (flattening of 1/176.875) with an equatorial radius of 3396.0 km and a polar radius of 3376.8 km (Kirk and others, 2000). The datum (the 0-km contour line) for elevations is defined as the equipotential surface (gravitational plus rotational) whose average value at the equator is equal to the mean radius as determined by Mars Orbiter Laser Altimeter (Smith and others, 2001). The image base for this map employs Viking Orbiter images from orbit 651. An orthophotomosaic was created on the digital photogrammetric workstation using the DTM compiled from stereo models. Integrated Software for Imagers and Spectrometers (ISIS) (Torson and Becker, 1997) provided the software to project the orthophotomosaic into the Transverse Mercator Projection.
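    The planetocentric/planetographic correspondence quoted above combines a longitude-convention flip (east-positive vs. west-positive) with, strictly speaking, a latitude conversion on the oblate spheroid. A minimal sketch using the spheroid constants given in the abstract (function names are illustrative):

```python
import math

a_km, b_km = 3396.0, 3376.8    # Mars spheroid equatorial and polar radii

def centric_to_graphic_lat(lat_c_deg):
    """Planetocentric -> planetographic latitude on an oblate spheroid:
    tan(phi_graphic) = (a/b)**2 * tan(phi_centric)."""
    return math.degrees(math.atan((a_km / b_km) ** 2
                                  * math.tan(math.radians(lat_c_deg))))

def east_to_west_lon(lon_e_deg):
    """East-positive -> west-positive longitude convention."""
    return (360.0 - lon_e_deg) % 360.0

# 347.5 deg E (planetocentric convention) maps to 12.5 deg W
print(east_to_west_lon(347.5))                    # 12.5
# the latitude conversion shifts -25 deg by only about a quarter degree,
# which is why the sheet label keeps the nominal value
print(round(centric_to_graphic_lat(-25.0), 2))    # -25.25
```

    The longitude flip is exact; the small latitude shift explains why the sheet designation carries the same nominal latitude in both systems.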

  14. Topographic Map of the Northwest Loire Valles Region of Mars MTM 500k -15/337E OMKT

    USGS Publications Warehouse

    ,

    2003-01-01

    This map, compiled photogrammetrically from Viking Orbiter stereo image pairs, is part of a series of topographic maps of areas of special scientific interest on Mars. MTM 500k –15/337E OMKT: Abbreviation for Mars Transverse Mercator; 1:500,000 series; center of sheet latitude 15° S., longitude 337.5° E. in planetocentric coordinate system (this corresponds to –15/022; latitude 15° S., longitude 22.5° W. in planetographic coordinate system); orthophotomosaic (OM) with color coded (K) topographic contours and nomenclature (T) [Greeley and Batson, 1990]. The figure of Mars used for the computation of the map projection is an oblate spheroid (flattening of 1/176.875) with an equatorial radius of 3396.0 km and a polar radius of 3376.8 km (Kirk and others, 2000). The datum (the 0–km contour line) for elevations is defined as the equipotential surface (gravitational plus rotational) whose average value at the equator is equal to the mean radius as determined by Mars Orbiter Laser Altimeter (Smith and others, 2001). The image base for this map employs Viking Orbiter images from orbit 651. An orthophotomosaic was created on the digital photogrammetric workstation using the DTM compiled from stereo models. Integrated Software for Imagers and Spectrometers (ISIS) (Torson and Becker, 1997) provided the software to project the orthophotomosaic into the Transverse Mercator Projection.

  15. Z-Earth: 4D topography from space combining short-baseline stereo and lidar

    NASA Astrophysics Data System (ADS)

    Dewez, T. J.; Akkari, H.; Kaab, A. M.; Lamare, M. L.; Doyon, G.; Costeraste, J.

    2013-12-01

    The advent of the free-of-charge global topographic data sets SRTM and ASTER GDEM has enabled testing a host of geoscience hypotheses. Availability of such data is now considered standard, and though resolved at 30-m to 90-m pixel size, they are today regarded as obsolete and inappropriate given the regularly updated sub-meter imagery coming through web services like Google Earth. Two features will thus help meet the current topographic data needs of the geoscience communities: field-scale-compatible elevation datasets (i.e. meter-scale digital models and sub-meter elevation precision) and provision for regularly updated topography to tackle earth surface changes in 4D, while retaining the key for success: data availability at no charge. A new spaceborne instrumental concept called Z-Earth has undergone a phase 0 study at CNES, the French space agency, to fulfill these aims. The scientific communities backing this proposal are those of natural hazards, glaciology and biomass. The system under study combines a short-baseline native stereo imager and a lidar profiler. This combination provides spatially resolved elevation swaths together with absolute along-track elevation control point profiles. Acquisition is designed for a revisit time better than a year. Intended products not only target single-pass digital surface models, color orthoimages and small-footprint full-waveform lidar profiles to update existing topographic coverage, but also time series of them. 3D change detection targets centimetre-scale horizontal precision and metric vertical precision, complementing now-traditional spectral change detection. To assess the actual concept value, two real-size experiments were carried out. We used sub-meter-scale Pleiades panchromatic stereo-images to generate digital surface models and check them against dense airborne lidar coverages, one heliborne set purposely flown in Corsica (50-100 pts/sq.m) and a second one retrieved from OpenTopography.org (~10 pts/sq.m).
In Corsica, over a challenging 45-degree-grade tree-covered mountainside, the Pleiades 2-m-grid-posting digital surface model described the topography with a median error of -4.75 m ± 2.59 m (NMAD). A planimetric bias between the two datasets was found to be about 7 m to the south. This planimetric misregistration, though well within Pleiades specifications, partly explains the dramatic effect on elevation differences. In the Redmond area (eastern Oregon), a very gentle desert landscape, elevation differences also contained a vertical median bias of -4.02 m ± 1.22 m (NMAD), though here sub-pixel planimetric registration between the stereo DSM and the lidar coverage was enforced. This real-size experiment hints that sub-meter accuracy for a 2-m-grid-posting DSM is an achievable goal when combining stereo imaging and lidar.
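    The error statistics quoted above (a median bias plus an NMAD spread) are robust alternatives to mean ± standard deviation, resistant to the blunders common in DSM/lidar differencing. A minimal sketch of how such figures are computed, with synthetic arrays standing in for the real rasters:

```python
import numpy as np

def elevation_error_stats(dsm, lidar):
    """Median vertical error and NMAD (normalized median absolute deviation,
    1.4826 * MAD, a robust stand-in for the standard deviation) between a
    stereo DSM and a reference lidar surface."""
    dh = np.asarray(dsm, float) - np.asarray(lidar, float)
    dh = dh[np.isfinite(dh)]                      # drop nodata cells
    med = float(np.median(dh))
    nmad = float(1.4826 * np.median(np.abs(dh - med)))
    return med, nmad

# toy check: a DSM biased 4 m low with 1 m noise, plus a few gross blunders
rng = np.random.default_rng(0)
lidar = rng.uniform(0.0, 100.0, 1000)
dsm = lidar - 4.0 + rng.normal(0.0, 1.0, 1000)
dsm[:5] += 50.0                                   # outliers barely move the medians
med, nmad = elevation_error_stats(dsm, lidar)
print(round(med, 1), round(nmad, 1))
```

    The recovered statistics sit near the injected -4 m bias and 1 m noise level despite the blunders, which is exactly why median/NMAD reporting is preferred for such comparisons.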

  16. Viking High-Resolution Topography and Mars '01 Site Selection: Application to the White Rock Area

    NASA Astrophysics Data System (ADS)

    Tanaka, K. L.; Kirk, Randolph L.; Mackinnon, D. J.; Howington-Kraus, E.

    1999-06-01

    Definition of the local topography of the Mars '01 Lander site is crucial for assessment of lander safety and rover trafficability. According to Golombek et al., (1) steep surface slopes may cause retro-rockets to be fired too early or too late for a safe landing, (2) the landing-site slope needs to be < 1° to ensure lander stability, and (3) a nearly level site is better for power generation of both the lander and the rover and for rover trafficability. Presently available datasets are largely inadequate to determine surface slope at scales pertinent to landing-site issues. Ideally, a topographic model of the entire landing site at meter-scale resolution would permit the best assessment of the pertinent topographic issues. MOLA data, while providing highly accurate vertical measurements, are inadequate to address slopes along paths of less than several hundred meters, because of along-track data spacings of hundreds of meters and horizontal errors in positioning of 500 to 2000 m. The capability to produce stereotopography from MOC image pairs is not yet in hand, nor can we necessarily expect a suitable number of stereo image pairs to be acquired. However, for a limited number of sites, high-resolution Viking stereo imaging is available at tens of meters horizontal resolution, capable of covering landing-ellipse-sized areas. Although we would not necessarily suggest that the chosen Mars '01 Lander site should be located where good Viking stereotopography is available, an assessment of typical surface slopes at these scales for a range of surface types may be quite valuable in landing-site selection. Thus this study has a two-fold application: (1) to support the proposal of White Rock as a candidate Mars '01 Lander site, and (2) to evaluate how Viking high-resolution stereotopography may be of value in the overall Mars '01 Lander site selection process.

  17. Building Virtual Mars

    NASA Astrophysics Data System (ADS)

    Abercrombie, S. P.; Menzies, A.; Goddard, C.

    2017-12-01

    Virtual and augmented reality enable scientists to visualize environments that are very difficult, or even impossible to visit, such as the surface of Mars. A useful immersive visualization begins with a high quality reconstruction of the environment under study. This presentation will discuss a photogrammetry pipeline developed at the Jet Propulsion Laboratory to reconstruct 3D models of the surface of Mars using stereo images sent back to Earth by the Curiosity Mars rover. The resulting models are used to support a virtual reality tool (OnSight) that allows scientists and engineers to visualize the surface of Mars as if they were standing on the red planet. Images of Mars present challenges to existing scene reconstruction solutions. Surface images of Mars are sparse with minimal overlap, and are often taken from extremely different viewpoints. In addition, the specialized cameras used by Mars rovers are significantly different than consumer cameras, and GPS localization data is not available on Mars. This presentation will discuss scene reconstruction with an emphasis on coping with limited input data, and on creating models suitable for rendering in virtual reality at high frame rate.

  18. An improved three-dimension reconstruction method based on guided filter and Delaunay

    NASA Astrophysics Data System (ADS)

    Liu, Yilin; Su, Xiu; Liang, Haitao; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong

    2018-01-01

    Binocular stereo vision is becoming a research hotspot in the area of image processing. Based on the traditional adaptive-weight stereo matching algorithm, we improve the cost volume by averaging the AD (Absolute Difference) of the RGB color channels and adding the x-derivative of the grayscale image. Then we use a guided filter in the cost aggregation step and a weighted median filter for post-processing to address the edge problem. In order to get the location in real space, we combine the depth information with the camera calibration to project each pixel in the 2D image to a 3D coordinate matrix. We add the concept of projection to the region-growing algorithm for surface reconstruction: all points are projected to a 2D plane along the normals of the cloud, triangulated there, and the resulting connection relationships are mapped back to 3D space. For the triangulation in the 2D plane we use the Delaunay algorithm because it yields optimal mesh quality. We configure OpenCV and PCL on Visual Studio for testing, and the experimental results show that the proposed algorithm has higher computational accuracy of disparity and can reproduce the details of the real mesh model.
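    The matching pipeline described above (a truncated colour-AD plus gradient cost, guided-filter cost aggregation, winner-take-all disparity) can be sketched compactly. This is an illustrative reimplementation of the general technique, not the authors' code; all parameter values (truncation thresholds, blend weight, filter radius) are assumed:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box(img, r):
    # mean filter over a (2r+1) x (2r+1) window
    return uniform_filter(img, size=2 * r + 1, mode='nearest')

def guided_filter(I, p, r=4, eps=1e-3):
    """Edge-preserving smoothing of cost slice p, guided by grayscale image I."""
    mI, mp = box(I, r), box(p, r)
    a = (box(I * p, r) - mI * mp) / (box(I * I, r) - mI * mI + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

def disparity(left, right, max_d, alpha=0.6, tau_c=0.03, tau_g=0.01):
    """Winner-take-all disparity from a truncated AD + gradient cost volume,
    aggregated per disparity slice with a guided filter."""
    grayL, grayR = left.mean(2), right.mean(2)
    gxL, gxR = np.gradient(grayL, axis=1), np.gradient(grayR, axis=1)
    volume = np.empty((max_d + 1,) + grayL.shape)
    for d in range(max_d + 1):
        ad = np.minimum(np.abs(left - np.roll(right, d, axis=1)).mean(2), tau_c)
        grad = np.minimum(np.abs(gxL - np.roll(gxR, d, axis=1)), tau_g)
        volume[d] = guided_filter(grayL, alpha * ad + (1 - alpha) * grad)
    return volume.argmin(0)

# synthetic check: a smoothed random texture shifted by a constant 3 px
rng = np.random.default_rng(1)
right = uniform_filter(rng.random((40, 60, 3)), size=(3, 3, 1))
left = np.roll(right, 3, axis=1)
d = disparity(left, right, max_d=6)
print(np.median(d[:, 10:-10]))
```

    The weighted-median post-processing and the Delaunay surface-reconstruction stages of the paper are omitted here for brevity.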

  19. Accuracy assessment of ALOS optical instruments: PRISM and AVNIR-2

    NASA Astrophysics Data System (ADS)

    Tadono, Takeo; Shimada, Masanobu; Iwata, Takanori; Takaku, Junichi; Kawamoto, Sachi

    2017-11-01

    This paper describes the updated results of calibration and validation to assess the accuracies of the optical instruments onboard the Advanced Land Observing Satellite (ALOS, nicknamed "Daichi"), which was successfully launched on January 24th, 2006, and has been operating continuously and well. ALOS has an L-band Synthetic Aperture Radar called PALSAR and two optical instruments, i.e. the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) and the Advanced Visible and Near Infrared Radiometer type-2 (AVNIR-2). PRISM consists of three radiometers and is used to derive a digital surface model (DSM) with high spatial resolution, which is an objective of the ALOS mission. Therefore, geometric calibration is important in generating a precise DSM with stereo pair images of PRISM. AVNIR-2 has four radiometric bands from blue to near infrared and is used for regional environment and disaster monitoring. The radiometric calibration and image quality evaluation are also important for AVNIR-2 as well as PRISM. This paper describes updated results of geometric calibration, including geolocation determination accuracy evaluations of PRISM and AVNIR-2, image quality evaluation of PRISM, and validation of the generated PRISM DSM. This work will continue throughout the ALOS mission life as operational calibration to maintain the absolute accuracies of the standard products.

  20. Bathymetric mapping of submarine sand waves using multiangle sun glitter imagery: a case of the Taiwan Banks with ASTER stereo imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Hua-guo; Yang, Kang; Lou, Xiu-lin; Li, Dong-ling; Shi, Ai-qin; Fu, Bin

    2015-01-01

    Submarine sand waves are visible in optical sun glitter remote sensing images and multiangle observations can provide valuable information. We present a method for bathymetric mapping of submarine sand waves using multiangle sun glitter information from Advanced Spaceborne Thermal Emission and Reflection Radiometer stereo imagery. Based on a multiangle image geometry model and a sun glitter radiance transfer model, sea surface roughness is derived using multiangle sun glitter images. These results are then used for water depth inversions based on the Alpers-Hennings model, supported by a few true depth data points (sounding data). Case study results show that the inversion and true depths match well, with high-correlation coefficients and root-mean-square errors from 1.45 to 2.46 m, and relative errors from 5.48% to 8.12%. The proposed method has some advantages over previous methods in that it requires fewer true depth data points, it does not require environmental parameters or knowledge of sand-wave morphology, and it is relatively simple to operate. On this basis, we conclude that this method is effective in mapping submarine sand waves and we anticipate that it will also be applicable to other similar topography types.

  1. Kinder, gentler stereo

    NASA Astrophysics Data System (ADS)

    Siegel, Mel; Tobinaga, Yoshikazu; Akiya, Takeo

    1999-05-01

    Not only binocular perspective disparity, but also many secondary binocular and monocular sensory phenomena, contribute to the human sensation of depth. Binocular perspective disparity is notable as the strongest depth perception factor. However, means for creating it artificially from flat image pairs are notorious for inducing physical and mental stresses, e.g., 'virtual reality sickness'. Aiming to deliver a less stressful 'kinder gentler stereo (KGS)', we systematically examine the secondary phenomena and their synergistic combination with each other and with binocular perspective disparity. By KGS we mean a stereo capture, rendering, and display paradigm without cue conflicts, without eyewear, without viewing zones, with negligible 'lock-in' time to perceive the image in depth, and with a normal appearance for stereo-deficient viewers. To achieve KGS we employ optical and digital image processing steps that introduce distortions contrary to strict 'geometrical correctness' of binocular perspective but which nevertheless result in increased stereoscopic viewing comfort. We particularly exploit the lower limits of interocular separation, showing that unexpectedly small disparities stimulate accurate and pleasant depth sensations. Under these circumstances crosstalk is perceived as depth-of-focus rather than as ghosting. This suggests the possibility of radically new approaches to stereoview multiplexing that enable zoneless autostereoscopic display.

  2. Predictive Sea State Estimation for Automated Ride Control and Handling - PSSEARCH

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance L.; Howard, Andrew B.; Aghazarian, Hrand; Rankin, Arturo L.

    2012-01-01

    PSSEARCH provides predictive sea state estimation, coupled with closed-loop feedback control for automated ride control. It enables a manned or unmanned watercraft to determine the 3D map and sea state conditions in its vicinity in real time. Adaptive path-planning/replanning software and a control surface management system will then use this information to choose the best settings and heading relative to the seas for the watercraft. PSSEARCH looks ahead and anticipates the potential impact of waves on the boat and is used in a tight control loop to adjust trim tabs, course, and throttle settings. The software uses sensory inputs including IMU (Inertial Measurement Unit), stereo, radar, etc. to determine the sea state and wave conditions (wave height, frequency, wave direction) in the vicinity of a rapidly moving boat. This information can then be used to plot a safe path through the oncoming waves. The main issues in determining a safe path for sea surface navigation are: (1) deriving a 3D map of the surrounding environment, (2) extracting hazards and the sea surface state from the imaging sensors/map, and (3) planning a path and control surface settings that avoid the hazards, accomplish the mission navigation goals, and mitigate crew injuries from excessive heave, pitch, and roll accelerations while taking into account the dynamics of the sea surface state. The first part is solved using a wide baseline stereo system, where 3D structure is determined from two calibrated pairs of visual imagers. Once the 3D map is derived, anything above the sea surface is classified as a potential hazard and a surface analysis gives a static snapshot of the waves. Dynamics of the wave features are obtained from a frequency analysis of motion vectors derived from the orientation of the waves during a sequence of inputs. Fusion of the dynamic wave patterns with the 3D maps and the IMU outputs is used for efficient safe path planning.
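    The hazard-extraction step described above (anything above the sea surface is a potential hazard) can be illustrated with a least-squares plane fit to the stereo point cloud. This is a toy sketch of the general idea, not the PSSEARCH implementation; the clearance threshold and all data are assumed:

```python
import numpy as np

def flag_hazards(points, clearance=0.5):
    """Fit a plane z = ax + by + c to a stereo point cloud by least squares
    (a static sea-surface estimate), then flag points rising above it by
    more than `clearance` metres."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return points[:, 2] - A @ coeffs > clearance

# toy cloud: a flat, slightly noisy sea plus one object 2 m above it
rng = np.random.default_rng(3)
sea = np.c_[rng.uniform(0.0, 50.0, (200, 2)), rng.normal(0.0, 0.1, 200)]
cloud = np.vstack([sea, [[25.0, 25.0, 2.0]]])
print(flag_hazards(cloud).sum())   # 1 point flagged as a hazard
```

    A real system would fit the surface robustly and track it over time, but the above/below-surface classification is the same in spirit.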

  3. 3D visualization techniques for the STEREO-mission

    NASA Astrophysics Data System (ADS)

    Wiegelmann, T.; Podlipnik, B.; Inhester, B.; Feng, L.; Ruan, P.

    The forthcoming STEREO mission will observe the Sun from two different viewpoints. We expect about 2 GB of data per day, which calls for suitable data presentation techniques. A key feature of STEREO is that it will provide, for the first time, a 3D view of the Sun and the solar corona. In our normal environment we see objects three-dimensionally because the light from real 3D objects needs different travel times to reach our left and right eyes. As a consequence we see slightly different images with our two eyes, which gives us information about the depth of objects and a corresponding 3D impression. Techniques for the 3D visualization of scientific and other data on paper, TV, computer screen, cinema, etc. are well known, e.g., the two-colour anaglyph technique, shutter glasses, polarization filters and head-mounted displays. We discuss the advantages and disadvantages of these techniques and how they can be applied to STEREO data. The 3D visualization techniques are not limited to visual images but can also be used to show the reconstructed coronal magnetic field and the energy and helicity distribution. In the advent of STEREO we test the method with data from SOHO, which provides us different viewpoints through the solar rotation. This restricts the analysis to structures which remain stationary for several days; real STEREO data will not be affected by these limitations, however.

  4. Rock Moved by Mars Lander Arm

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The robotic arm on NASA's Phoenix Mars Lander slid a rock out of the way during the mission's 117th Martian day (Sept. 22, 2008) to gain access to soil that had been underneath the rock. The lander's Surface Stereo Imager took the two images for this stereo view later the same day, showing the rock, called 'Headless,' after the arm pushed it about 40 centimeters (16 inches) from its previous location.

    'The rock ended up exactly where we intended it to,' said Matt Robinson of NASA's Jet Propulsion Laboratory, robotic arm flight software lead for the Phoenix team.

    The arm had enlarged the trench near Headless two days earlier in preparation for sliding the rock into the trench. The trench was dug to about 3 centimeters (1.2 inches) deep. The ground surface between the rock's prior position and the lip of the trench had a slope of about 3 degrees downward toward the trench. Headless is about the size and shape of a VHS videotape.

    The Phoenix science team sought to move the rock in order to study the soil and the depth to subsurface ice underneath where the rock had been.

    This image was taken at about 12:30 p.m., local solar time on Mars. The view is to the north northeast of the lander.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by JPL, Pasadena, Calif. Spacecraft development was by Lockheed Martin Space Systems, Denver.

  5. Stereoscopic imaging of gravity waves in the mesosphere over Peru.

    NASA Astrophysics Data System (ADS)

    Moreels, G.; Faivre, M.; Clairemidi, J.; Meriwether, J. W.; Lehmacher, G. A.; Chau, J. L.; Vidal, E.; Veliz, O.

    A program of stereo-imaging of the mesospheric near-infrared emissive layer has recently been initiated using two CCD cameras operating in a vis-à-vis observation mode at a separation distance of ~550 km. These images were analyzed using a stereo-correlation method suitable for low-contrast objects without discrete contours. This approach consists of calculating a normalized cross-correlation parameter for the intensities of matched points. Initially the altitude of the layer is chosen to be between 82 and 92 km. The computer code calculates the altitude of the centroid of the emissive layer for each observed point and produces surface maps of the layer for 50 × 50 km² areas. In addition to results from the Peruvian observations, results of simultaneous observations obtained at the Pic du Midi (Pyrénées) and Château-Renard (Alpes) observatories will be presented. The surface maps are compared with coded maps of the emission intensity. Both types of maps show significant wave structures. The vertical amplitude of the waves is found to be typically between 1 and 2 km. The Fourier characteristics are measured using a Morlet-type wavelet generator function. The horizontal wavelengths in the meridional and zonal directions are ~20-40 km and 100-150 km, and the temporal periods are ~15-30 minutes. The same observational program was conducted in the Peruvian Andes in October 2005. The sites were the Cosmos Observatory (12°04' S, 75°34' W, altitude 4620 m) and the Cerro Verde Tellolo mountain, 16°33' S
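    The normalized cross-correlation parameter mentioned above is the standard gain- and offset-invariant patch similarity, which is what makes it usable on low-contrast airglow structures. A minimal sketch:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches: invariant to
    gain and offset, which suits low-contrast emissive-layer imagery."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# identical structure under a gain/offset change correlates perfectly
rng = np.random.default_rng(2)
p = rng.random((9, 9))
print(round(ncc(p, 2.5 * p + 10.0), 6))   # 1.0
print(round(ncc(p, -p), 6))               # -1.0
```

    In a stereo-correlation code this score would be maximized over trial layer altitudes, each altitude predicting a different patch correspondence between the two cameras.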

  6. Calibration of stereo rigs based on the backward projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin

    2016-08-01

    High-accuracy 3D measurement based on binocular vision system is heavily dependent on the accurate calibration of two rigidly-fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee the minimal 2D pixel errors, but not the minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera by planar constraints provided by the planar pattern target. Then combined with pre-defined spatial points, intrinsic and extrinsic parameters of the stereo-rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study for the method in the presence of image noise and lens distortions is implemented. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.

  7. Stereo images from space

    NASA Astrophysics Data System (ADS)

    Sabbatini, Massimo; Collon, Maximilien J.; Visentin, Gianfranco

    2008-02-01

    The Erasmus Recording Binocular (ERB1) was the first fully digital stereo camera used on the International Space Station. One year after its first utilisation, the results and feedback collected with various audiences have convinced us to continue exploiting the outreach potential of such media, with its unique capability to bring space down to earth, to share the feeling of weightlessness and confinement with the viewers on earth. The production of stereo is progressing quickly but it still poses problems for the distribution of the media. The Erasmus Centre of the European Space Agency has experienced how difficult it is to master the full production and distribution chain of a stereo system. Efforts are also under way to standardize the satellite broadcasting part of the distribution. A new stereo camera is being built, ERB2, to be launched to the International Space Station (ISS) in September 2008: it shall have 720p resolution, it shall be able to transmit its images to the ground in real time, allowing the production of live programs, and it could possibly be used also outside the ISS, in support of Extra Vehicular Activities of the astronauts. These new features are quite challenging to achieve in the reduced power and mass budget available to space projects, and we hope to inspire more designers to come up with ingenious ideas to build cameras capable of operating in the harsh Low Earth Orbit environment: radiation, temperature, power consumption and thermal design are the challenges to be met. The intent of this paper is to share with the readers the experience collected so far in all aspects of the 3D video production chain and to increase awareness of the unique content that we are collecting: nice stereo images from space can be used by all actors in the stereo arena to gain consensus on this powerful medium.
With respect to last year, we shall present the progress made in the following areas: a) live satellite broadcasting of stereo content to D-Cinemas in Europe; b) the design challenges of flying the camera outside the ISS, as opposed to ERB1, which was only meant to be used in the pressurized environment of the ISS; c) on-board stereo viewing, tackled in ERB1: trade-offs between OLED and LCOS display technologies shall be presented; d) HD-SDI cameras versus USB2 or FireWire; e) the hardware compression ASIC solutions used to handle the high on-board data rate; f) 3D geometry reconstruction: first attempts at reconstructing a computer model of the interior of the ISS starting from the available stereo video.

  8. Analysis of Low-Light and Night-Time Stereo-Pair Images for Photogrammetric Reconstruction

    NASA Astrophysics Data System (ADS)

    Santise, M.; Thoeni, K.; Roncella, R.; Diotri, F.; Giacomini, A.

    2018-05-01

    Rockfalls and rockslides represent a significant risk to human lives and infrastructure because of the high levels of energy involved in these phenomena. Generally, these events occur under specific environmental conditions, such as temperature variations between day and night, that can contribute to the triggering of structural instabilities in the rock wall and the detachment of blocks and debris. Monitoring and geostructural characterization of the wall are required to reduce the potential hazard and to improve the management of risk at the bottom of the slopes affected by such phenomena. In this context, close-range photogrammetry is widely used for the monitoring of high-mountain terrains and rock walls in mine sites, allowing for periodic surveys of rockfalls and wall movements. This work focuses on the analysis of low-light and night-time images from a fixed-base stereo-pair photogrammetry system. The aim is to study the reliability of images acquired during the night for producing digital surface models (DSMs) for change detection. The images are captured by a high-sensitivity DSLR camera using various settings accounting for different values of ISO, aperture and exposure time. For each acquisition, the DSM is compared to a photogrammetric reference model produced from images captured in optimal illumination conditions. Results show that, with a high ISO level and maintaining the same aperture, extending the exposure time improves the quality of the point clouds in terms of completeness and accuracy of the photogrammetric models.

  9. a Robust Method for Stereo Visual Odometry Based on Multiple Euclidean Distance Constraint and Ransac Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.

    2017-07-01

    Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps of robotic motion estimation, largely influencing its precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on the Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find correspondences between the two images for space intersection. Then the EDC and RANSAC algorithms are applied to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the feature points of the next left image are matched against the current left image, EDC and RANSAC are performed again. Since exceptional mismatched points may still remain in some cases, a third RANSAC pass is applied to eliminate the effects of those outliers on the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset and the results benefit from its high robustness.
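The Euclidean Distance Constraint idea in this abstract can be sketched as follows: for a rigid scene, the distance between two feature points should be nearly the same in both images of a matched set, so a match that violates this for most partners is likely an outlier. The voting scheme and thresholds below are illustrative assumptions, not the paper's implementation.

```python
import math

def edc_filter(pts_a, pts_b, threshold=5.0, min_support=0.5):
    """Keep match i only if, for at least `min_support` of the other
    matches j, |dist(a_i, a_j) - dist(b_i, b_j)| <= threshold."""
    n = len(pts_a)
    keep = []
    for i in range(n):
        votes = 0
        for j in range(n):
            if i == j:
                continue
            da = math.dist(pts_a[i], pts_a[j])
            db = math.dist(pts_b[i], pts_b[j])
            if abs(da - db) <= threshold:
                votes += 1
        if n > 1 and votes / (n - 1) >= min_support:
            keep.append(i)
    return keep

# Four consistent matches (a pure translation) plus one gross mismatch (index 4):
left  = [(0, 0), (10, 0), (10, 10), (0, 10), (5, 5)]
right = [(2, 1), (12, 1), (12, 11), (2, 11), (40, 40)]
print(edc_filter(left, right))   # -> [0, 1, 2, 3]
```

In a full pipeline this filter would run before RANSAC, cheaply removing gross mismatches so that the model-fitting step works on a cleaner set.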

  10. Global lunar-surface mapping experiment using the Lunar Imager/Spectrometer on SELENE

    NASA Astrophysics Data System (ADS)

    Haruyama, Junichi; Matsunaga, Tsuneo; Ohtake, Makiko; Morota, Tomokatsu; Honda, Chikatoshi; Yokota, Yasuhiro; Torii, Masaya; Ogawa, Yoshiko

    2008-04-01

    The Moon is the nearest celestial body to the Earth. Understanding the Moon is the most important issue confronting geosciences and planetary sciences. Japan will launch the lunar polar orbiter SELENE (Kaguya) (Kato et al., 2007) in 2007 as the first mission of the Japanese long-term lunar exploration program and acquire data for scientific knowledge and possible utilization of the Moon. An optical sensing instrument called the Lunar Imager/Spectrometer (LISM) is loaded on SELENE. The LISM requirements for the SELENE project are intended to provide high-resolution digital imagery and spectroscopic data for the entire lunar surface, acquiring these data for scientific knowledge and possible utilization of the Moon. Actually, LISM was designed to include three specialized sub-instruments: a terrain camera (TC), a multi-band imager (MI), and a spectral profiler (SP). The TC is a high-resolution stereo camera with 10-m spatial resolution from a SELENE nominal altitude of 100 km and a stereo angle of 30° to provide stereo pairs from which digital terrain models (DTMs) with a height resolution of 20 m or better will be produced. The MI is a multi-spectral imager with four and five color bands with 20 m and 60 m spatial resolution in visible and near-infrared ranges, which will provide data to be used to distinguish the geological units in detail. The SP is a line spectral profiler with a 400-m-wide footprint and 300 spectral bands with 6-8 nm spectral resolution in the visible to near-infrared ranges. The SP data will be sufficiently powerful to identify the lunar surface's mineral composition. Moreover, LISM will provide data with a spatial resolution, signal-to-noise ratio, and covered spectral range superior to that of past Earth-based and spacecraft-based observations. In addition to the hardware instrumentation, we have studied operation plans for global data acquisition within the limited total data volume allotment per day. 
Results show that the TC and MI can achieve global observations within the restrictions by sharing the TC and MI observation periods, adopting appropriate data compression, and executing necessary SELENE orbital plane change operations to ensure global coverage by MI. Pre-launch operation planning has resulted in possible global TC high-contrast imagery, TC stereoscopic imagery, and MI 9-band imagery in one nominal mission period. The SP will also acquire spectral line profiling data for nearly the entire lunar surface. The east-west interval of the SP strip data will be 3-4 km at the equator by the end of the mission and shorter at higher latitudes. We have proposed execution of SELENE roll cant operations three times during the nominal mission period to execute calibration site observations, and have reached agreement on this matter with the SELENE project. We present LISM global surface mapping experiments for instrumentation and operation plans. The ground processing systems and the data release plan for LISM data are discussed briefly.

  11. Resolution enhancement of tri-stereo remote sensing images by super resolution methods

    NASA Astrophysics Data System (ADS)

    Tuna, Caglayan; Akoguz, Alper; Unal, Gozde; Sertel, Elif

    2016-10-01

    Super resolution (SR) refers to generation of a High Resolution (HR) image from a decimated, blurred, low-resolution (LR) image set, which can be either a single frame or multi-frame that contains a collection of several images acquired from slightly different views of the same observation area. In this study, we propose a novel application of tri-stereo Remote Sensing (RS) satellite images to the super resolution problem. Since the tri-stereo RS images of the same observation area are acquired from three different viewing angles along the flight path of the satellite, these RS images are properly suited to a SR application. We first estimate registration between the chosen reference LR image and other LR images to calculate the sub-pixel shifts among the LR images. Then, the warping, blurring and downsampling matrix operators are created as sparse matrices to avoid high memory and computational requirements, which would otherwise make the RS-SR solution impractical. Finally, the overall system matrix, which is constructed from the obtained operator matrices, is used to obtain the estimated HR image in one step in each iteration of the SR algorithm. Both the Laplacian and total variation regularizers are incorporated separately into our algorithm, and the results are presented to demonstrate an improved quantitative performance against the standard interpolation method as well as improved qualitative results according to expert evaluations.
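The linear forward model behind such multi-frame SR formulations, y_k = D B W_k x (warp, blur, downsample), can be sketched in miniature. The example below is a 1D toy with dense matrices and a plain Landweber iteration; the real system uses sparse 2D operators and regularization, and all sizes here are illustrative.

```python
import numpy as np

hr = 8          # high-resolution signal length (1D for clarity)
lr = 4          # low-resolution length, decimation factor 2

# Downsampling operator D: keep every 2nd sample.
D = np.zeros((lr, hr))
for i in range(lr):
    D[i, 2 * i] = 1.0

# Blur operator B: simple 3-tap moving average with clamped borders.
B = np.zeros((hr, hr))
for i in range(hr):
    for di in (-1, 0, 1):
        B[i, min(max(i + di, 0), hr - 1)] += 1.0 / 3.0

x_true = np.linspace(0.0, 1.0, hr)       # "ground-truth" HR signal
A = D @ B                                # overall system matrix (warp = identity here)
y = A @ x_true                           # simulated LR observation

# Landweber iterations: x <- x + step * A^T (y - A x)
x = np.zeros(hr)
for _ in range(500):
    x += 0.5 * A.T @ (y - A @ x)

print(np.allclose(A @ x, y, atol=1e-6))  # the estimate reproduces the observation
```

Because the problem is underdetermined (4 observations, 8 unknowns), the iteration only pins down the component of x in the row space of A; this is exactly why the paper adds Laplacian or total-variation regularizers.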

  12. Stereo Navi 2.0: software for stereotaxic surgery of the common marmoset (Callithrix jacchus).

    PubMed

    Tokuno, Hironobu; Tanaka, Ikuko; Umitsu, Yoshitomo; Nakamura, Yasuhisa

    2009-11-01

    Recently, we reported our web-accessible digital brain atlas of the common marmoset (Callithrix jacchus) at http://marmoset-brain.org:2008. Using digital images obtained during construction of this website, we developed stand-alone software for navigation of electrodes or injection needles for stereotaxic electrophysiological or anatomical experiments in vivo. This software enables us to draw lines on exchangeable section images, measure the length and angle of lines, superimpose a stereotaxic reference grid on the image, and send the image to the system clipboard. The software, Stereo Navi 2.0, is freely available at our brain atlas website.

  13. When Dijkstra Meets Vanishing Point: A Stereo Vision Approach for Road Detection.

    PubMed

    Zhang, Yigong; Su, Yingna; Yang, Jian; Ponce, Jean; Kong, Hui

    2018-05-01

    In this paper, we propose a vanishing-point constrained Dijkstra road model for road detection in a stereo-vision paradigm. First, the stereo camera is used to generate the u- and v-disparity maps of the road image, from which the horizon can be extracted. With the horizon and ground-region constraints, we can robustly locate the vanishing point of the road region. Second, a weighted graph is constructed using all pixels of the image, and the detected vanishing point is treated as the source node of the graph. By computing a vanishing-point constrained Dijkstra minimum-cost map, where both the disparity and the gradient of the gray image are used to calculate the cost between two neighboring pixels, the problem of detecting road borders in the image is transformed into that of finding two shortest paths that originate from the vanishing point and end at two pixels in the last row of the image. The proposed approach has been implemented and tested over 2600 grayscale images of different road scenes in the KITTI data set. The experimental results demonstrate that this training-free approach can detect the horizon, vanishing point, and road regions very accurately and robustly. It can achieve promising performance.
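The Dijkstra step in this abstract can be sketched as a minimum-cost path search on a pixel grid from a source node (the vanishing point) to a pixel in the last row. The cost map below is arbitrary; the paper derives costs from disparity and gray-level gradients.

```python
import heapq

def dijkstra_path(cost, src, dst):
    """Minimum-cost 4-connected path on a 2D grid; `cost[r][c]` is the
    price of stepping onto pixel (r, c). Returns the path as a list."""
    rows, cols = len(cost), len(cost[0])
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# A cheap "road border" column (cost 1) inside an expensive grid (cost 9):
grid = [[9, 1, 9],
        [9, 1, 9],
        [9, 1, 9]]
print(dijkstra_path(grid, (0, 1), (2, 1)))  # -> [(0, 1), (1, 1), (2, 1)]
```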

  14. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated with the images of the stereo pairs, and the second one between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016
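The consistency-graph formulation in this abstract can be sketched directly: each vertex is a candidate match, edges join mutually consistent matches, and the accepted set is a maximum-weight clique. The brute-force subset search below is only for illustration on tiny inputs; the paper relies on a fast dedicated matcher.

```python
from itertools import combinations

def max_weight_clique(weights, edges):
    """weights: {vertex: weight}; edges: set of frozenset pairs marking
    mutually consistent matches. Returns the best clique as a sorted list."""
    verts = list(weights)
    best, best_w = [], 0.0
    for r in range(1, len(verts) + 1):
        for subset in combinations(verts, r):
            # A clique requires every pair in the subset to be consistent.
            if all(frozenset(p) in edges for p in combinations(subset, 2)):
                w = sum(weights[v] for v in subset)
                if w > best_w:
                    best, best_w = sorted(subset), w
    return best

# Matches 0-2 are pairwise consistent; match 3 conflicts with all of them.
weights = {0: 1.0, 1: 2.0, 2: 1.5, 3: 3.0}
edges = {frozenset(p) for p in [(0, 1), (0, 2), (1, 2)]}
print(max_weight_clique(weights, edges))  # -> [0, 1, 2]
```

Note that the heavy single match 3 (weight 3.0) loses to the consistent trio (total weight 4.5), which is the point of maximizing clique weight rather than individual match scores.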

  15. HRSCview: a web-based data exploration system for the Mars Express HRSC instrument

    NASA Astrophysics Data System (ADS)

    Michael, G.; Walter, S.; Neukum, G.

    2007-08-01

    The High Resolution Stereo Camera (HRSC) on the ESA Mars Express spacecraft has been orbiting Mars since January 2004. By spring 2007 it had returned around 2 terabytes of image data, covering around 35% of the Martian surface in stereo and colour at a resolution of 10-20 m/pixel. HRSCview provides a rapid means to explore these images up to their full resolution, with the data-subsetting, sub-sampling, stretching and compositing being carried out on-the-fly by the image server. It is a joint website of the Free University of Berlin and the German Aerospace Center (DLR). The system operates by on-the-fly processing of the six HRSC level-4 image products: the map-projected ortho-rectified nadir panchromatic and four colour channels, and the stereo-derived DTM (digital terrain model). The user generates a request via the web page for an image with several parameters: the centre of the view in surface coordinates, the image resolution in metres/pixel, the image dimensions, and one of several colour modes. If there is HRSC coverage at the given location, the necessary segments are extracted from the full orbit images, resampled to the required resolution, and composited according to the user's choice. In all modes the nadir channel, which has the highest resolution, is included in the composite so that the maximum detail is always retained. The images are stretched according to the current view: this applies to the elevation colour scale, as well as the nadir brightness and the colour channels. There are modes for raw colour, stretched colour, enhanced colour (exaggerated colour differences), and a synthetic 'Mars-like' colour stretch. A colour ratio mode is given as an alternative way to examine colour differences (R=IR/R, G=R/G and B=G/B). The final image is packaged as a JPEG file and returned to the user over the web. Each request requires approximately 1 second to process.
A link is provided from each view to a data product page, where header items describing the full map-projected science data product are displayed, and a direct link to the archived data products on the ESA Planetary Science Archive (PSA) is provided. At present the majority of the elevation composites are derived from the HRSC Preliminary 200m DTMs generated at the German Aerospace Center (DLR), which will not be available as separately downloadable data products. These DTMs are being progressively superseded by systematically generated higher resolution archival DTMs, also from DLR, which will become available for download through the PSA, and be similarly accessible via HRSCview. At the time of writing this abstract (May 2007), four such high resolution DTMs are available for download via the HRSCview data product pages (for images from orbits 0572, 0905, 1004, and 2039).
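The colour-ratio mode mentioned in this record (R=IR/R, G=R/G, B=G/B) can be illustrated in a few lines: ratio coding cancels overall brightness, leaving only the relative colour differences. The channel values below are made up for illustration.

```python
def ratio_composite(ir, r, g, b, eps=1e-6):
    """Map four channel values to a ratio-coded (R, G, B) triple.

    A small eps guards against division by zero in dark pixels.
    """
    return (ir / (r + eps), r / (g + eps), g / (b + eps))

# A pure brightness change (all channels scaled together) leaves the
# ratio composite essentially unchanged:
dim    = ratio_composite(0.2, 0.1, 0.1, 0.05)
bright = ratio_composite(0.8, 0.4, 0.4, 0.2)
print(all(abs(x - y) < 1e-3 for x, y in zip(dim, bright)))  # -> True
```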

  16. Nighttime Clouds in Martian Arctic (Accelerated Movie)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    An angry looking sky is captured in a movie clip consisting of 10 frames taken by the Surface Stereo Imager on NASA's Phoenix Mars Lander.

    The clip accelerates the motion. The images were taken around 3 a.m. local solar time at the Phoenix site during Sol 95 (Aug. 30), the 95th Martian day since landing.

    The swirling clouds may be moving generally in a westward direction over the lander.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  17. A Review of Digital Image Correlation Applied to Structural Dynamics

    NASA Astrophysics Data System (ADS)

    Niezrecki, Christopher; Avitabile, Peter; Warren, Christopher; Pingle, Pawan; Helfrick, Mark

    2010-05-01

    A significant amount of interest exists in performing non-contacting, full-field surface velocity measurement. For many years traditional non-contacting surface velocity measurements have been made by using scanning Doppler laser vibrometry, shearography, pulsed laser interferometry, pulsed holography, or an electronic speckle pattern interferometer (ESPI). Three dimensional (3D) digital image correlation (DIC) methods utilize the alignment of a stereo pair of images to obtain full-field geometry data, in three dimensions. Information about the change in geometry of an object over time can be found by comparing a sequence of images and virtual strain gages (or position sensors) can be created over the entire visible surface of the object of interest. Digital imaging techniques were first developed in the 1980s but the technology has only recently been exploited in industry and research due to the advances of digital cameras and personal computers. The use of DIC for structural dynamic measurement has only very recently been investigated. Within this paper, the advantages and limits of using DIC for dynamic measurement are reviewed. Several examples of using DIC for dynamic measurement are presented on several vibrating and rotating structures.

  18. Parallel-Processing Software for Correlating Stereo Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

    2007-01-01

    A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
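The tiling strategy described in this record can be sketched as follows: split the scene into subimages and hand each to a worker. A thread pool stands in for the multi-CPU assignment, and the per-tile "correlation" is a placeholder; tile sizes are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def tile(rows, cols, tile_rows, tile_cols):
    """Yield (row_start, row_end, col_start, col_end) tiles covering the image."""
    for r in range(0, rows, tile_rows):
        for c in range(0, cols, tile_cols):
            yield (r, min(r + tile_rows, rows), c, min(c + tile_cols, cols))

def correlate_tile(bounds):
    # Placeholder for the per-subimage stereo correlation work.
    r0, r1, c0, c1 = bounds
    return (r1 - r0) * (c1 - c0)   # e.g. number of pixels processed

tiles = list(tile(480, 640, 240, 320))
with ThreadPoolExecutor(max_workers=4) as pool:
    processed = sum(pool.map(correlate_tile, tiles))

print(len(tiles), processed)   # -> 4 307200
```

In practice each tile needs a margin overlapping its neighbours so that correlation windows near tile borders still see their full support region.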

  19. Flowfield Characteristics on a Retreating Rotor Blade

    DTIC Science & Technology

    2015-12-03

    This project attempted to make yaw corrections to 2-dimensional airfoil aerodynamics, using stereo particle image velocimetry on a 2-bladed rotor at advance ratios of 0.7, 0.85 and 1.0.

  20. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    PubMed Central

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm. PMID:22319408
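The final triangulation step in this abstract reduces, after rectification, to the standard pinhole relation Z = f·B/d. The sketch below shows that relation with made-up camera parameters, not the paper's actual rig.

```python
def triangulate(u_left, u_right, v, focal_px, baseline_mm, cx, cy):
    """Recover (X, Y, Z) in mm from a rectified stereo correspondence.

    Z = f * B / d, with d the horizontal disparity in pixels; X and Y
    then follow from the pinhole model of the left camera.
    """
    d = u_left - u_right
    if d <= 0:
        raise ValueError("non-positive disparity")
    z = focal_px * baseline_mm / d
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z

# 1000 px focal length, 100 mm baseline, 20 px disparity -> 5000 mm depth.
X, Y, Z = triangulate(340, 320, 240, 1000.0, 100.0, cx=320, cy=240)
print(X, Y, Z)   # -> 100.0 0.0 5000.0
```

Note the inverse relation between disparity and depth: halving the disparity doubles the estimated depth, which is why distant points are measured less precisely.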

Top