Stereo depth distortions in teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Vonsydow, Marika
1988-01-01
In teleoperation, a typical application of stereo vision is to view a work space located a short distance (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors on the order of 2 cm are measured. A geometric analysis was made of the distortion of the fronto-parallel plane for stereo TV viewing, and the results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration giving high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions but cause greater depth distortions. Thus, with larger intercamera distances, operators will make greater depth errors (because of the greater distortion) but will be more confident in those erroneous judgments (because of the higher resolution).
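The resolution side of this tradeoff follows from the standard stereo relation, in which the smallest resolvable depth change for a one-pixel disparity step grows with the square of the viewing distance and shrinks with baseline and focal length. A minimal Python sketch: the 1.4 m viewing distance mirrors the experiments, but the baselines and the focal length in pixels are illustrative assumptions, and the parallel-camera formula is only an approximation near the convergence point, not the paper's converged-geometry analysis.

```python
def depth_resolution(baseline_m, distance_m, focal_px):
    """Smallest resolvable depth change (m) for a one-pixel
    disparity step, using the standard stereo relation
    dZ = Z^2 / (f * b)."""
    return distance_m ** 2 / (focal_px * baseline_m)

# Illustrative numbers only (not from the paper): 1.4 m viewing
# distance, hypothetical 1500 px focal length, two baselines.
narrow = depth_resolution(0.1, 1.4, 1500.0)   # 10 cm intercamera distance
wide = depth_resolution(0.3, 1.4, 1500.0)     # 30 cm intercamera distance
# Larger intercamera distance -> finer (smaller) depth resolution step.
assert wide < narrow
```

This captures only the resolution half of the paper's finding; the accompanying depth distortion of the fronto-parallel plane requires the full converged-camera geometry.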
NASA Astrophysics Data System (ADS)
Chi, Yuxi; Yu, Liping; Pan, Bing
2018-05-01
A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1989-01-01
A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.
The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover
NASA Astrophysics Data System (ADS)
Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.
The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with a 65° field-of-view (1.1 mrad/pixel) and high-resolution (85 µrad/pixel) monoscopic "zoom" images with a 5° field-of-view. The stereo Wide Angle Cameras (WACs) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission, as well as to provide multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements, and observations of retrieved subsurface samples before their ingestion into the rest of the Pasteur payload. Additionally, the High Resolution Camera (HRC) can be used for high-resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.
NASA Astrophysics Data System (ADS)
Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.
1990-10-01
Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and a wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2017-08-01
Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
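The crosstalk-corrected channel separation described above can be sketched as a linear unmixing problem. This is a simplified illustration, not the authors' calibrated method: the 2x2 mixing matrix below is invented, and a real correction would estimate it from calibration images.

```python
import numpy as np

def separate_channels(rgb, crosstalk):
    """Undo linear color crosstalk between the red and blue channels
    of a pseudo-stereo color image. rgb: H x W x 3 array;
    crosstalk: 2x2 matrix M with [obs_r, obs_b] = M @ [true_r, true_b]
    per pixel (an assumed model, not the paper's calibrated values)."""
    obs = np.stack([rgb[..., 0], rgb[..., 2]], axis=-1)  # observed R, B
    true = obs @ np.linalg.inv(crosstalk).T              # invert the mixing
    return true[..., 0], true[..., 1]                    # the two stereo views

# Synthetic check: mix two known views, then unmix them.
r = np.random.default_rng(0).random((4, 4))
b = np.random.default_rng(1).random((4, 4))
M = np.array([[1.0, 0.1], [0.05, 1.0]])   # invented 10% B->R, 5% R->B leakage
mixed = np.zeros((4, 4, 3))
mixed[..., 0] = M[0, 0] * r + M[0, 1] * b
mixed[..., 2] = M[1, 0] * r + M[1, 1] * b
rec_r, rec_b = separate_channels(mixed, M)
assert np.allclose(rec_r, r) and np.allclose(rec_b, b)
```

The recovered channel sub-images would then feed the regular stereo-DIC pipeline as the two views.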
NASA Technical Reports Server (NTRS)
Ivanov, Anton B.
2003-01-01
The Mars Orbiter Camera (MOC) has been operating on board the Mars Global Surveyor (MGS) spacecraft since 1998. It consists of three cameras: Red and Blue Wide Angle cameras (FOV = 140 deg) and a Narrow Angle camera (FOV = 0.44 deg). The Wide Angle cameras allow surface resolution down to 230 m/pixel, and the Narrow Angle camera down to 1.5 m/pixel. This work is a continuation of the project we have reported on previously. Since then we have refined and improved our stereo correlation algorithm and have processed many more stereo pairs. We will discuss results of our analysis of stereo pairs located at the Mars Exploration Rover (MER) landing sites and address the feasibility of recovering topography from stereo pairs (especially in the polar regions) taken during the MGS 'Relay-16' mode.
The High Resolution Stereo Camera (HRSC): 10 Years of Imaging Mars
NASA Astrophysics Data System (ADS)
Jaumann, R.; Neukum, G.; Tirsch, D.; Hoffmann, H.
2014-04-01
The HRSC Experiment: Imagery is the major source for our current understanding of the geologic evolution of Mars in qualitative and quantitative terms. Imaging is required to enhance our knowledge of Mars with respect to geological processes occurring on local, regional and global scales and is an essential prerequisite for detailed surface exploration. The High Resolution Stereo Camera (HRSC) of ESA's Mars Express Mission (MEx) is designed to simultaneously map the morphology, topography, structure and geologic context of the surface of Mars as well as atmospheric phenomena [1]. The HRSC directly addresses two of the main scientific goals of the Mars Express mission: (1) high-resolution three-dimensional photogeologic surface exploration and (2) the investigation of surface-atmosphere interactions over time; and significantly supports: (3) the study of atmospheric phenomena by multi-angle coverage and limb sounding as well as (4) multispectral mapping by providing high-resolution three-dimensional color context information. In addition, the stereoscopic imagery will especially characterize landing sites and their geologic context [1]. The HRSC surface resolution and the digital terrain models bridge the gap in scales between the highest-ground-resolution images (e.g., HiRISE) and global coverage observations (e.g., Viking). This is also the case with respect to DTMs (e.g., MOLA and local high-resolution DTMs). HRSC is also used as a cartographic basis to correlate between panchromatic and multispectral stereo data. The unique multi-angle imaging technique of the HRSC supports its stereo capability by providing not only a stereo triplet but also a stereo quintuplet, making the photogrammetric processing very robust [1, 3]. The capabilities for three-dimensional orbital reconnaissance of the Martian surface are ideally met by HRSC, making this camera unique in the international Mars exploration effort.
Mars Exploration Rover engineering cameras
Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.
2003-01-01
NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair, each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are body-mounted, front- and rear-facing sets of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ~4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.
Detecting personnel around UGVs using stereo vision
NASA Astrophysics Data System (ADS)
Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.
2008-04-01
Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.
A system for extracting 3-dimensional measurements from a stereo pair of TV cameras
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.; Cunningham, R.
1976-01-01
Obtaining accurate three-dimensional (3-D) measurements from a stereo pair of TV cameras is a task requiring camera modeling, calibration, and the matching of the two images of a real 3-D point on the two TV pictures. A system which models and calibrates the cameras and pairs the two images of a real-world point in the two pictures, either manually or automatically, was implemented. This system is operational and provides a three-dimensional measurement resolution of + or - mm at distances of about 2 m.
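Once the two images of a point are paired, the 3-D measurement reduces to triangulating the two matched camera rays. A minimal midpoint-triangulation sketch; the camera positions and the 2 m target distance are illustrative assumptions, not the system's calibrated model:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: given camera centers c1, c2 and ray
    directions d1, d2 toward the same 3-D point, return the point
    halfway between the two rays at their closest approach."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for ray parameters (t1, t2) minimizing |c1+t1*d1 - (c2+t2*d2)|
    A = np.stack([d1, -d2], axis=1)
    t1, t2 = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    p1, p2 = c1 + t1 * d1, c2 + t2 * d2
    return 0.5 * (p1 + p2)

# Two cameras 0.2 m apart, both seeing a point 2 m ahead (invented numbers).
target = np.array([0.0, 0.0, 2.0])
c1, c2 = np.array([-0.1, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])
p = triangulate_midpoint(c1, target - c1, c2, target - c2)
assert np.allclose(p, target, atol=1e-9)
```

With noise-free rays the two closest points coincide; with real matching error, the midpoint gives a least-squares compromise between the two rays.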
NASA Astrophysics Data System (ADS)
Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus
2017-11-01
At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR), the High Resolution Stereo Camera (HRSC) has been designed for international missions to planet Mars. For more than three years an airborne version of this camera, the HRSC-A, has been successfully applied in many flight campaigns and in a variety of different applications. It combines 3D capabilities and high resolution with multispectral data acquisition. Variable resolutions can be generated, depending on the camera control settings. A high-end GPS/INS system in combination with the multi-angle image information yields precise, high-frequency orientation data for the acquired image lines. In order to handle these data, a completely automated photogrammetric processing system has been developed, which allows the generation of multispectral 3D image products for large areas with planimetric and height accuracies in the decimeter range. This accuracy has been confirmed by detailed investigations.
NASA Astrophysics Data System (ADS)
Beyer, Ross A.; Archinal, B.; Li, R.; Mattson, S.; Moratto, Z.; McEwen, A.; Oberst, J.; Robinson, M.
2009-09-01
The Lunar Reconnaissance Orbiter Camera (LROC) will obtain two types of multiple overlapping coverage to derive terrain models of the lunar surface. LROC has two Narrow Angle Cameras (NACs), working jointly to provide a wider (in the cross-track direction) field of view, as well as a Wide Angle Camera (WAC). LRO's orbit precesses, and the same target can be viewed at different solar azimuth and incidence angles providing the opportunity to acquire `photometric stereo' in addition to traditional `geometric stereo' data. Geometric stereo refers to images acquired by LROC with two observations at different times. They must have different emission angles to provide a stereo convergence angle such that the resultant images have enough parallax for a reasonable stereo solution. The lighting at the target must not be radically different. If shadows move substantially between observations, it is very difficult to correlate the images. The majority of NAC geometric stereo will be acquired with one nadir and one off-pointed image (20 degree roll). Alternatively, pairs can be obtained with two spacecraft rolls (one to the left and one to the right) providing a stereo convergence angle up to 40 degrees. Overlapping WAC images from adjacent orbits can be used to generate topography of near-global coverage at kilometer-scale effective spatial resolution. Photometric stereo refers to multiple-look observations of the same target under different lighting conditions. LROC will acquire at least three (ideally five) observations of a target. These observations should have near identical emission angles, but with varying solar azimuth and incidence angles. These types of images can be processed via various methods to derive single pixel resolution topography and surface albedo. The LROC team will produce some topographic models, but stereo data collection is focused on acquiring the highest quality data so that such models can be generated later.
3D digital image correlation using single color camera pseudo-stereo system
NASA Astrophysics Data System (ADS)
Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang
2017-10-01
Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single-camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip, without reducing the spatial resolution. In addition, as in a conventional 3D-DIC system, the center of the two views lies at the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two overlapped views in the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
Quasi-microscope concept for planetary missions.
Huck, F O; Arvidson, R E; Burcher, E E; Giat, O; Wall, S D
1977-09-01
Viking lander cameras have returned stereo and multispectral views of the Martian surface with a resolution that approaches 2 mm/lp in the near field. A two-orders-of-magnitude increase in resolution could be obtained for collected surface samples by augmenting these cameras with auxiliary optics that would neither impose special camera design requirements nor limit the cameras' field of view of the terrain. Quasi-microscope images would provide valuable data on the physical and chemical characteristics of planetary regoliths.
Topview stereo: combining vehicle-mounted wide-angle cameras to a distance sensor array
NASA Astrophysics Data System (ADS)
Houben, Sebastian
2015-03-01
The variety of vehicle-mounted sensors needed to fulfill a growing number of driver-assistance tasks has become a substantial factor in automobile manufacturing cost. We present a stereo distance method exploiting the overlapping fields of view of a multi-camera fisheye surround-view system, such as those used for near-range vehicle surveillance tasks, e.g. in parking maneuvers. Hence, we aim at creating a new input signal from sensors that are already installed. Particular properties of wide-angle cameras (e.g. changing resolution across the image) demand an adaptation of the image processing pipeline to several problems that do not arise in classical stereo vision performed with cameras carefully designed for this purpose. We introduce the algorithms for rectification, correspondence analysis, and regularization of the disparity image, discuss reasons for and avoidance of the caveats shown, and present first results on a prototype topview setup.
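The correspondence-analysis step on rectified images is classically done by block matching along image rows. A toy winner-take-all SAD (sum of absolute differences) sketch; the image sizes, window, and disparity range are arbitrary, and the paper's fisheye-specific rectification and disparity regularization are omitted:

```python
import numpy as np

def sad_disparity(left, right, max_disp, win=1):
    """Winner-take-all disparity by SAD block matching along
    rectified rows (correspondence analysis only; no regularization)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(patch - right[y - win:y + win + 1,
                                          x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic rectified pair: the right image is the left shifted by 3 px,
# so every computed pixel should get disparity 3.
rng = np.random.default_rng(2)
left = rng.random((12, 24))
right = np.roll(left, -3, axis=1)
disp = sad_disparity(left, right, max_disp=5)
assert np.all(disp[1:11, 6:23] == 3)
```

A production pipeline would replace the raw argmin with a regularized (e.g. semi-global) optimization to suppress outliers, which is the third stage the abstract lists.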
The HRSC Experiment on Mars Express: First Imaging Results from the Commissioning Phase
NASA Astrophysics Data System (ADS)
Oberst, J.; Neukum, G.; Hoffmann, H.; Jaumann, R.; Hauber, E.; Albertz, J.; McCord, T. B.; Markiewicz, W. J.
2004-12-01
The ESA Mars Express spacecraft was launched from Baikonur on June 2, 2003, entered Mars orbit on December 25, 2003, and reached the nominal mapping orbit on January 28, 2004. Observing conditions were favorable early on for the HRSC (High Resolution Stereo Camera), designed for the mapping of the Martian surface in 3-D. The HRSC is a pushbroom scanner with 9 CCD line detectors mounted in parallel and perpendicular to the direction of flight on the focal plane. The camera can obtain images at high resolution (10 m/pix), in triple stereo (20 m/pix), in four colors, and at five different phase angles near-simultaneously. An additional Super-Resolution Channel (SRC) yields nested-in images at 2.3 m/pix for detailed photogeologic studies. Even with nominal spacecraft trajectory and camera pointing data from the commissioning phase, solid stereo image reconstructions are feasible. Moreover, the three-line stereo data allow us to identify and correct errors in navigation data. We find that > 99% of the stereo rays intersect within a sphere of radius < 20 m after orbit and pointing data correction. From the HRSC images we have produced Digital Terrain Models (DTMs) with pixel sizes of 200 m, and some better. HRSC stereo models and data obtained by the MOLA (Mars Orbiting Laser Altimeter) show good qualitative agreement. Differences in absolute elevations are within 50 m, but may reach several 100 m in lateral positioning (mostly in the spacecraft along-track direction). After correction of these offsets, the HRSC topographic data conveniently fill the gaps between the MOLA tracks and reveal hitherto unrecognized morphologic detail. At the time of writing, the HRSC has covered approx. 22.5 million square kilometers of the Martian surface. In addition, data from 5 Phobos flybys from May through August 2004 were obtained.
The HRSC is beginning to make major contributions to geoscience, atmospheric science, photogrammetry, and cartography of Mars (papers submitted to Nature).
HRSC: High resolution stereo camera
Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W.; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.
2009-01-01
The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and high-level data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.
Apollo 12 stereo view of lunar surface upon which astronaut had stepped
1969-11-20
AS12-57-8448 (19-20 Nov. 1969) --- An Apollo 12 stereo view showing a three-inch square of the lunar surface upon which an astronaut had stepped. Taken during extravehicular activity of astronauts Charles Conrad Jr. and Alan L. Bean, the exposure of the boot imprint was made with an Apollo 35mm stereo close-up camera. The camera was developed to get the highest possible resolution of a small area. The three-inch square is photographed with a flash illumination and at a fixed distance. The camera is mounted on a walking stick, and the astronauts use it by holding it up against the object to be photographed and pulling the trigger. While astronauts Conrad and Bean descended in their Apollo 12 Lunar Module to explore the lunar surface, astronaut Richard F. Gordon Jr. remained with the Command and Service Modules in lunar orbit.
NASA Technical Reports Server (NTRS)
Glaeser, P.; Haase, I.; Oberst, J.; Neumann, G. A.
2013-01-01
We have derived algorithms and techniques to precisely co-register laser altimeter profiles with gridded Digital Terrain Models (DTMs), typically derived from stereo images. The algorithm consists of an initial grid search followed by a least-squares matching and yields the translation parameters at sub-pixel level needed to align the DTM and the laser profiles in 3D space. This software tool was primarily developed and tested for co-registration of laser profiles from the Lunar Orbiter Laser Altimeter (LOLA) with DTMs derived from the Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) stereo images. Data sets can be co-registered with positional accuracy between 0.13 m and several meters depending on the pixel resolution and amount of laser shots, where rough surfaces typically result in more accurate co-registrations. Residual heights of the data sets are as small as 0.18 m. The software can be used to identify instrument misalignment, orbit errors, pointing jitter, or problems associated with reference frames being used. Also, assessments of DTM effective resolutions can be obtained. From the correct position between the two data sets, comparisons of surface morphology and roughness can be made at laser footprint- or DTM pixel-level. The precise co-registration allows us to carry out joint analysis of the data sets and ultimately to derive merged high-quality data products. Examples of matching other planetary data sets, like LOLA with LRO Wide Angle Camera (WAC) DTMs or Mars Orbiter Laser Altimeter (MOLA) with stereo models from the High Resolution Stereo Camera (HRSC) as well as Mercury Laser Altimeter (MLA) with Mercury Dual Imaging System (MDIS) are shown to demonstrate the broad science applications of the software tool.
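The initial grid search described above can be sketched for the integer-shift case. This toy version omits the sub-pixel least-squares refinement the text describes and uses an invented synthetic DTM and laser profile rather than LOLA/NAC data:

```python
import numpy as np

def coregister(dtm, xs, ys, zs, search=range(-3, 4)):
    """Brute-force grid search for the integer (dx, dy) shift that
    minimizes RMS height residuals between laser shots (xs, ys, zs)
    and a gridded DTM, sampled nearest-neighbour. A real tool would
    refine the winner with sub-pixel least-squares matching."""
    best = None
    for dx in search:
        for dy in search:
            r = dtm[ys + dy, xs + dx] - zs          # height residuals
            rms = np.sqrt(np.mean(r ** 2))
            if best is None or rms < best[0]:
                best = (rms, dx, dy)
    return best  # (rms, dx, dy)

# Synthetic DTM and a laser profile offset from it by (dx, dy) = (2, -1).
g = np.arange(32)
dtm = np.sin(g[None, :] * 0.3) + np.cos(g[:, None] * 0.2)
xs = np.arange(8, 20)
ys = np.full_like(xs, 15)
zs = dtm[ys - 1, xs + 2]          # heights sampled at the shifted location
rms, dx, dy = coregister(dtm, xs, ys, zs)
assert (dx, dy) == (2, -1) and rms < 1e-9
```

As the abstract notes, rough surfaces tend to co-register more accurately: a strongly varying height field gives the residual surface a sharper minimum at the correct shift.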
Epipolar Rectification for CARTOSAT-1 Stereo Images Using SIFT and RANSAC
NASA Astrophysics Data System (ADS)
Akilan, A.; Sudheer Reddy, D.; Nagasubramanian, V.; Radhadevi, P. V.; Varadan, G.
2014-11-01
Cartosat-1 provides stereo images of 2.5 m spatial resolution with high geometric fidelity. The stereo cameras on the spacecraft have look angles of +26° and -5°, respectively, yielding effective along-track stereo. Any DSM-generation algorithm can use the stereo images for accurate 3D reconstruction and measurement of the ground. Dense match points and pixel-wise matching are prerequisites in DSM generation to capture discontinuities and occlusions for accurate 3D modelling applications. Epipolar image matching reduces the computational effort from a two-dimensional area search to a one-dimensional search. Thus, epipolar rectification is preferred as a pre-processing step for accurate DSM generation. In this paper we explore a method based on SIFT and RANSAC for epipolar rectification of Cartosat-1 stereo images.
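The SIFT-and-RANSAC stage rests on robustly estimating the epipolar geometry (fundamental matrix) from putative feature matches. A self-contained sketch of the normalized 8-point algorithm inside a RANSAC loop, run on invented noise-free synthetic correspondences; SIFT feature detection itself and the final rectifying transforms are omitted:

```python
import numpy as np

def fundamental_8pt(p1, p2):
    """Normalized 8-point estimate of the fundamental matrix F
    (x2^T F x1 = 0) from N >= 8 correspondences (N x 2 arrays)."""
    def norm(p):
        c = p.mean(0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(p - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        return np.c_[p, np.ones(len(p))] @ T.T, T
    a, T1 = norm(p1)
    b, T2 = norm(p2)
    A = np.column_stack([b[:, i] * a[:, j] for i in range(3) for j in range(3)])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0                                    # enforce rank 2
    F = T2.T @ (U @ np.diag(S) @ Vt) @ T1       # denormalize
    return F / np.linalg.norm(F)

def ransac_F(p1, p2, iters=200, tol=1e-6, rng=np.random.default_rng(0)):
    """RANSAC over minimal 8-point samples; inliers scored by the
    algebraic epipolar residual |x2^T F x1|."""
    h1, h2 = np.c_[p1, np.ones(len(p1))], np.c_[p2, np.ones(len(p2))]
    best_F, best_in = None, 0
    for _ in range(iters):
        idx = rng.choice(len(p1), 8, replace=False)
        F = fundamental_8pt(p1[idx], p2[idx])
        n = int((np.abs(np.sum(h2 @ F * h1, axis=1)) < tol).sum())
        if n > best_in:
            best_F, best_in = F, n
    return best_F, best_in

# Noise-free synthetic correspondences from two pinhole views.
rng = np.random.default_rng(1)
X = rng.random((40, 3)) + np.array([0.0, 0.0, 2.0])   # points in front
p1 = X[:, :2] / X[:, 2:]                              # camera [I|0]
Y = X + np.array([0.5, 0.0, 0.0])                     # pure translation
p2 = Y[:, :2] / Y[:, 2:]
F, inliers = ransac_F(p1, p2)
assert inliers == len(p1)
```

On real Cartosat-1 matches, a Sampson-distance inlier test and many more iterations would be used, and the estimated geometry would then drive the rectifying resampling of both images.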
Generalized parallel-perspective stereo mosaics from airborne video.
Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M
2004-02-01
In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.
NASA Astrophysics Data System (ADS)
Chung Liu, Wai; Wu, Bo; Wöhler, Christian
2018-02-01
Photoclinometric surface reconstruction techniques such as Shape-from-Shading (SfS) and Shape-and-Albedo-from-Shading (SAfS) retrieve topographic information of a surface on the basis of the reflectance information embedded in the image intensity of each pixel. SfS or SAfS techniques have been utilized to generate pixel-resolution digital elevation models (DEMs) of the Moon and other planetary bodies. Photometric stereo SAfS analyzes images under multiple illumination conditions to improve the robustness of reconstruction. In this case, the directional difference in illumination between the images is likely to affect the quality of the reconstruction result. In this study, we quantitatively investigate the effects of illumination differences on photometric stereo SAfS. Firstly, an algorithm for photometric stereo SAfS is developed, and then, an error model is derived to analyze the relationships between the azimuthal and zenith angles of illumination of the images and the reconstruction qualities. The developed algorithm and error model were verified with high-resolution images collected by the Narrow Angle Camera (NAC) of the Lunar Reconnaissance Orbiter Camera (LROC). Experimental analyses reveal that (1) the resulting error in photometric stereo SAfS depends on both the azimuthal and the zenith angles of illumination as well as the general intensity of the images and (2) the predictions from the proposed error model are consistent with the actual slope errors obtained by photometric stereo SAfS using the LROC NAC images. The proposed error model enriches the theory of photometric stereo SAfS and is of significance for optimized lunar surface reconstruction based on SAfS techniques.
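Classical Lambertian photometric stereo, which underlies photometric-stereo SAfS, solves a per-pixel linear system relating observed intensities to the surface normal scaled by albedo. A minimal sketch with an assumed normal, albedo, and light directions (invented values, not LROC NAC data or the authors' error model):

```python
import numpy as np

def photometric_stereo(intensities, lights):
    """Per-pixel Lambertian photometric stereo: given K >= 3 intensities
    and the corresponding K illumination direction vectors, recover
    albedo and unit surface normal from I_k = albedo * (L_k . n)."""
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)  # g = albedo * n
    albedo = np.linalg.norm(g)
    return albedo, g / albedo

# Synthetic check: an assumed normal, albedo 0.8, three light directions
# differing in azimuth and zenith (the geometry the error model studies).
n_true = np.array([0.2, -0.1, 0.97])
n_true /= np.linalg.norm(n_true)
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.866],
              [0.0, 0.5, 0.866]])
I = 0.8 * L @ n_true                      # noise-free Lambertian intensities
albedo, n = photometric_stereo(I, L)
assert abs(albedo - 0.8) < 1e-9 and np.allclose(n, n_true)
```

The conditioning of the `lights` matrix is exactly where the azimuthal and zenith differences between images enter: nearly parallel illumination vectors make the system ill-conditioned, consistent with the error dependence the study reports.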
Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras
NASA Technical Reports Server (NTRS)
Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.
2011-01-01
The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive-only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 µm) or long-wave infrared (LWIR) radiation (8-12 µm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24-hour water and 12-hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.
NASA Astrophysics Data System (ADS)
Sabbatini, Massimo; Collon, Maximilien J.; Visentin, Gianfranco
2008-02-01
The Erasmus Recording Binocular (ERB1) was the first fully digital stereo camera used on the International Space Station. One year after its first utilisation, the results and feedback collected with various audiences have convinced us to continue exploiting the outreach potential of such media, with its unique capability to bring space down to earth and to share the feeling of weightlessness and confinement with viewers on earth. The production of stereo is progressing quickly, but it still poses problems for the distribution of the media. The Erasmus Centre of the European Space Agency has experienced how difficult it is to master the full production and distribution chain of a stereo system. Efforts are also under way to standardize the satellite broadcasting part of the distribution. A new stereo camera, ERB2, is being built for launch to the International Space Station (ISS) in September 2008: it shall have 720p resolution, it shall be able to transmit its images to the ground in real time, allowing the production of live programs, and it could possibly also be used outside the ISS in support of Extra Vehicular Activities of the astronauts. These new features are quite challenging to achieve within the reduced power and mass budget available to space projects, and we hope to inspire more designers to come up with ingenious ideas to build cameras capable of operating in the harsh Low Earth Orbit environment: radiation, temperature, power consumption and thermal design are the challenges to be met. The intent of this paper is to share with the readers the experience collected so far in all aspects of the 3D video production chain and to increase awareness of the unique content that we are collecting: striking stereo images from space can be used by all actors in the stereo arena to gain consensus on this powerful medium.
With respect to last year we shall present the progress made in the following areas: a) live satellite broadcasting of stereo content to D-Cinemas in Europe; b) the design challenges of flying the camera outside the ISS, as opposed to ERB1, which was only meant to be used in the pressurized environment of the ISS; c) on-board stereo viewing on a stereo camera, first tackled in ERB1: trade-offs between OLED and LCOS display technologies shall be presented; d) HD-SDI cameras versus USB2 or Firewire; e) the hardware compression ASIC solutions used to handle the high on-board data rate; f) 3D geometry reconstruction: first attempts at reconstructing a computer model of the interior of the ISS starting from the available stereo video.
External Mask Based Depth and Light Field Camera
2013-12-08
[Fragmentary excerpt. Recoverable content: an overview of plenoptic-function sampling can be found in the survey by Wetzstein et al.; an "Applications" section notes that high-spatial-resolution depth and light fields are a rich source of information about the plenoptic function. Recoverable references: Pelican Imaging, http://www.pelicanimaging.com/; E. Adelson and J. Wang, "Single lens stereo with a plenoptic camera," IEEE Transactions on Pattern Analysis and Machine Intelligence.]
NASA Astrophysics Data System (ADS)
Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.
2001-08-01
The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.
NASA Astrophysics Data System (ADS)
Liu, W. C.; Wu, B.
2018-04-01
High-resolution 3D modelling of the lunar surface is important for lunar scientific research and exploration missions. Photogrammetry is known for 3D mapping and modelling from a pair of stereo images based on dense image matching. However, dense matching may fail in poorly textured areas and in situations where the image pair has large illumination differences. As a result, the actual achievable spatial resolution of the 3D model from photogrammetry is limited by the performance of dense image matching. On the other hand, photoclinometry (i.e., shape from shading) is characterised by its ability to recover pixel-wise surface shapes based on image intensity and imaging conditions such as illumination and viewing directions. More robust shape reconstruction through photoclinometry can be achieved by incorporating images acquired under different illumination conditions (i.e., photometric stereo). Introducing photoclinometry into photogrammetric processing can therefore effectively increase the achievable resolution of the mapping result while maintaining its overall accuracy. This research presents an integrated photogrammetric and photoclinometric approach for pixel-resolution 3D modelling of the lunar surface. First, photoclinometry is integrated with stereo image matching to create robust and spatially well distributed dense conjugate points. Then, based on the 3D point cloud derived from photogrammetric processing of the dense conjugate points, photoclinometry is further introduced to derive the 3D positions of the unmatched points and to refine the final point cloud. The approach is able to produce one 3D point for each image pixel within the overlapping area of the stereo pair, so as to obtain pixel-resolution 3D models. Experiments using Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) images show the superior performance of the approach compared with traditional photogrammetric techniques. 
The results and findings from this research contribute to optimal exploitation of image information for high-resolution 3D modelling of the lunar surface, which is of significance for the advancement of lunar and planetary mapping.
Blur spot limitations in distal endoscope sensors
NASA Astrophysics Data System (ADS)
Yaron, Avi; Shechterman, Mark; Horesh, Nadav
2006-02-01
In years past, the picture quality of electronic video systems was limited by the image sensor. At present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single-chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for the blur phenomenon, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated with an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual-chip stereoscopic camera with low- to medium-resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single-chip stereo sensors is improved tolerance to electronic signal noise.
The HRSC on Mars Express: Mert Davies' Involvement in a Novel Planetary Cartography Experiment
NASA Astrophysics Data System (ADS)
Oberst, J.; Waehlisch, M.; Giese, B.; Scholten, F.; Hoffmann, H.; Jaumann, R.; Neukum, G.
2002-12-01
Mert Davies was a team member of the HRSC (High Resolution Stereo Camera) imaging experiment (PI: Gerhard Neukum) on ESA's Mars Express mission. This pushbroom camera is equipped with 9 forward- and backward-looking CCD lines, 5184 samples each, mounted in parallel, perpendicular to the spacecraft velocity vector. Flight image data with resolutions of up to 10m/pix (from an altitude of 250 km) will be acquired line by line as the spacecraft moves. This acquisition strategy will result in 9 separate almost completely overlapping image strips, each of them having more than 27,000 image lines, typically. [HRSC is also equipped with a superresolution channel for imaging of selected targets at up to 2.3 m/pixel]. The combined operation of the nadir and off-nadir CCD lines (+18.9°, 0°, -18.9°) gives HRSC a triple-stereo capability for precision mapping of surface topography and for modelling of spacecraft orbit- and camera pointing errors. The goals of the camera are to obtain accurate control point networks, Digital Elevation Models (DEMs) in Mars-fixed coordinates, and color orthoimages at global (100% of the surface will be covered with resolutions better than 30m/pixel) and local scales. With his long experience in all aspects of planetary geodesy and cartography, Mert Davies was involved in the preparations of this novel Mars imaging experiment which included: (a) development of a ground data system for the analysis of triple-stereo images, (b) camera testing during airborne imaging campaigns, (c) re-analysis of the Mars control point network, and generation of global topographic orthoimage maps on the basis of MOC images and MOLA data, (d) definition of the quadrangle scheme for a new topographic image map series 1:200K, (e) simulation of synthetic HRSC imaging sequences and their photogrammetric analysis. Mars Express is scheduled for launch in May of 2003. We miss Mert very much!
Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romps, David; Oktem, Rusen
2017-10-31
The three pairs of stereo camera setups aim to provide synchronized and stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory, covering the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain a 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
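The triangulation step mentioned above can be sketched with a standard linear (DLT) two-view triangulation (an illustrative sketch; the 3x4 projection matrices stand in for the calibrated stereo reconstruction parameters the handbook delivers):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from a matched pixel
    pair seen by two calibrated cameras.

    P1, P2 : (3, 4) camera projection matrices
    x1, x2 : (2,) pixel coordinates of the same point in each image
    """
    # Each view contributes two linear constraints on the homogeneous point
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: right singular vector of A
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

For noise-free correspondences this recovers the 3D point exactly; with real image noise the SVD yields the algebraic least-squares estimate.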
NASA Astrophysics Data System (ADS)
Gwinner, K.; Jaumann, R.; Hauber, E.; Hoffmann, H.; Heipke, C.; Oberst, J.; Neukum, G.; Ansan, V.; Bostelmann, J.; Dumke, A.; Elgner, S.; Erkeling, G.; Fueten, F.; Hiesinger, H.; Hoekzema, N. M.; Kersten, E.; Loizeau, D.; Matz, K.-D.; McGuire, P. C.; Mertens, V.; Michael, G.; Pasewaldt, A.; Pinet, P.; Preusker, F.; Reiss, D.; Roatsch, T.; Schmidt, R.; Scholten, F.; Spiegel, M.; Stesky, R.; Tirsch, D.; van Gasselt, S.; Walter, S.; Wählisch, M.; Willner, K.
2016-07-01
The High Resolution Stereo Camera (HRSC) of ESA's Mars Express is designed to map and investigate the topography of Mars. The camera, in particular its Super Resolution Channel (SRC), also obtains images of Phobos and Deimos on a regular basis. As HRSC is a push broom scanning instrument with nine CCD line detectors mounted in parallel, its unique feature is the ability to obtain along-track stereo images and four colors during a single orbital pass. The sub-pixel accuracy of 3D points derived from stereo analysis allows producing DTMs with grid size of up to 50 m and height accuracy on the order of one image ground pixel and better, as well as corresponding orthoimages. Such data products have been produced systematically for approximately 40% of the surface of Mars so far, while global shape models and a near-global orthoimage mosaic could be produced for Phobos. HRSC is also unique because it bridges between laser altimetry and topography data derived from other stereo imaging instruments, and provides geodetic reference data and geological context to a variety of non-stereo datasets. This paper, in addition to an overview of the status and evolution of the experiment, provides a review of relevant methods applied for 3D reconstruction and mapping, and respective achievements. We will also review the methodology of specific approaches to science analysis based on joint analysis of DTM and orthoimage information, or benefitting from high accuracy of co-registration between multiple datasets, such as studies using multi-temporal or multi-angular observations, from the fields of geomorphology, structural geology, compositional mapping, and atmospheric science. Related exemplary results from analysis of HRSC data will be discussed. After 10 years of operation, HRSC covered about 70% of the surface by panchromatic images at 10-20 m/pixel, and about 97% at better than 100 m/pixel. 
As the areas with contiguous coverage by stereo data are increasingly abundant, we also present original data related to the analysis of image blocks and address methodology aspects of newly established procedures for the generation of multi-orbit DTMs and image mosaics. The current results suggest that multi-orbit DTMs with grid spacing of 50 m can be feasible for large parts of the surface, as well as brightness-adjusted image mosaics with co-registration accuracy of adjacent strips on the order of one pixel, and at the highest image resolution available. These characteristics are demonstrated by regional multi-orbit data products covering the MC-11 (East) quadrangle of Mars, representing the first prototype of a new HRSC data product level.
Apollo 11 stereo view showing lump of surface powder with glassy material
1969-07-20
AS11-45-6704 (20 July 1969) --- An Apollo stereo view showing a close-up of a small lump of lunar surface powder about a half inch across, with a splash of a glassy material over it. It seems that a drop of molten material fell on it, splashed and froze. The exposure was made by the Apollo 11 35mm stereo close-up camera. The camera was specially developed to get the highest possible resolution of a small area. A three-inch square area is photographed with a flash illumination and at a fixed distance. The camera is mounted on a walking stick, and the astronauts use it by holding it up against the object to be photographed and pulling the trigger. The pictures are in color and give a stereo view, enabling the fine detail to be seen very clearly. The project is under the direction of Professor T. Gold of Cornell University and Dr. F. Pearce of NASA. The camera was designed and built by Eastman Kodak. Professor E. Purcell of Harvard University and Dr. E. Land of the Polaroid Corporation have contributed to the project. The pictures brought back from the moon by the Apollo 11 crew are of excellent quality and allow fine detail of the undisturbed lunar surface to be seen. Scientists hope to be able to deduce from them some of the processes that have taken place that have shaped and modified the surface.
NASA Astrophysics Data System (ADS)
Kim, J.; Schumann, G.; Neal, J. C.; Lin, S.
2013-12-01
Earth is the only planet possessing an active hydrological system based on H2O circulation. However, after Mariner 9 discovered fluvial channels on Mars with features similar to Earth's, it became clear that some solid planets and satellites once had water flows or pseudo-hydrological systems of other liquids. After liquid water was identified as the agent of ancient martian fluvial activities, the valleys and channels on the martian surface were investigated by a number of remote sensing and in-situ measurements. Among all available data sets, the stereo DTMs and orthoimages from various successful orbital sensors, such as the High Resolution Stereo Camera (HRSC), Context Camera (CTX), and High Resolution Imaging Science Experiment (HiRISE), are the most widely used to trace the origin and consequences of martian hydrological channels. However, geomorphological analysis with stereo DTMs and orthoimages over fluvial areas has some limitations, so a quantitative modeling method utilizing DTMs of various spatial resolutions is required. Thus, in this study we tested the application of hydraulics analysis with multi-resolution martian DTMs, constructed in line with Kim and Muller's (2009) approach. An advanced LISFLOOD-FP model (Bates et al., 2010), which simulates in-channel dynamic wave behavior by solving 2D shallow water equations without advection, was introduced to conduct a high-accuracy simulation together with 150 m to 1.2 m resolution DTMs over test sites including Athabasca and Bahram Valles. For application to the martian surface, the acceleration of gravity in LISFLOOD-FP was reduced to the martian value of 3.71 m s⁻², and the Manning's n value (friction), the only free parameter in the model, was adjusted for martian gravity by scaling it. The approach employing multi-resolution stereo DTMs and LISFLOOD-FP proved superior to other studies that used a single DTM source for hydraulics analysis. 
HRSC DTMs, covering 50-150 m resolutions, were used to trace rough routes of water flows over extensive target areas. Then, refinements through hydraulics simulations with CTX DTMs (~12-18 m resolution) and HiRISE DTMs (~1-4 m resolution) were conducted, employing the output of the HRSC simulations as initial conditions. Thus, even limited coverage by high and very high resolution stereo DTMs enabled a high-precision hydraulics analysis reconstructing a whole fluvial event. In this manner, information useful for identifying the characteristics of martian fluvial activities, such as water depth over time, flow direction, and travel time, was successfully retrieved for each target tributary. Beyond these outputs of the hydraulics analysis, the local roughness and photogrammetric control of the stereo DTMs appeared to be crucial elements for accurate fluvial simulation. The potential of this study should be further explored for application to other extraterrestrial bodies where fluvial activity once existed, as well as to the major martian channels and valleys.
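The gravity adjustments described above (a reduced g and a rescaled Manning's n) can be sketched as follows. Note the assumption: because Manning's empirical coefficient hides a factor of g^(1/2), dimensional analysis suggests scaling n by (g_earth/g_mars)^(1/2); this is one common convention for Mars flood modeling, not necessarily the authors' exact procedure:

```python
import math

G_EARTH = 9.81  # m s^-2
G_MARS = 3.71   # m s^-2, the value used in the LISFLOOD-FP adaptation

def manning_n_for_gravity(n_earth, g_target=G_MARS, g_ref=G_EARTH):
    """Rescale an Earth-calibrated Manning friction coefficient for a
    different surface gravity: n scales with (g_ref / g_target)^(1/2)."""
    return n_earth * math.sqrt(g_ref / g_target)

def manning_velocity(n, hydraulic_radius, slope):
    """Mean flow velocity from Manning's equation:
    V = R^(2/3) * S^(1/2) / n (SI units)."""
    return hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope) / n
```

Because the scaled n is larger on Mars, the same channel geometry and slope yield lower flow velocities than the unscaled Earth calibration would predict.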
NASA Astrophysics Data System (ADS)
Reinartz, Peter; Müller, Rupert; Lehner, Manfred; Schroeder, Manfred
During the HRS (High Resolution Stereo) Scientific Assessment Program, the French space agency CNES delivered data sets from the HRS camera system with high-precision ancillary data. Two test data sets from this program were evaluated: one located in Germany, the other in Spain. The first goal was to derive orthoimages and digital surface models (DSM) from the along-track stereo data by applying the rigorous model with direct georeferencing and without ground control points (GCPs). For the derivation of DSMs, the stereo processing software developed at DLR for the MOMS-2P three-line stereo camera was used. As a first step, the interior and exterior orientation of the camera, delivered as ancillary data from the positioning and attitude systems, were extracted. A dense image matching, using nearly all pixels as kernel centers, provided the parallaxes. The quality of the stereo tie points was controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection led to points in object space which were subsequently interpolated to a DSM on a regular grid. DEM filtering methods were also applied, and evaluations were carried out differentiating between accuracies in forest and other areas. Additionally, orthoimages were generated from the images of the two stereo looking directions. The orthoimage and DSM accuracy was determined by using GCPs and available reference DEMs of superior accuracy (DEMs derived from laser data and/or classical airborne photogrammetry). As expected, the results obtained without using GCPs showed a bias on the order of 5-20 m to the reference data for all three coordinates. By image matching it could be shown that the two independently derived orthoimages exhibit a very constant shift behavior. In a second step, a few GCPs (3-4) were used to calculate boresight alignment angles, introduced into the direct georeferencing process of each image independently. 
This method improved the absolute accuracy of the resulting orthoimages and DSM significantly.
Neukum, G; Jaumann, R; Hoffmann, H; Hauber, E; Head, J W; Basilevsky, A T; Ivanov, B A; Werner, S C; van Gasselt, S; Murray, J B; McCord, T
2004-12-23
The large-area coverage at a resolution of 10-20 metres per pixel in colour and three dimensions with the High Resolution Stereo Camera Experiment on the European Space Agency Mars Express Mission has made it possible to study the time-stratigraphic relationships of volcanic and glacial structures in unprecedented detail and give insight into the geological evolution of Mars. Here we show that calderas on five major volcanoes on Mars have undergone repeated activation and resurfacing during the last 20 per cent of martian history, with phases of activity as young as two million years, suggesting that the volcanoes are potentially still active today. Glacial deposits at the base of the Olympus Mons escarpment show evidence for repeated phases of activity as recently as about four million years ago. Morphological evidence is found that snow and ice deposition on the Olympus construct at elevations of more than 7,000 metres led to episodes of glacial activity at this height. Even now, water ice protected by an insulating layer of dust may be present at high altitudes on Olympus Mons.
Prism-based single-camera system for stereo display
NASA Astrophysics Data System (ADS)
Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa
2016-06-01
This paper combines a prism with a single camera to propose a low-cost method of stereo imaging. First, from the principles of geometrical optics, we derive the relationship between the prism single-camera system and a dual-camera system, and from the principles of binocular vision we derive the relationship between binocular viewing and a dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular viewing, and obtain the positions of prism, camera, and object that give the best stereo display. Finally, using NVIDIA active shutter stereo glasses, we realize three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various ways the eyes observe a scene. A stereo imaging system designed by the proposed method can faithfully recover the 3-D shape of the photographed object.
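The geometric equivalence between the prism single-camera system and a dual-camera system can be illustrated with thin-prism approximations (a hypothetical sketch only, not the paper's derivation; the apex angle, refractive index, and prism-to-camera distance are illustrative parameters):

```python
import math

def thin_prism_deviation(apex_angle_rad, refractive_index):
    """Small-angle deviation of a thin prism: delta ≈ (n - 1) * alpha."""
    return (refractive_index - 1.0) * apex_angle_rad

def virtual_baseline(prism_distance, apex_angle_rad, refractive_index=1.5):
    """Approximate separation of the two virtual viewpoints created when a
    biprism at distance d in front of a single camera splits the view into
    left and right half-images: b ≈ 2 * d * tan(delta)."""
    delta = thin_prism_deviation(apex_angle_rad, refractive_index)
    return 2.0 * prism_distance * math.tan(delta)
```

The virtual baseline then plays the same role as the intercamera distance of a conventional dual-camera rig when applying standard binocular depth relations.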
NASA Astrophysics Data System (ADS)
Griffiths, Andrew; Coates, Andrew; Muller, Jan-Peter; Jaumann, Ralf; Josset, Jean-Luc; Paar, Gerhard; Barnes, David
2010-05-01
The ExoMars mission has evolved into a joint European-US mission to deliver a trace gas orbiter and a pair of rovers to Mars in 2016 and 2018, respectively. The European rover will carry the Pasteur exobiology payload, including the 1.56 kg Panoramic Camera. PanCam will provide multispectral stereo images from Wide-Angle Cameras (WAC) with a 34 deg horizontal field of view (580 microrad/pixel), and colour monoscopic "zoom" images from a High Resolution Camera (HRC) with a 5 deg horizontal field of view (83 microrad/pixel). The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage [1]. Integrated with the WACs and HRC into the PanCam optical bench (which helps the instrument meet its planetary protection requirements) is the PanCam interface unit (PIU), which provides image storage, a Spacewire interface to the rover, and DC-DC power conversion. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission [2] as well as providing multispectral geological imaging, colour and stereo panoramic images, and solar images for water vapour abundance and dust optical depth measurements. The High Resolution Camera (HRC) can be used for high-resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls. Additionally, HRC will be used to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. In short, PanCam provides the overview and context for the ExoMars experiment locations, required to enable the exobiology aims of the mission. In addition to these baseline capabilities, further enhancements are possible to increase PanCam's effectiveness for astrobiology and planetary exploration: 1. Rover Inspection Mirror (RIM) 2. Organics Detection by Fluorescence Excitation (ODFE) LEDs [3-6] 3. 
UVIS broadband UV Flux and Opacity Determination (UVFOD) photodiode This paper will discuss the scientific objectives and resource impacts of these enhancements. References: 1. Griffiths, A.D., Coates, A.J., Josset, J.-L., Paar, G., Hofmann, B., Pullan, D., Ruffer, P., Sims, M.R., Pillinger, C.T., The Beagle 2 stereo camera system, Planet. Space Sci. 53, 1466-1488, 2005. 2. Paar, G., Oberst, J., Barnes, D.P., Griffiths, A.D., Jaumann, R., Coates, A.J., Muller, J.P., Gao, Y., Li, R., 2007, Requirements and Solutions for ExoMars Rover Panoramic Camera 3d Vision Processing, abstract submitted to EGU meeting, Vienna, 2007. 3. Storrie-Lombardi, M.C., Hug, W.F., McDonald, G.D., Tsapin, A.I., and Nealson, K.H. 2001. Hollow cathode ion lasers for deep ultraviolet Raman spectroscopy and fluorescence imaging. Rev. Sci. Ins., 72 (12), 4452-4459. 4. Nealson, K.H., Tsapin, A., and Storrie-Lombardi, M. 2002. Searching for life in the universe: unconventional methods for an unconventional problem. International Microbiology, 5, 223-230. 5. Mormile, M.R. and Storrie-Lombardi, M.C. 2005. The use of ultraviolet excitation of native fluorescence for identifying biomarkers in halite crystals. Astrobiology and Planetary Missions (R. B. Hoover, G. V. Levin and A. Y. Rozanov, Eds.), Proc. SPIE, 5906, 246-253. 6. Storrie-Lombardi, M.C. 2005. Post-Bayesian strategies to optimize astrobiology instrument suites: lessons from Antarctica and the Pilbara. Astrobiology and Planetary Missions (R. B. Hoover, G. V. Levin and A. Y. Rozanov, Eds.), Proc. SPIE, 5906, 288-301.
"Stereo Compton cameras" for the 3-D localization of radioisotopes
NASA Astrophysics Data System (ADS)
Takeuchi, K.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Kishimoto, A.; Ohsuka, S.; Nakamura, S.; Adachi, S.; Hirayanagi, M.; Uchiyama, T.; Ishikawa, Y.; Kato, T.
2014-11-01
The Compton camera is a viable and convenient tool used to visualize the distribution of radioactive isotopes that emit gamma rays. After the nuclear disaster in Fukushima in 2011, there is a particularly urgent need to develop "gamma cameras" that can visualize the distribution of such radioisotopes. In response, we propose a portable Compton camera, which comprises 3-D position-sensitive GAGG scintillators coupled with thin monolithic MPPC arrays. The pulse-height ratio of the two MPPC arrays located at either end of the scintillator block determines the depth of interaction (DOI), which dramatically improves the position resolution of the scintillation detectors. We report on the detailed optimization of the detector design, based on Geant4 simulation. The results indicate that detection efficiency reaches up to 0.54%, more than 10 times that of other cameras being tested in Fukushima, along with a moderate angular resolution of 8.1° (FWHM). By applying the triangular surveying method, we also propose a new concept for the stereo measurement of gamma rays using two Compton cameras, thus enabling the 3-D positional measurement of radioactive isotopes for the first time. Simulation data for a single point source showed that the source position and the distance to it could typically be determined to within 2 meters' accuracy, and simulation data for two point sources confirmed that multiple sources can be clearly separated by event selection.
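The triangular surveying idea — intersecting the back-projected source directions obtained by the two Compton cameras — can be sketched as a closest-point computation between two rays (an illustrative sketch under the assumption that each camera has already been reduced to a single source-direction estimate; not the authors' reconstruction code):

```python
import numpy as np

def locate_source(p1, d1, p2, d2):
    """Estimate a point-source position as the midpoint of the shortest
    segment between two back-projected rays p_i + t_i * d_i."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    w0 = p1 - p2
    b = d1 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = 1.0 - b * b                  # a = c = 1 for unit directions
    if denom < 1e-12:                    # (near-)parallel rays
        raise ValueError("rays are parallel; source position is ambiguous")
    t1 = (b * e - d) / denom
    t2 = (e - b * d) / denom
    q1 = p1 + t1 * d1                    # closest point on ray 1
    q2 = p2 + t2 * d2                    # closest point on ray 2
    return 0.5 * (q1 + q2)
```

The residual length of the segment between q1 and q2 gives a simple consistency check on the angular resolution of the two cameras.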
Mars Global Coverage by Context Camera on MRO
2017-03-29
In early 2017, after more than a decade of observing Mars, the Context Camera (CTX) on NASA's Mars Reconnaissance Orbiter (MRO) surpassed 99 percent coverage of the entire planet. This mosaic shows that global coverage. No other camera has ever imaged so much of Mars in such high resolution. The mosaic offers a resolution that enables zooming in for more detail of any region of Mars. It is still far from the full resolution of individual CTX observations, which can reveal the shapes of features smaller than the size of a tennis court. As of March 2017, the Context Camera has taken about 90,000 images since the spacecraft began examining Mars from orbit in late 2006. In addition to covering 99.1 percent of the surface of Mars at least once, this camera has observed more than 60 percent of Mars more than once, checking for changes over time and providing stereo pairs for 3-D modeling of the surface. http://photojournal.jpl.nasa.gov/catalog/PIA21488
Unstructured Facility Navigation by Applying the NIST 4D/RCS Architecture
2006-07-01
[Fragmentary excerpt. Recoverable details: the platform carries wireless data and emergency-stop radios, a GPS receiver and antenna, an inertial navigation unit, dual stereo color cameras, and infrared sensors; actuators include wheel motors and camera controls; the sensory processing module uses the two pairs of stereo color cameras along with the physical bumper and infrared bumper sensors.]
Restoration Of MEX SRC Images For Improved Topography: A New Image Product
NASA Astrophysics Data System (ADS)
Duxbury, T. C.
2012-12-01
Surface topography is an important constraint when investigating the evolution of solar system bodies. Topography is typically obtained from stereo photogrammetric or photometric (shape from shading) analyses of overlapping / stereo images and from laser / radar altimetry data. The ESA Mars Express Mission [1] carries a Super Resolution Channel (SRC) as part of the High Resolution Stereo Camera (HRSC) [2]. The SRC can build up overlapping / stereo coverage of Mars, Phobos and Deimos by viewing the surfaces from different orbits. The derivation of high precision topography data from the SRC raw images is degraded because the camera is out of focus. The point spread function (PSF) is multi-peaked, covering tens of pixels. After registering and co-adding hundreds of star images, an accurate SRC PSF was reconstructed and is being used to restore the SRC images to near blur free quality. The restored images offer a factor of about 3 in improved geometric accuracy as well as identifying the smallest of features to significantly improve the stereo photogrammetric accuracy in producing digital elevation models. The difference between blurred and restored images provides a new derived image product that can provide improved feature recognition to increase spatial resolution and topographic accuracy of derived elevation models. Acknowledgements: This research was funded by the NASA Mars Express Participating Scientist Program. [1] Chicarro, et al., ESA SP 1291(2009) [2] Neukum, et al., ESA SP 1291 (2009). A raw SRC image (h4235.003) of a Martian crater within Gale crater (the MSL landing site) is shown in the upper left and the restored image is shown in the lower left. A raw image (h0715.004) of Phobos is shown in the upper right and the difference between the raw and restored images, a new derived image data product, is shown in the lower right. 
The lower images, resulting from an image restoration process, significantly improve feature recognition and hence the accuracy of the derived topography.
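The restoration step described above (co-adding registered star images to recover the PSF, then deblurring) can be sketched in miniature. The abstract does not name the deconvolution algorithm, so a generic Wiener filter stands in for it; the function names and the regularization constant `k` are illustrative, not the authors' code:

```python
import numpy as np

def estimate_psf(star_cutouts):
    # Co-add registered star cutouts (each centred on a star) and
    # normalise to unit energy: stars act as point sources, so their
    # average approximates the camera PSF.
    psf = np.mean(star_cutouts, axis=0)
    return psf / psf.sum()

def wiener_restore(blurred, psf, k=1e-3):
    # Frequency-domain Wiener filter as a generic stand-in for the
    # (unspecified) restoration applied to the SRC images.
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))
```

Even with a multi-peaked PSF like the SRC's, the filter concentrates a blurred point source back onto its original pixel.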
Image quality prediction: an aid to the Viking Lander imaging investigation on Mars.
Huck, F O; Wall, S D
1976-07-01
Two Viking spacecraft scheduled to land on Mars in the summer of 1976 will return multispectral panoramas of the Martian surface with resolutions 4 orders of magnitude higher than have been previously obtained and stereo views with resolutions approaching that of the human eye. Mission constraints and uncertainties require a carefully planned imaging investigation that is supported by a computer model of camera response and surface features to aid in diagnosing camera performance, in establishing a preflight imaging strategy, and in rapidly revising this strategy if pictures returned from Mars reveal unfavorable or unanticipated conditions.
MRO CTX-based Digital Terrain Models
NASA Astrophysics Data System (ADS)
Dumke, Alexander
2016-04-01
In planetary surface sciences, digital terrain models (DTMs) are paramount when it comes to understanding and quantifying processes. In this contribution an approach for the derivation of digital terrain models from stereo images of the NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) is described. CTX consists of a 350 mm focal length telescope with a 5000-element CCD line sensor and is operated as a pushbroom camera. It acquires images at ~6 m/px over a swath width of ~30 km of the Mars surface [1]. Today, several approaches for the derivation of CTX DTMs exist [e.g. 2, 3, 4]. The approach discussed here is based on established software packages and combines them with proprietary software as described below. The main processing task for the derivation of CTX stereo DTMs is based on six steps: (1) First, CTX images are radiometrically corrected using the ISIS software package [5]. (2) For selected CTX stereo images, exterior orientation data are extracted from reconstructed NAIF SPICE data [6]. (3) In the next step, High Resolution Stereo Camera (HRSC) DTMs [7, 8, 9] are used for the rectification of the CTX stereo images to reduce the search area during image matching. Here, HRSC DTMs are used due to their higher spatial resolution compared to MOLA DTMs. (4) The determination of the coordinates of homologous points between stereo images, i.e. the stereo image matching process, consists of two steps: first, a cross-correlation to obtain approximate values, and secondly, their use in a least-squares matching (LSM) process in order to obtain subpixel positions. (5) The stereo matching results are then used to generate object points from forward ray intersections. (6) As a last step, the DTM raster generation is performed using software developed at the German Aerospace Center, Berlin, whereby only object points with an error smaller than a threshold value are used. References: [1] Malin, M. C. et al., 2007, JGR 112, doi:10.1029/2006JE002808 [2] Broxton, M. J. 
et al., 2008, LPSC XXXIX, Abstract#2419 [3] Yershov, V. et al., 2015 EPSC 10, EPSC2015-343 [4] Kim, J. R. et al., 2013 EPS 65, 799-809 [5] https://isis.astrogeology.usgs.gov/index.html [6] http://naif.jpl.nasa.gov/naif/index.html [7] Gwinner et al., 2010, EPS 294, 543-540 [8] Gwinner et al., 2015, PSS [9] Dumke, A. et al., 2008, ISPRS, 37, Part B4, 1037-1042
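Step (4), the two-stage matching, can be sketched in miniature. A hypothetical `match_subpixel` runs integer-pixel normalized cross-correlation along a (rectified) search strip, then refines the peak; parabolic peak interpolation is used here as a simple stand-in for the least-squares matching (LSM) refinement:

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation score of two equally-sized patches.
    a = a - a.mean(); b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def match_subpixel(template, strip):
    # Stage 1: integer-pixel correlation over all window positions.
    w = template.shape[1]
    scores = np.array([ncc(template, strip[:, x:x + w])
                       for x in range(strip.shape[1] - w + 1)])
    i = int(np.argmax(scores))
    # Stage 2: parabolic interpolation of the score peak (a crude
    # substitute for the LSM step described in the abstract).
    if 0 < i < len(scores) - 1:
        l, c, r = scores[i - 1], scores[i], scores[i + 1]
        denom = l - 2 * c + r
        if denom != 0:
            i += 0.5 * (l - r) / denom
    return i  # subpixel column offset of the template inside the strip
```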
Cloud photogrammetry with dense stereo for fisheye cameras
NASA Astrophysics Data System (ADS)
Beekmans, Christoph; Schneider, Johannes; Läbe, Thomas; Lennefer, Martin; Stachniss, Cyrill; Simmer, Clemens
2016-11-01
We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km2 using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows us to recover a detailed and more complete cloud morphology compared to previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived, and used for radiation closure under cloudy conditions, for example. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view with a single image. However, the computation of dense 3-D information is more complicated and standard implementations for dense 3-D stereo reconstruction cannot be easily applied. Together with an appropriate camera calibration, which includes internal camera geometry and the global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of clouds located around the cameras. We implement and evaluate the proposed approach using real-world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated from a Lidar-ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.
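The point that fisheye lenses follow a different projection function can be made concrete with a minimal sketch, assuming an ideal equidistant model (r = f·θ) and midpoint triangulation of two viewing rays; real sky imagers need a full calibrated distortion model, and the function names are illustrative:

```python
import numpy as np

def fisheye_ray(u, v, cx, cy, f):
    # Unit viewing ray for an ideal equidistant fisheye (r = f * theta).
    # This is the textbook projection only; calibrated lenses add
    # distortion terms.
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    if r == 0.0:
        return np.array([0.0, 0.0, 1.0])
    theta = r / f
    s = np.sin(theta) / r
    return np.array([dx * s, dy * s, np.cos(theta)])

def triangulate_midpoint(p1, d1, p2, d2):
    # Midpoint of the shortest segment between two rays p_i + t_i * d_i
    # (d_i unit vectors): a simple way to intersect matched cloud rays.
    w0 = p1 - p2
    b = d1 @ d2
    t2 = (d2 @ w0 - b * (d1 @ w0)) / (1.0 - b * b)
    t1 = b * t2 - d1 @ w0
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```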
Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns
NASA Astrophysics Data System (ADS)
Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.
2012-07-01
Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras, whose calibration implies recovering the camera geometries and their true-to-scale relative orientation. In contrast to all reported methods, which require additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is taken as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs to a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions better than 1/4 pixel for 640×480 web cameras.
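The matching rule above, picking the right-image node closest to the epipolar line l = F·x of a left-image node, can be sketched directly (illustrative names, not the toolbox's code):

```python
import numpy as np

def nearest_node(F, left_node, right_nodes):
    # Epipolar matching rule: the match of a left-image chess-board node
    # is the right-image node with the smallest point-to-line distance
    # to its epipolar line l = F @ x.
    x = np.array([left_node[0], left_node[1], 1.0])
    a, b, c = F @ x
    d = np.abs(a * right_nodes[:, 0] + b * right_nodes[:, 1] + c) / np.hypot(a, b)
    return int(np.argmin(d))
```

For a purely horizontally translated stereo pair, epipolar lines are the image rows, so the rule reduces to picking the node with the closest row coordinate.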
Innovative Camera and Image Processing System to Characterize Cryospheric Changes
NASA Astrophysics Data System (ADS)
Schenk, A.; Csatho, B. M.; Nagarajan, S.
2010-12-01
The polar regions play an important role in Earth's climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, a GPS/inertial navigation system, and an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a frame interval of about one second. While digital images and videos have been used for quite some time for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that has only recently become available. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric, that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed. 
Geo-referencing the images is performed by the Applanix navigation system. Our new method enables a 3D reconstruction of the ice sheet surface with high accuracy and unprecedented detail, as demonstrated by examples from the Antarctic Peninsula acquired by the IceBridge mission. Repeat digital imaging also provides data for determining surface elevation changes and velocities, which are critical parameters for ice sheet models. Although these methods work well, there are known problems with satellite images and traditional area-based matching, especially over rapidly changing outlet glaciers. To take full advantage of the high-resolution, repeat stereo imaging, we have developed a new method. The processing starts with the generation of a DEM from geo-referenced stereo images of the first time epoch. The next step is concerned with extracting and matching interest points in object space. Since an interest point moves its spatial position between two time epochs, such points are only radiometrically conjugate, not geometrically. In fact, the geometric displacement of two identical points, together with the time difference, yields velocities. We computed the evolution of the velocity field and surface topography on the floating tongue of the Jakobshavn glacier from historical stereo aerial photographs to illustrate the approach.
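The last step, displacement of conjugate interest points divided by the time difference, can be written down directly (names illustrative; the tracked points are assumed to be in metric object-space coordinates):

```python
import numpy as np

def surface_velocities(pts_t0, pts_t1, dt_years):
    # Conjugate interest points are geometrically displaced between
    # epochs; displacement over the time difference yields velocity
    # vectors and speeds (m/yr).
    v = (np.asarray(pts_t1) - np.asarray(pts_t0)) / dt_years
    speed = np.linalg.norm(v, axis=1)
    return v, speed
```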
NASA Astrophysics Data System (ADS)
Tyczka, Dale R.; Wright, Robert; Janiszewski, Brian; Chatten, Martha Jane; Bowen, Thomas A.; Skibba, Brian
2012-06-01
Nearly all explosive ordnance disposal robots in use today employ monoscopic standard-definition video cameras to relay live imagery from the robot to the operator. With this approach, operators must rely on shadows and other monoscopic depth cues in order to judge distances and object depths. Alternatively, they can contact an object with the robot's manipulator to determine its position, but that approach carries with it the risk of detonation from unintentionally disturbing the target or nearby objects. We recently completed a study in which high-definition (HD) and stereoscopic video cameras were used in addition to conventional standard-definition (SD) cameras in order to determine if higher resolutions and/or stereoscopic depth cues improve operators' overall performance of various unmanned ground vehicle (UGV) tasks. We also studied the effect that the different vision modes had on operator comfort. A total of six different head-aimed vision modes were used including normal-separation HD stereo, SD stereo, "micro" (reduced separation) SD stereo, HD mono, and SD mono (two types). In general, the study results support the expectation that higher resolution and stereoscopic vision aid UGV teleoperation, but the degree of improvement was found to depend on the specific task being performed; certain tasks derived notably more benefit from improved depth perception than others. This effort was sponsored by the Joint Ground Robotics Enterprise under Robotics Technology Consortium Agreement #69-200902 T01. Technical management was provided by the U.S. Air Force Research Laboratory's Robotics Research and Development Group at Tyndall AFB, Florida.
Optical designs for the Mars '03 rover cameras
NASA Astrophysics Data System (ADS)
Smith, Gregory H.; Hagerott, Edward C.; Scherr, Lawrence M.; Herkenhoff, Kenneth E.; Bell, James F.
2001-12-01
In 2003, NASA is planning to send two robotic rover vehicles to explore the surface of Mars. The spacecraft will land on airbags in different, carefully chosen locations. The search for evidence indicating conditions favorable for past or present life will be a high priority. Each rover will carry a total of ten cameras of five types. There will be a stereo pair of color panoramic cameras, a stereo pair of wide-field navigation cameras, one close-up camera on a movable arm, two stereo pairs of fisheye cameras for hazard avoidance, and one Sun sensor camera. This paper discusses the lenses for these cameras. Included are the specifications, design approaches, expected optical performances, prescriptions, and tolerances.
Infrared stereo calibration for unmanned ground vehicle navigation
NASA Astrophysics Data System (ADS)
Harguess, Josh; Strange, Shawn
2014-06-01
The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as the Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new, and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically, with computed reprojection errors.
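The reprojection error used to compare calibrations is conventionally an RMS distance between detected and reprojected pattern corners; the paper does not spell out its formula, so the following is an assumed (but standard) metric:

```python
import numpy as np

def rms_reprojection_error(detected, reprojected):
    # Root-mean-square distance (pixels) between detected calibration
    # pattern corners and their reprojections through the estimated
    # stereo model.
    r = np.linalg.norm(np.asarray(detected) - np.asarray(reprojected), axis=1)
    return float(np.sqrt(np.mean(r ** 2)))
```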
The mosaics of Mars: As seen by the Viking Lander cameras
NASA Technical Reports Server (NTRS)
Levinthal, E. C.; Jones, K. L.
1980-01-01
The mosaics and derivative products produced from many individual high-resolution images acquired by the Viking Lander Camera Systems are described: a morning and an afternoon mosaic for both cameras at the Lander 1 Chryse Planitia site, and morning, noon, and afternoon mosaics for both cameras at Utopia Planitia, the Lander 2 site. The derived products include special geometric projections of the mosaic data sets: polar stereographic (donut), stereoscopic, and orthographic. Contour maps and vertical profiles of the topography were overlaid on the mosaics from which they were derived. Sets of stereo pairs were extracted and enlarged from stereoscopic projections of the mosaics.
GF-7 Imaging Simulation and DSM Accuracy Estimate
NASA Astrophysics Data System (ADS)
Yue, Q.; Tang, X.; Gao, X.
2017-05-01
GF-7 is a two-line-array stereo imaging satellite for surveying and mapping, scheduled for launch in 2018. Its resolution is about 0.8 m at the subastral point, with a swath width of about 20 km, and the viewing angles of its forward and backward cameras are 5 and 26 degrees. This paper proposes an imaging simulation method for GF-7 stereo images. WorldView-2 stereo images were used as the basic data for the simulation. That is, rather than using a DSM and DOM as basic data (an "ortho-to-stereo" method), we used a "stereo-to-stereo" method, which better reflects the differences in geometry and radiation at different viewing angles. The drawback is that geometric error is introduced by two factors: the different viewing angles of the basic and simulated images, and inaccurate or missing ground reference data. We generated a DSM from the WorldView-2 stereo images. This WorldView-2 DSM was used both as the reference DSM for estimating the accuracy of the DSM generated from the simulated GF-7 stereo images, and as "ground truth" for establishing the relationship between WorldView-2 image points and simulated image points. The static MTF was simulated on the instantaneous focal plane "image" by filtering. The SNR was simulated in the electronic sense: the digital value of a WorldView-2 image point was converted to radiance and used as the radiance seen by the simulated GF-7 camera. This radiance is converted to an electron count n according to the physical parameters of the GF-7 camera. The noise electron count n1 is taken as a random number between -√n and +√n. The overall electron counts accumulated by the TDI CCD are summed and converted to the digital values of the simulated GF-7 image. Sinusoidal curves with different amplitudes, frequencies and initial phases were used as attitude curves. Geometric installation errors of the CCD tiles were also simulated, considering rotation and translation factors. 
An accuracy estimate was made for the DSM generated from the simulated images.
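The noise model described, a noise electron count n1 drawn uniformly between -√n and +√n, can be sketched directly (function name illustrative; this uniform draw is a crude stand-in for shot noise, whose standard deviation is √n):

```python
import numpy as np

def add_electron_noise(n, rng):
    # Per-pixel noise electron count n1 drawn uniformly in [-sqrt(n),
    # +sqrt(n)], as described in the text, added to the signal n.
    s = np.sqrt(n)
    return n + rng.uniform(-s, s)
```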
Extracting accurate and precise topography from LROC narrow angle camera stereo observations
NASA Astrophysics Data System (ADS)
Henriksen, M. R.; Manheim, M. R.; Burns, K. N.; Seymour, P.; Speyerer, E. J.; Deran, A.; Boyd, A. K.; Howington-Kraus, E.; Rosiek, M. R.; Archinal, B. A.; Robinson, M. S.
2017-02-01
The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that each provide 0.5 to 2.0 m scale images of the lunar surface. Although not designed as a stereo system, LROC can acquire NAC stereo observations over two or more orbits using at least one off-nadir slew. Digital terrain models (DTMs) are generated from sets of stereo images and registered to profiles from the Lunar Orbiter Laser Altimeter (LOLA) to improve absolute accuracy. With current processing methods, DTMs have absolute accuracies better than the uncertainties of the LOLA profiles and relative vertical and horizontal precisions less than the pixel scale of the DTMs (2-5 m). We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. For a baseline of 15 m the highland mean slope parameters are: median = 9.1°, mean = 11.0°, standard deviation = 7.0°. For the mare the mean slope parameters are: median = 3.5°, mean = 4.9°, standard deviation = 4.5°. The slope values for the highland terrain are steeper than previously reported, likely due to a bias in targeting of the NAC DTMs toward higher relief features in the highland terrain. Overlapping DTMs of single stereo sets were also combined to form larger area DTM mosaics that enable detailed characterization of large geomorphic features. From one DTM mosaic we mapped a large viscous flow related to the Orientale basin ejecta and estimated its thickness and volume to exceed 300 m and 500 km3, respectively. Despite its ∼3.8 billion year age the flow still exhibits unconfined margin slopes above 30°, in some cases exceeding the angle of repose, consistent with deposition of material rich in impact melt. We show that the NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. 
To date, about 2% of the lunar surface has been imaged in high-resolution stereo, and continued acquisition of stereo observations will strengthen our knowledge of the Moon and of the geologic processes that occur across all of the terrestrial planets.
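The baseline-dependent slope statistics quoted above (e.g., median highland slope at a 15 m baseline) can be sketched with a simple definition: elevation differences between DTM posts one baseline apart along rows and columns. This is an illustrative method, not necessarily the authors' exact procedure:

```python
import numpy as np

def slopes_at_baseline(dtm, gsd_m, baseline_m):
    # Slope samples (degrees) at a chosen baseline, from elevation
    # differences between posts `baseline` apart in both grid
    # directions of a DTM with post spacing gsd_m.
    k = max(1, int(round(baseline_m / gsd_m)))
    rise = np.concatenate([np.abs(dtm[:, k:] - dtm[:, :-k]).ravel(),
                           np.abs(dtm[k:, :] - dtm[:-k, :]).ravel()])
    return np.degrees(np.arctan2(rise, k * gsd_m))
```

Median, mean, and standard deviation of the returned samples give the kind of per-baseline statistics reported for the highland and mare DTMs.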
NASA Astrophysics Data System (ADS)
Swain, Pradyumna; Mark, David
2004-09-01
The emergence of curved CCD detectors as individual devices, or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras, represents a major optical design advancement that greatly enhances the scientific potential of such instruments. In altering the primary detection surface within the telescope's optical instrumentation system from flat to curved, and conforming the applied CCD's shape precisely to the contour of the telescope's curved focal plane, a major increase in the amount of transmittable light at various wavelengths through the system is achieved. This in turn enables multi-spectral ultra-sensitive imaging with the much greater spatial resolution necessary for large and very large telescope applications, including those involving infrared image acquisition and spectroscopy, conducted over very wide fields of view. For earth-based and space-borne optical telescopes, the advent of curved CCDs as the principal detectors provides a simplification of the telescope's adjoining optics, reducing the number of optical elements and the occurrence of optical aberrations associated with the large corrective optics used to conform to flat detectors. New astronomical experiments may be devised in the presence of curved CCD applications, in conjunction with large format cameras and curved mosaics, including three-dimensional imaging spectroscopy conducted over multiple wavelengths simultaneously, wide-field real-time stereoscopic tracking of remote objects within the solar system at high resolution, and deep-field survey mapping of distant objects such as galaxies with much greater multi-band spatial precision over larger sky regions. 
Terrestrial stereo panoramic cameras equipped with arrays of curved CCDs joined with associated wide-field optics will require less optical glass and no mechanically moving parts to maintain continuous proper stereo convergence over wider perspective viewing fields than their flat CCD counterparts, lightening the cameras and enabling faster scanning and 3D integration of objects moving within a planetary terrain environment. Preliminary experiments conducted at the Sarnoff Corporation indicate the feasibility of curved CCD imagers with acceptable electro-optic integrity. Currently, we are in the process of evaluating the electro-optic performance of a curved wafer-scale CCD imager. Detailed ray-trace modeling and experimental electro-optical performance data obtained from the curved imager will be presented at the conference.
NASA Technical Reports Server (NTRS)
Leight, C.; Fassett, C. I.; Crowley, M. C.; Dyar, M. D.
2017-01-01
Two types of measurements of Mercury's surface topography were obtained by the MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) spacecraft: laser ranging data from the Mercury Laser Altimeter (MLA) [1], and stereo imagery from the Mercury Dual Imaging System (MDIS) camera [e.g., 2, 3]. MLA data provide precise and accurate elevation measurements, but with sparse spatial sampling except at the highest northern latitudes. Digital terrain models (DTMs) from MDIS have superior resolution but less vertical accuracy, limited approximately to the pixel resolution of the original images (in the case of [3], 15-75 m). Last year [4], we reported topographic measurements of craters in the D = 2.5 to 5 km diameter range from stereo images and suggested that craters on Mercury degrade more quickly than on the Moon (by a factor of up to approximately 10×). However, we listed several alternative explanations for this finding, including the hypothesis that the lower depth/diameter ratios we observe might be a result of the resolution and accuracy of the stereo DTMs. Thus, additional measurements were undertaken using MLA data to examine the morphometry of craters in this diameter range and assess whether the faster crater degradation rate proposed for Mercury is robust.
3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events
NASA Technical Reports Server (NTRS)
Brown, Richard; Navard, Andrew; Spruce, Joseph
2010-01-01
An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch on the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually nonreferenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can provide strong visual indicators that immediately portray the depth of damaged areas or the movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show. 
Multiple sequential image pairs can be used to allow forensic review of temporal phenomena between pairs. The observer, while wearing linear polarized glasses, is able to review image pairs in passive 3D stereo.
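The control-point-based "transformation equation" can be illustrated with a least-squares 2D affine fit; the actual transform model used for WallView co-registration is not specified in the text, so the affine choice and function names are assumptions:

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares 2-D affine transform mapping control points
    # src -> dst, playing the role of the transformation equation used
    # to re-project one umbilical-camera frame onto the other.
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M  # 3x2 matrix acting on row vectors [x, y, 1]

def apply_affine(M, pts):
    # Re-project points with the fitted transform.
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```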
Stereo and IMU-Assisted Visual Odometry for Small Robots
NASA Technical Reports Server (NTRS)
2012-01-01
This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e. 320 × 240), or 8 fps at VGA (Video Graphics Array, 640 × 480) resolution, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
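Function (2), estimating camera motion from tracked features, is commonly solved by rigidly aligning the triangulated 3D feature points of consecutive frames. A sketch using the SVD-based (Kabsch) alignment follows; this is an assumed method, since the abstract does not name its estimator:

```python
import numpy as np

def estimate_motion(P, Q):
    # Rigid motion (R, t) with Q ~ P @ R.T + t, estimated from tracked
    # 3-D feature points (rows of P at frame k, Q at frame k+1) by the
    # SVD (Kabsch) method.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

In practice the IMU supplies a rotation prior and outlier rejection (e.g. RANSAC) wraps this estimator; the sketch shows only the core least-squares step.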
NASA Astrophysics Data System (ADS)
Ivanov, A. B.; Rossi, A.
2009-04-01
Studies of layered terrains in polar regions, as well as inside craters and other areas on Mars, often require knowledge of local topography at much finer resolution than the global MOLA topography allows. For example, in the polar layered deposits, spatial relationships are important for understanding the unconformities observed on the edges of the layered terrains [15,3]. Their formation process is not understood at this point, yet fine-scale topography, together with ground-penetrating radars like SHARAD and MARSIS, may shed light on their 3D structure. Landing site analysis also requires knowledge of local slopes and roughness at scales from 1 to 10 m [1,2]. The Mars Orbiter Camera [10] has taken stereo images at these scales; however, interpretation was difficult due to the unstable behavior of the Mars Global Surveyor spacecraft during imaging (a wobbling effect). The Mars Reconnaissance Orbiter (MRO) is much better stabilized, as this is required for optimal operation of its high-resolution camera. In this work we have utilized data from MRO sensors (the CTX camera [11] and the HIRISE camera [12]) in order to derive digital elevation models (DEMs) from images targeted as stereo pairs. We employed methods and approaches used for Mars Orbiter Camera (MOC) stereo data [4,5]. CTX data vary in resolution, and the stereo pairs analyzed in this work support DEMs at approximately 10 m scale. HIRISE images allow DEM post spacing of around 1 meter. The latter are very large images, and our computing infrastructure was only able either to process reduced-resolution images covering a larger surface or to work with smaller patches at the original resolution. We employed the stereo matching technique described in [5,9], in conjunction with radiometric and geometric image processing in ISIS3 [16]. This technique is capable of deriving tiepoint co-registration at subpixel precision and has proven itself when used for Pathfinder and MER operations [8]. 
A considerable part of this work was to accommodate CTX and HIRISE image processing in the existing data processing pipeline and to improve it at the same time. Currently the workflow is not finished: DEM units are relative and are not in elevation. We have been able to derive successful DEMs from CTX data for Becquerel [14] and Crommelin craters, as well as for some areas in the North Polar Layered Terrain. Due to its tremendous resolution, HIRISE data show great surface detail, allowing better correlation than the other sensors considered in this work. In all cases the DEMs showed considerable potential for exploration of terrain characteristics. Next steps include cross-validating our results against DEMs produced by other teams and sensors (HRSC [6], HIRISE [7]) and providing elevation in terms of absolute height over the MOLA areoid. MRO imaging data allow us an unprecedented look at the Martian terrain. This work provides a step toward the derivation of DEMs from the HIRISE and CTX datasets and is currently undergoing validation against other existing datasets. We will present our latest results for layering structures in both the North and South Polar Layered Deposits, as well as layered structures inside Becquerel and Crommelin craters. Digital elevation models derived from the CTX sensor can also be utilized effectively as an input for clutter reduction models, which are in turn used for the ground-penetrating SHARAD instrument [13]. References. [1] R. Arvidson, et al. Mars exploration program 2007 phoenix landing site selection and characteristics. Journal of Geophysical Research-Planets, 113, JUN 19 2008. [2] M. Golombek, et al. Assessment of mars exploration rover landing site predictions. Nature, 436(7047):44-48, JUL 7 2005. [3] K. E. Herkenhoff, et al. Meter-scale morphology of the north polar region of mars. Science, 317(5845):1711-1715, SEP 21 2007. [4] A. B. Ivanov. Ten-Meter Scale Topography and Roughness of Mars Exploration Rovers Landing Sites and Martian Polar Regions. 
volume 34 of Lunar and Planetary Inst. Technical Report, pages 2084-+, Mar. 2003. [5] A. B. Ivanov and J. J. Lorre. Analysis of Mars Orbiter Camera Stereo Pairs. In Lunar and Planetary Institute Conference Abstracts, volume 33 of Lunar and Planetary Inst. Technical Report, pages 1845-+, Mar. 2002. [6] R. Jaumann, et al. The high-resolution stereo camera (HRSC) experiment on mars express: Instrument aspects and experiment conduct from interplanetary cruise through the nominal mission. Planetary and Space Science, 55(7-8):928-952, MAY 2007. [7] R. L. Kirk, et al. Ultrahigh resolution topographic mapping of mars with MRO HIRISE stereo images: Meter-scale slopes of candidate phoenix landing sites. Journal of Geophysical Research-Planets, 113, NOV 15 2008. [8] S. Lavoie, et al. Processing and analysis of mars pathfinder science data at the jet propulsion laboratory's science data processing systems section. Journal of Geophysical Research-Planets, 104(E4):8831-8852, APR 25 1999. [9] J. J. Lorre, et al. Recent developments at JPL in the application of image processing to astronomy. In D. L. Crawford, editor, Instrumentation in Astronomy III, volume 172 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, pages 394-402, 1979. [10] M. Malin, et al. Early views of the martian surface from the mars orbiter camera of mars global surveyor. Science, 279(5357):1681-1685, MAR 13 1998. [11] M. C. Malin,et al. Context camera investigation on board the mars reconnaissance orbiter. Journal of Geophysical Research-Planets, 112(E5), MAY 18 2007. [12] A. S. McEwen, et al.. Mars reconnaissance orbiter's high resolution imaging science experiment (HIRISE). Journal of Geophysical Research-Planets, 112(E5), MAY 17 2007. [13] A. Rossi, et al. Multi-spacecraft synergy with MEX HRSC and MRO SHARAD: Light-Toned Deposits in crater bulges. AGU Fall Meeting Abstracts, pages B1371+, Dec. 2008. [14] A. P. Rossi, et al. 
Stratigraphic architecture and structural control on sediment emplacement in Becquerel crater (Mars). Volume 40, Lunar and Planetary Science Institute, 2009. [15] K. L. Tanaka, et al. North polar region of mars: Advances in stratigraphy, structure, and erosional modification. Icarus, AUG 2008. [16] USGS. Planetary image processing software: ISIS3. http://isis.astrogeology.usgs.gov/
Bifocal Stereo for Multipath Person Re-Identification
NASA Astrophysics Data System (ADS)
Blott, G.; Heipke, C.
2017-11-01
This work presents an approach to person re-identification exploiting bifocal stereo cameras. Present monocular person re-identification approaches show a decreasing working distance when the image resolution is increased to obtain higher re-identification performance. We propose a novel 3D multipath bifocal approach, combining a rectilinear lens with a larger focal length for long-range distances and a fisheye lens with a smaller focal length for the near range. The person re-identification performance is at least on par with 2D re-identification approaches, but the working distance of the approach is increased, and on average 10% higher re-identification performance can be achieved in the overlapping field of view compared to a single camera. In addition, the 3D information from the overlapping field of view is exploited to resolve potential 2D ambiguities.
Design and Implementation of a Novel Portable 360° Stereo Camera System with Low-Cost Action Cameras
NASA Astrophysics Data System (ADS)
Holdener, D.; Nebiker, S.; Blaser, S.
2017-11-01
The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be processed into georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data delivers maximal deviations of 3 cm for typical distances in indoor space of 2-8 m. The automatic computation of coloured point clouds from the stereo pairs is also demonstrated.
Murray, John B; Muller, Jan-Peter; Neukum, Gerhard; Werner, Stephanie C; van Gasselt, Stephan; Hauber, Ernst; Markiewicz, Wojciech J; Head, James W; Foing, Bernard H; Page, David; Mitchell, Karl L; Portyankina, Ganna
2005-03-17
It is thought that the Cerberus Fossae fissures on Mars were the source of both lava and water floods two to ten million years ago. Evidence for the resulting lava plains has been identified in eastern Elysium, but seas and lakes from these fissures and previous water flooding events were presumed to have evaporated and sublimed away. Here we present High Resolution Stereo Camera images from the European Space Agency Mars Express spacecraft that indicate that such lakes may still exist. We infer that the evidence is consistent with a frozen body of water, with surface pack-ice, around 5 degrees north latitude and 150 degrees east longitude in southern Elysium. The frozen lake measures about 800 x 900 km in lateral extent and may be up to 45 metres deep--similar in size and depth to the North Sea. From crater counts, we determined its age to be 5 +/- 2 million years old. If our interpretation is confirmed, this is a place that might preserve evidence of primitive life, if it has ever developed on Mars.
Geometrical distortion calibration of the stereo camera for the BepiColombo mission to Mercury
NASA Astrophysics Data System (ADS)
Simioni, Emanuele; Da Deppo, Vania; Re, Cristina; Naletto, Giampiero; Martellato, Elena; Borrelli, Donato; Dami, Michele; Aroldi, Gianluca; Ficai Veltroni, Iacopo; Cremonese, Gabriele
2016-07-01
The ESA-JAXA mission BepiColombo, to be launched in 2018, is devoted to the observation of Mercury, the innermost planet of the Solar System. SIMBIO-SYS is its remote sensing suite, which consists of three instruments: the High Resolution Imaging Channel (HRIC), the Visible and Infrared Hyperspectral Imager (VIHI), and the Stereo Imaging Channel (STC). The latter will provide the global three-dimensional reconstruction of the Mercury surface, and it represents the first push-frame stereo camera on board a space satellite. Based on a new telescope design, STC combines the advantages of a compact single-detector camera with the convenience of a double-direction acquisition system; this solution minimizes mass and volume while performing push-frame imaging acquisition. The shared camera sensor is divided into six portions: four are covered with suitable filters; the other two, one looking forward and one backward with respect to the nadir direction, are covered with a panchromatic filter supplying stereo image pairs of the planet surface. The main STC scientific requirements are to reconstruct the Mercury surface in 3D with a vertical accuracy better than 80 m and to perform global imaging with a grid size of 65 m along-track at the periherm. The scope of this work is to present the on-ground geometric calibration pipeline for this original instrument. The selected STC off-axis configuration required the development of a new distortion map model. Additional considerations are connected to the detector, a Si-PIN hybrid CMOS, which is characterized by a high fixed pattern noise. This had a great impact on the pre-calibration phases, compelling the use of an uncommon approach to the definition of the spot centroids in the distortion calibration process. This work presents the results obtained during the calibration of STC concerning the distortion analysis at three different temperatures. These results are then used to define the corresponding distortion model of the camera.
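The 80 m vertical accuracy requirement can be related to the imaging geometry through standard stereo error propagation: a matching error of sigma_p pixels on the ground maps to a height error of roughly GSD * sigma_p / (B/H). The sketch below is purely illustrative and not from the paper; the 0.5 px matching error is an assumed value, while the 65 m grid size and the ±20° channel separation come from the abstract.

```python
import math

def stereo_height_error(gsd_m, sigma_px, base_to_height):
    # Height error ~ ground matching error divided by the base-to-height
    # ratio: dZ = (GSD * sigma_p) / (B/H).
    return gsd_m * sigma_px / base_to_height

# STC-like geometry: two channels at +/-20 deg from nadir
b_over_h = 2 * math.tan(math.radians(20))        # ~0.73
err = stereo_height_error(65.0, 0.5, b_over_h)   # assumed 0.5 px matching error
print(err)                                       # well below the 80 m requirement
```

With these assumed numbers the expected height error is a few tens of metres, consistent with (but not a derivation of) the stated 80 m requirement.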
Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities. Epipolar images are assumed as the input to this algorithm. The epipolar geometry of linear array scanners is not a straight line, as it is in the case of frame cameras. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model or ground control points. In this paper we propose a new epipolar resampling method which works without the need for this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering stereo pairs. The original images are also divided into small tiles. In this way, by omitting the need for extra information, the speed of the matching algorithm is increased and the demand for temporary memory is decreased. Our experiments on a GeoEye-1 stereo pair captured over Qom city in Iran demonstrate that the epipolar images are generated with sub-pixel accuracy.
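The RPC-free idea of recovering epipolar geometry purely from matched features can be illustrated, in the simpler frame-camera setting, with the classical normalized eight-point algorithm. This is a generic sketch, not the authors' pushbroom method; the camera parameters and points below are invented for the demonstration.

```python
import numpy as np

def normalize_pts(pts):
    # Hartley normalization: center points, scale mean distance to sqrt(2)
    c = pts.mean(axis=0)
    d = np.linalg.norm(pts - c, axis=1).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ ph.T).T, T

def eight_point(x1, x2):
    # Normalized eight-point estimate of the fundamental matrix F
    n1, T1 = normalize_pts(x1)
    n2, T2 = normalize_pts(x2)
    A = np.column_stack([
        n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
        n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
        n1[:, 0], n1[:, 1], np.ones(len(n1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)          # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    F = T2.T @ F @ T1                    # undo normalization
    return F / np.linalg.norm(F)

# Synthetic check: two hypothetical convergent cameras viewing 3D points
rng = np.random.default_rng(1)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
a = np.radians(5)
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
t = np.array([1.0, 0, 0])
X = np.column_stack([rng.uniform(-1, 1, 20), rng.uniform(-1, 1, 20),
                     rng.uniform(4, 8, 20)])
p1 = (K @ X.T).T
x1 = p1[:, :2] / p1[:, 2:]
p2 = (K @ (X @ R.T + t).T).T
x2 = p2[:, :2] / p2[:, 2:]
F = eight_point(x1, x2)
h1 = np.hstack([x1, np.ones((20, 1))])
h2 = np.hstack([x2, np.ones((20, 1))])
residual = np.abs(np.einsum('ij,jk,ik->i', h2, F, h1)).max()
```

For noiseless correspondences the epipolar residual x2' F x1 is numerically zero; the pushbroom case treated in the paper replaces this single global model with locally valid approximations per tile.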
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-03-02
Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
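The optimization stage builds on the classical Lambertian photometric stereo relation I = L (albedo * n); with three or more known light directions the scaled normal follows from per-pixel least squares. Below is a minimal generic sketch of that relation only; the spectral channel tangling and the CNN initialization described in the abstract are not modelled, and the light directions are invented.

```python
import numpy as np

def photometric_stereo(I, L):
    # Lambertian model: I (n_lights x n_pixels) = L (n_lights x 3) @ b,
    # with b = albedo * normal. Solve per-pixel least squares for b.
    b, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(b, axis=0)
    normals = b / np.maximum(albedo, 1e-12)
    return normals, albedo

# Three hypothetical unit light directions (rows)
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.8660254],
              [0.0, 0.5, 0.8660254]])
# Two pixels with known unit normals (columns), unit albedo
true_n = np.array([[0.0, 0.6],
                   [0.0, 0.0],
                   [1.0, 0.8]])
I = L @ true_n                  # noiseless rendered intensities
normals, albedo = photometric_stereo(I, L)
```

With noiseless data the normals are recovered exactly; in the multi-spectral single-shot setting the per-channel mixing makes this system ill-posed, which is why the paper needs a CNN-predicted initial normal.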
MISR Stereo-heights of Grassland Fire Smoke Plumes in Australia
NASA Astrophysics Data System (ADS)
Mims, S. R.; Kahn, R. A.; Moroney, C. M.; Gaitley, B. J.; Nelson, D. L.; Garay, M. J.
2008-12-01
Plume heights from wildfires are used in climate modeling to predict and understand trends in aerosol transport. This study examines whether smoke from grassland fires in the desert region of Western and central Australia ever rises above the relatively stable atmospheric boundary layer and accumulates in higher layers of relative atmospheric stability. Several methods for deriving plume heights from the Multi-angle Imaging SpectroRadiometer (MISR) instrument are examined for fire events during the summer 2000 and 2002 burning seasons. Using MISR's multi-angle stereo-imagery from its three near-nadir-viewing cameras, an automatic algorithm routinely derives the stereo-heights above the geoid of the level-of-maximum-contrast for the entire global data set, which often correspond to the heights of clouds and aerosol plumes. Most of the fires that occur in the cases studied here are small, diffuse, and difficult to detect. To increase the signal from these thin hazes, the MISR enhanced stereo product that computes stereo heights from the most steeply viewing MISR cameras is used. For some cases, a third approach to retrieving plume heights from MISR stereo imaging observations, the MISR Interactive Explorer (MINX) tool, is employed to help differentiate between smoke and cloud. To provide context and to search for correlative factors, stereo-heights are combined with data providing fire strength from the Moderate-resolution Imaging Spectroradiometer (MODIS) instrument, atmospheric structure from the NCEP/NCAR Reanalysis Project, surface cover from the Australia National Vegetation Information System, and forward and backward trajectories from the NOAA HYSPLIT model. Although most smoke plumes concentrate in the near-surface boundary layer, as expected, some appear to rise higher. These findings suggest that a closer examination of grassland fire energetics may be warranted.
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision, which has broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction and stereo matching. In the binocular stereo camera calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After the feature point matching is completed, the correspondence between matching points and 3D object points can be built using the calibrated camera parameters, which yields the 3D information.
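As a toy illustration of the disparity search underlying matchers such as SGBM: the sketch below is plain local SAD block matching, far simpler than OpenCV's semi-global StereoSGBM, and the image pair is synthetic.

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, win=2):
    # Local stereo matching: for each pixel choose the disparity d that
    # minimizes the sum of absolute differences over a square window,
    # comparing left[x] against right[x - d].
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win, w - win):
            ref = left[y - win:y + win + 1, x - win:x + win + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - win) + 1):
                cand = right[y - win:y + win + 1, x - d - win:x - d + win + 1]
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right view is the left texture shifted by 4 pixels
rng = np.random.default_rng(0)
tex = rng.random((32, 40))
left, right = tex[:, :32], tex[:, 4:36]   # true disparity = 4 everywhere
disp = sad_disparity(left, right)
```

SGBM improves on this purely local winner-take-all choice by aggregating matching costs along multiple image directions, which stabilizes weakly textured regions.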
NASA Astrophysics Data System (ADS)
Hu, H.; Wu, B.
2017-07-01
The Narrow-Angle Camera (NAC) on board the Lunar Reconnaissance Orbiter (LRO) comprises a pair of closely attached high-resolution push-broom sensors, in order to improve the swath coverage. However, the two image sensors do not share the same lenses and cannot be modelled geometrically using a single physical model. Thus, previous works on dense matching of stereo pairs of NAC images would generally create two to four stereo models, each with an irregular and overlapping region of varying size. Semi-Global Matching (SGM) is a well-known dense matching method and has been widely used for image-based 3D surface reconstruction. SGM is a global matching algorithm relying on global inference in a larger context, rather than on individual pixels, to establish stable correspondences. The stereo configuration of LRO NAC images causes severe problems for image matching methods such as SGM, which emphasize a global matching strategy. Aiming at using SGM for matching LRO NAC stereo pairs for precise 3D surface reconstruction, this paper presents a coupled epipolar rectification method for LRO NAC stereo images, which merges the image pair in the disparity space so that only one stereo model needs to be estimated. For a stereo pair (four NAC images), the method starts with boresight calibration by finding correspondences in the small overlapping strip between each pair of NAC images and bundle adjustment of the stereo pair, in order to remove the vertical disparities. Then, the dominant direction of the images is estimated by iteratively projecting the center of the coverage area to the reference image and back-projecting it to the bounding box plane determined by the image orientation parameters. The dominant direction determines an affine model, by which the pair of NAC images are warped onto the object space with a given ground resolution; in the meantime, a mask is produced indicating the owner of each pixel. 
SGM is then used to generate a disparity map for the stereo pair, and each correspondence is transformed back to its owner image; 3D points are then derived through photogrammetric space intersection. Experimental results reveal that the proposed method is able to reduce the gaps and inconsistencies caused by the inaccurate boresight offsets between the two NAC cameras and by the irregular overlapping regions, and finally to generate precise and consistent 3D surface models from the NAC stereo images automatically.
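The final photogrammetric space intersection step, recovering a 3D point from a matched correspondence and two known camera models, can be sketched with standard linear (DLT) triangulation. The frame-camera projection matrices below are invented for the demonstration and are not the authors' pushbroom sensor models.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear two-view triangulation: stack the DLT constraints
    # u * (P[2] @ X) - P[0] @ X = 0, v * (P[2] @ X) - P[1] @ X = 0
    # for both views and take the SVD null vector.
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def project(P, X):
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]

K = np.array([[700, 0, 256], [0, 700, 256], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # reference camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])   # 0.5 m baseline
X_true = np.array([0.2, -0.1, 5.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

For a noiseless correspondence the estimated point matches the true one to machine precision; with real disparities the residual of this intersection is a useful per-point quality measure.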
3D display for enhanced tele-operation and other applications
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Pezzaniti, J. Larry; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Bodenhamer, Andrew; Pettijohn, Bradley; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott
2010-04-01
In this paper, we report on the use of a 3D vision field upgrade kit for the TALON robot consisting of a replacement flat panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The upgrade kit comprises a replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
Augmented reality glass-free three-dimensional display with the stereo camera
NASA Astrophysics Data System (ADS)
Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display based on a stereo camera, used for presenting parallax contents from different angles with a lenticular lens array, is proposed. Compared with the previous implementation of AR techniques based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can get abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on a stereo camera can realize AR glass-free 3D display, and both the virtual objects and the real scene have realistic and obvious stereo performance.
Massive stereo-based DTM production for Mars on cloud computers
NASA Astrophysics Data System (ADS)
Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Xiong, Si-Ting; Putri, A. R. D.; Walter, S. H. G.; Veitch-Michaelis, J.; Yershov, V.
2018-05-01
Digital Terrain Model (DTM) creation is essential to improving our understanding of the formation processes of the Martian surface. Although there have been previous demonstrations of open-source or commercial planetary 3D reconstruction software, planetary scientists are still struggling with creating good quality DTMs that meet their science needs, especially when there is a requirement to produce a large number of high quality DTMs using "free" software. In this paper, we describe a new open source system to overcome many of these obstacles by demonstrating results in the context of issues found from experience with several planetary DTM pipelines. We introduce a new fully automated multi-resolution DTM processing chain for NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) stereo processing, called the Co-registration Ames Stereo Pipeline (ASP) Gotcha Optimised (CASP-GO), based on the open source NASA ASP. CASP-GO employs tie-point based multi-resolution image co-registration, and Gotcha sub-pixel refinement and densification. The CASP-GO pipeline is used to produce planet-wide CTX and HiRISE DTMs that guarantee global geo-referencing compliance with respect to the High Resolution Stereo Camera (HRSC), and thence to the Mars Orbiter Laser Altimeter (MOLA), providing refined stereo matching completeness and accuracy. All software and good quality products introduced in this paper are being made open-source to the planetary science community through collaboration with NASA Ames, the United States Geological Survey (USGS) and the Jet Propulsion Laboratory (JPL), Advanced Multi-Mission Operations System (AMMOS) Planetary Data System (PDS) Pipeline Service (APPS-PDS4), as well as browseable and visualisable through the iMars web based Geographic Information System (webGIS) system.
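Tie-point based co-registration of the kind CASP-GO performs reduces, in its simplest form, to a least-squares transform fit between matched point sets. A minimal 2D affine version is sketched below; this is a generic illustration, not the actual CASP-GO implementation, and the misregistration parameters are invented.

```python
import numpy as np

def fit_affine_2d(src, dst):
    # Least-squares 2D affine transform: dst ~ src @ A.T + b, solved with
    # homogeneous source coordinates [x, y, 1].
    M = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)
    return params.T        # 2x3 matrix [A | b]

rng = np.random.default_rng(2)
src = rng.uniform(0, 1000, (50, 2))                           # tie points in image 1
A_true = np.array([[1.01, 0.02, 5.0],                         # hypothetical
                   [-0.02, 0.99, -3.0]])                      # misregistration
dst = src @ A_true[:, :2].T + A_true[:, 2]                    # tie points in image 2
A_est = fit_affine_2d(src, dst)
```

In practice such a fit is wrapped in an outlier-rejection loop, since automatically matched tie points always contain blunders.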
Bae, Sam Y; Korniski, Ronald J; Shearn, Michael; Manohara, Harish M; Shahinian, Hrayr
2017-01-01
High-resolution three-dimensional (3-D) imaging (stereo imaging) by endoscopes in minimally invasive surgery, especially in space-constrained applications such as brain surgery, is one of the most desired capabilities. Such capability exists at larger than 4-mm overall diameters. We report the development of a stereo imaging endoscope of 4-mm maximum diameter, called the Multiangle, Rear-Viewing Endoscopic Tool (MARVEL), that uses a single-lens system with complementary multibandpass filter (CMBF) technology to achieve 3-D imaging. In addition, the system is endowed with the capability to pan from side to side over an angle of [Formula: see text], which is another unique aspect of MARVEL for such a class of endoscopes. The design and construction of a single-lens, CMBF aperture camera with integrated illumination to generate 3-D images, and the actuation mechanism built into it, are summarized.
Auto-converging stereo cameras for 3D robotic tele-operation
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Aycock, Todd; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow operator control of the camera convergence angle. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The auto-convergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
Left Panorama of Spirit's Landing Site
NASA Technical Reports Server (NTRS)
2004-01-01
This is a version of the first 3-D stereo image from the rover's navigation camera, showing only the view from the left stereo camera onboard the Mars Exploration Rover Spirit. The left and right camera images are combined to produce a 3-D image.
Camera calibration method of binocular stereo vision based on OpenCV
NASA Astrophysics Data System (ADS)
Zhong, Wanzhen; Dong, Xiaona
2015-10-01
Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, especially considering the influence of camera lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can reach the requirements of robot binocular stereo vision.
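The radial and decentering terms mentioned above follow the Brown-Conrady model used by OpenCV. A compact sketch of the forward distortion mapping on normalized image coordinates (the coefficient values in the check are illustrative, not calibration results):

```python
def distort(x, y, k1, k2, p1, p2):
    # OpenCV-style distortion on normalized coordinates:
    # radial terms k1, k2 and decentering (tangential) terms p1, p2.
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the mapping is the identity
assert distort(0.1, 0.2, 0, 0, 0, 0) == (0.1, 0.2)
```

Calibration inverts this relationship: given checkerboard corner observations, routines such as OpenCV's calibrateCamera estimate k1, k2, p1, p2 together with the intrinsic matrix.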
The Beagle 2 Stereo Camera System: Scientific Objectives and Design Characteristics
NASA Astrophysics Data System (ADS)
Griffiths, A.; Coates, A.; Josset, J.; Paar, G.; Sims, M.
2003-04-01
The Stereo Camera System (SCS) will provide wide-angle (48 degree) multi-spectral stereo imaging of the Beagle 2 landing site in Isidis Planitia with an angular resolution of 0.75 milliradians. Based on the SpaceX Modular Micro-Imager, the SCS is composed of twin cameras (each with a 1024 by 1024 pixel frame transfer CCD) and twin filter wheel units (with a combined total of 24 filters). The primary mission objective is to construct a digital elevation model of the area in reach of the lander's robot arm. The SCS specifications and the following baseline studies are described: panoramic RGB colour imaging of the landing site and panoramic multi-spectral imaging at 12 distinct wavelengths to study the mineralogy of the landing site; solar observations to measure water vapour absorption and the atmospheric dust optical density. Also envisaged are multi-spectral observations of Phobos & Deimos (observations of the moons relative to background stars will be used to determine the lander's location and orientation relative to the Martian surface), monitoring of the landing site to detect temporal changes, observation of the actions and effects of the other PAW experiments (including rock texture studies with a close-up lens) and collaborative observations with the Mars Express orbiter instrument teams. Due to be launched in May of this year, the total system mass is 360 g, the required volume envelope is 747 cm^3 and the average power consumption is 1.8 W. A 10 Mbit/s RS422 bus connects each camera to the lander common electronics.
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2016-12-01
A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto the two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.
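The four-mirror adapter records the two viewing directions side by side on one sensor, so the pseudo-stereo pair is recovered simply by cropping the two halves of each frame. A trivial sketch of that splitting step (the array layout is an assumption, not taken from the paper):

```python
import numpy as np

def split_virtual_views(frame):
    # Each half of the single-sensor image corresponds to one virtual
    # camera of the mirror-based pseudo-stereo rig.
    w = frame.shape[1]
    return frame[:, :w // 2], frame[:, w // 2:]

frame = np.arange(6 * 8).reshape(6, 8)   # stand-in for a sensor image
left_view, right_view = split_virtual_views(frame)
```

After splitting, the two virtual views are processed exactly like a conventional binocular stereo-DIC pair, with the mirror geometry absorbed into the calibrated extrinsics.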
Research on the feature set construction method for spherical stereo vision
NASA Astrophysics Data System (ADS)
Zhu, Junchao; Wan, Li; Röning, Juha; Feng, Weijia
2015-01-01
Spherical stereo vision is a kind of stereo vision system built with fish-eye lenses, which requires stereo algorithms that conform to the spherical model. Epipolar geometry is the theory describing the relationship of the two imaging planes of the cameras in a stereo vision system based on the perspective projection model. However, an epipolar line in an uncorrected fish-eye image is not a straight line but an arc which intersects at the poles: a polar curve. In this paper, the theory of nonlinear epipolar geometry is explored and a method of nonlinear epipolar rectification is proposed to eliminate the vertical parallax between two fish-eye images. Maximally Stable Extremal Region (MSER) detection uses grayscale as the independent variable and takes local extrema of the area variation as the detection result. It has been demonstrated in the literature that MSER depends only on the gray-level variations of an image, not on local structural characteristics or image resolution. Here, MSER is combined with the nonlinear epipolar rectification method proposed in this paper. The intersection of the rectified epipolar curve and the corresponding MSER region is determined as the feature set of the spherical stereo vision. Experiments show that this study achieves the expected results.
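Fish-eye geometry of the sort discussed here is commonly approximated by the equidistant model r = f * theta; this is an illustrative choice, and the paper's actual spherical model may differ. Under it, projection and unprojection of a viewing ray are:

```python
import numpy as np

def fisheye_project(ray, f):
    # Equidistant fish-eye model: image radius r = f * theta, with theta
    # the angle between the ray and the optical axis (z).
    x, y, z = ray
    theta = np.arctan2(np.hypot(x, y), z)
    phi = np.arctan2(y, x)
    return f * theta * np.cos(phi), f * theta * np.sin(phi)

def fisheye_unproject(u, v, f):
    # Inverse mapping from an image point back to a unit viewing ray
    theta = np.hypot(u, v) / f
    phi = np.arctan2(v, u)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

ray = np.array([0.3, 0.2, 0.9]) / np.linalg.norm([0.3, 0.2, 0.9])
u, v = fisheye_project(ray, f=300.0)
ray_back = fisheye_unproject(u, v, f=300.0)
```

Because this mapping is nonlinear in the ray direction, the image of an epipolar plane is a curve rather than a line, which is exactly why the rectification described in the abstract must be nonlinear.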
NASA Astrophysics Data System (ADS)
McCord, T. B.; Combe, J.-P.; Hayne, P. O.
We are investigating the composition of the Martian surface partly by mapping the small spatial variations of water ice and salt minerals using the spectral images provided by the High Resolution Stereo Camera (HRSC). In order to identify the main mineral components, high spectral resolution data from the Observatoire pour la Mineralogie, l'Eau, les Glaces et l'Activite (OMEGA) imaging spectrometer are used. The joint analysis of these two datasets makes the most of their respective abilities and, because of that, it requires a close agreement of their calibration [1]. The first part of this work is a comparison of HRSC and OMEGA measurements, an exploration of atmospheric effects and checks of the calibration. Then, an attempt to detect and map quantitatively at high spatial resolution (1) water ice, both at the poles and in equatorial regions, and (2) salt minerals is performed by exploring the spectral types evidenced in HRSC color data. For a given region, these two materials do or could represent additional endmember compositional units detectable with HRSC, in addition to the basic units so far: 1) dark rock (basalt) and 2) red rock (iron oxide-rich material) [1]. Both materials have also been reported as detected by OMEGA, but at much lower spatial resolution than HRSC. An ice mapping of the north polar regions is performed with OMEGA data by using a spectral index calibrated to ice fraction using a set of linear combinations of various categories of materials with ice. In addition, a linear spectral unmixing model is used on HRSC data. Both ice fraction maps produce similar quantitative results, allowing us to interpret HRSC data at their full spatial resolution. Low-latitude sites are also explored where past but recent glacial activities have been reported as possible evidence of current water ice. This includes looking for fresh frost and changes with time. 
The salt detection with HRSC first focused on the Candor Chasma area, where salts have been reported using OMEGA [2]. The present work extends the analysis to other regions in order to better constrain the general geology and climate of Mars. References: [1] McCord T. B., et al. (2006), The Mars Express High Resolution Stereo Camera spectrophotometric data: Characteristics and science analysis, JGR, submitted. [2] Gendrin, A., N. Mangold, J.-P. Bibring, Y. Langevin, B. Gondet, F. Poulet, G. Bonello, C. Quantin, J. Mustard, R. Arvidson, S. LeMouelic (2005), Sulfates in Martian layered terrains: The OMEGA/Mars Express view, Science, 307, 1587-1591.
Indoor calibration for stereoscopic camera STC: a new method
NASA Astrophysics Data System (ADS)
Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.
2017-11-01
In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for validating the 3D reconstruction of a planetary surface from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of the surface of Mercury. The generation of a DTM of the observed features is based on the processing of the acquired images and on knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC requires pre-flight verification of its actual capability to obtain elevation information from stereo pairs: a stereo validation setup providing an indoor reproduction of the in-flight observing conditions of the instrument gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a single sensor. Its optical model is based on a brand-new concept that minimizes mass and volume and allows push-frame imaging. This model required the definition of a new calibration pipeline to test the reconstruction method in a controlled environment. An ad hoc indoor set-up has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will deal with sources/targets essentially placed at infinity. This auxiliary indoor setup makes it possible, on the one hand, to rescale the stereo reconstruction problem from the in-flight operating distance of 400 km down to about 1 m in the lab; on the other hand, it allows different viewing angles to be replicated for the considered targets. Neglecting the curvature of Mercury for simplicity, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir.
The indoor simulation of the SC trajectory can therefore be provided by two rotation stages, generating a twin of the real system with the same stereo parameters but at a different scale. The set of acquired images is used to obtain a 3D reconstruction of the target: the depth information retrieved from stereo reconstruction, together with the known features of the target, allows an evaluation of the stereo system performance in terms of both horizontal resolution and vertical accuracy. To verify the 3D reconstruction capabilities of STC by means of this stereo validation set-up, the lab target surface must provide a reference, i.e. it must be known with an accuracy better than that required of the 3D reconstruction itself. For this reason, the rock samples carefully selected as lab targets have been measured with a suitably accurate 3D laser scanner. The paper presents this method in detail, analyzing all the choices adopted to bring such a complex system down to the indoor calibration solution.
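The rescaling described above can be summarized with two numbers; a minimal sketch, pure arithmetic using only the figures quoted in the abstract:

```python
# Scale factor between the in-flight STC geometry and its indoor replica,
# and the spacecraft rotation that the two lab rotation stages reproduce.
flight_distance_m = 400e3   # periherm observing distance quoted in the text
lab_distance_m = 1.0        # indoor target distance, "almost 1 meter"
scale = flight_distance_m / lab_distance_m

# Each channel looks 20 deg off nadir, so observing the same surface patch
# with both channels corresponds to rotating the spacecraft by twice that.
stereo_rotation_deg = 2 * 20.0

print(scale, stereo_rotation_deg)
```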
Indoor Calibration for Stereoscopic Camera STC, A New Method
NASA Astrophysics Data System (ADS)
Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.
2014-10-01
NASA Astrophysics Data System (ADS)
Vest Sørensen, Erik; Pedersen, Asger Ken
2017-04-01
Digital photogrammetry is used to map important volcanic marker horizons within the Nuussuaq Basin, West Greenland. We use a combination of oblique stereo images acquired from a helicopter with handheld cameras and traditional aerial photographs. The oblique imagery consists of scanned stereo photographs acquired with analogue cameras in the 1990s and newer digital images acquired with high-resolution digital consumer cameras. The photogrammetric software packages SOCET SET and 3D Stereo Blend are used to control the seamless movement between stereo models at different scales and viewing angles, and the mapping is done stereoscopically using 3D monitors and human stereopsis. The approach allows us to map in three dimensions three characteristic marker horizons (the Tunoqqu, Kûgánguaq and Qordlortorssuaq Members) within the picritic Vaigat Formation. They formed toward the end of the same volcanic episode and are believed to be closely related in time. They formed an approximately coherent sub-horizontal surface, the Tunoqqu Surface, which at the time of formation covered more than 3100 km2 on Disko and Nuussuaq. Our mapping shows that the Tunoqqu Surface is now segmented into areas of different elevation and structural trend as a result of later tectonic deformation. This is most notable on Nuussuaq, where the western part is elevated and in places highly faulted. In western Nuussuaq the surface has been uplifted and faulted so that it now forms an asymmetric anticline. The flanks of the anticline coincide with two N-S oriented pre-Tunoqqu extensional faults. The deformation of the Tunoqqu Surface could be explained by inversion of older extensional faults under an overall E-W directed compressive regime in the late Paleocene.
Optical performance analysis of plenoptic camera systems
NASA Astrophysics Data System (ADS)
Langguth, Christin; Oberdörster, Alexander; Brückner, Andreas; Wippermann, Frank; Bräuer, Andreas
2014-09-01
Adding an array of microlenses in front of the sensor transforms a conventional camera into one that captures both spatial and angular information within a single shot. This plenoptic camera is capable of obtaining depth information and providing it for a multitude of applications, e.g. artificial re-focusing of photographs. Without the need for active illumination, it represents a compact and fast optical 3D acquisition technique with reduced effort in system alignment. Since the extent of the aperture limits the range of detected angles, the observed parallax is reduced compared with common stereo imaging systems, which results in decreased depth resolution. Moreover, the gain in angular information implies degraded spatial resolution. This trade-off requires a careful choice of the optical system parameters. We present a comprehensive assessment of the possible degrees of freedom in the design of plenoptic systems. Utilizing a custom-built simulation tool, the optical performance is quantified with respect to particular starting conditions. Furthermore, a plenoptic camera prototype is demonstrated in order to verify the predicted optical characteristics.
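The spatial-angular trade-off can be illustrated with a toy sampling budget; the sensor and microlens numbers below are illustrative assumptions, not values from the paper:

```python
# With a microlens array, the pixels behind each lenslet sample the angular
# dimension, so the rendered spatial resolution drops by the same factor.
sensor_px = 4000        # pixels across the sensor (assumed)
px_per_lenslet = 10     # pixels behind each microlens (assumed)

spatial_samples = sensor_px // px_per_lenslet  # effective image width
angular_samples = px_per_lenslet               # distinct view directions
print(spatial_samples, angular_samples)
```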
High-definition television evaluation for remote handling task performance
NASA Astrophysics Data System (ADS)
Fujita, Y.; Omori, E.; Hayashi, S.; Draper, J. V.; Herndon, J. N.
Described are experiments designed to evaluate the impact of HDTV (High-Definition Television) on the performance of typical remote tasks. The experiments described in this paper compared the performance of four operators using HDTV with their performance while using other television systems. The experiments included four television systems: (1) high-definition color television, (2) high-definition monochromatic television, (3) standard-resolution monochromatic television, and (4) standard-resolution stereoscopic monochromatic television. The stereo system accomplished stereoscopy by displaying two cross-polarized images, one reflected by a half-silvered mirror and one seen through the mirror. Observers wore spectacles with cross-polarized lenses so that the left eye received only the view from the left camera and the right eye received only the view from the right camera.
The PanCam instrument on the 2018 Exomars rover: Scientific objectives
NASA Astrophysics Data System (ADS)
Jaumann, Ralf; Coates, Andrew; Hauber, Ernst; Hoffmann, Harald; Schmitz, Nicole; Le Deit, Laetitia; Tirsch, Daniela; Paar, Gerhard; Griffiths, Andrew
2010-05-01
The ExoMars Panoramic Camera System is an imaging suite of three camera heads to be mounted on the ExoMars rover's mast, with the boresight 1.8 m above ground. As of the ExoMars Pasteur Payload Design Review (PDR) in 2009, PanCam consists of two identical wide-angle cameras (WAC) with fixed-focal-length lenses, and a high-resolution camera (HRC) with an automatic focus mechanism, placed adjacent to the right WAC. The WAC stereo pair provides binocular vision for stereoscopic studies as well as 12 filter positions (per camera) for stereoscopic colour imaging and scientific multispectral studies. The stereo baseline of the pair is 500 mm. The two WACs have 22 mm focal length, f/10 lenses that illuminate detectors with 1024 × 1024 pixels. The WAC lenses are fixed, with the optimal focus set to 4 m and a focus range from 1.2 m (corresponding to the nearest view of the calibration target on the rover deck) to infinity. The HRC is able to focus between 0.9 m (the distance to a drill core on the rover's sample tray) and infinity. The instantaneous fields of view of WAC and HRC are 580 μrad/pixel and 83 μrad/pixel, respectively. The corresponding resolutions (in mm/pixel) at a distance of 2 m are 1.2 (WAC) and 0.17 (HRC); at 100 m they are 58 (WAC) and 8.3 (HRC). WAC and HRC will be geometrically co-aligned. The main scientific goal of PanCam is the geologic characterisation of the environment in which the rover is operating, providing the context for investigations carried out by the other instruments of the Pasteur payload. PanCam data will serve as a bridge between orbital data (high-resolution images from HRSC, CTX, and HiRISE, and spectrometer data from OMEGA and CRISM) and the data acquired in situ on the Martian surface.
The position of HRC on top of the rover's mast enables the detailed panoramic inspection of surface features over the full horizontal range of 360°, even at large distances, an important prerequisite for identifying the scientifically most promising targets and planning the rover's traverse. Key to the success of PanCam is the provision of data that allow the determination of rock lithology, either of boulders on the surface or of outcrops. This task requires high spatial resolution as well as colour capabilities. The stereo images provide complementary information on the three-dimensional properties (i.e. the shape) of rocks. As an example, the degree of rounding of rocks as a result of fluvial transport can reveal the erosional history of the investigated particles, with possible implications for the chronology and intensity of rock-water interaction. The identification of the lithology and geological history of rocks will strongly benefit from the co-aligned views of WAC (colour, stereo) and HRC (high spatial resolution), which will ensure that 3D and multispectral information is available together with fine-scale textural information for each scene. Stereo information is also of utmost importance for the determination of outcrop geometry (e.g., strike and dip of layered sequences), which helps in understanding the emplacement history of sedimentary and volcanic rocks (e.g., cross-bedding, unconformities, etc.). PanCam will further reveal physical soil properties such as cohesion by imaging sites where the soil is disturbed by the rover's wheels and the drill. Another essential task of PanCam is the imaging of samples (from the drill) before ingestion into the rover for further analysis by other instruments. PanCam can be tilted vertically and will also study the atmosphere (e.g., dust loading, opacity, clouds) and aeolian processes related to surface-atmosphere interactions, such as dust devils.
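The quoted resolutions follow directly from the instantaneous fields of view; a quick cross-check of the abstract's numbers:

```python
# Ground resolution per pixel = IFOV (rad/pixel) * distance, converted to mm.
def resolution_mm_per_px(ifov_urad, distance_m):
    return ifov_urad * 1e-6 * distance_m * 1e3

wac_ifov, hrc_ifov = 580.0, 83.0  # urad/pixel, from the abstract

assert round(resolution_mm_per_px(wac_ifov, 2.0), 1) == 1.2    # WAC at 2 m
assert round(resolution_mm_per_px(hrc_ifov, 2.0), 2) == 0.17   # HRC at 2 m
assert round(resolution_mm_per_px(wac_ifov, 100.0)) == 58      # WAC at 100 m
assert round(resolution_mm_per_px(hrc_ifov, 100.0), 1) == 8.3  # HRC at 100 m
```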
SU-E-J-184: Stereo Time-Of-Flight System for Patient Positioning in Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wentz, T; Gilles, M; Visvikis, D
2014-06-01
Purpose: The objective of this work is to test the advantage of using the surface acquired by two stereo Time-of-Flight (ToF) cameras, in comparison with using one camera only, for patient positioning in radiotherapy. Methods: A first step consisted of validating the use of a stereo ToF camera system for position management of a phantom mounted on a linear actuator producing very accurate and repeatable displacements. The displacements between two positions were computed from the surface point clouds acquired by either one or two cameras using an iterative closest point algorithm. A second step consisted of determining the displacements on patient datasets, with two cameras fixed on the ceiling of the radiotherapy room. Measurements were done first on a volunteer subject with fixed translations, then on patients during the normal clinical radiotherapy routine. Results: The phantom tests showed a major improvement along the lateral and depth axes for motions above 10 mm when using the stereo system instead of a single camera (Fig. 1). Patient measurements validated these results, with mean differences between real and measured displacements in the depth direction of 1.5 mm when using one camera and 0.9 mm when using two cameras (Fig. 2). In the lateral direction, a mean difference of 1 mm was obtained by the stereo system instead of 3.2 mm. Along the longitudinal axis, mean differences of 5.4 and 3.4 mm with one and two cameras, respectively, were noticed, but these measurements were still inaccurate and globally underestimated in this direction, as in the literature. Similar results were also found for patient subjects, with a mean difference reduction of 35%, 7%, and 25% for the lateral, depth, and longitudinal displacements with the stereo system. Conclusion: The addition of a second ToF camera to determine patient displacement strongly improved patient repositioning results and therefore ensures better radiation delivery.
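Once point correspondences are fixed, the displacement computation reduces to a least-squares rigid fit (the inner step of an iterative-closest-point loop). A minimal sketch on synthetic data; the point cloud and the 10 mm / 3 mm shift are illustrative, not values from the study:

```python
# Recover the rigid motion between two corresponding surface point clouds
# by a least-squares (Kabsch) fit, the core step of an ICP iteration.
import numpy as np

def rigid_fit(P, Q):
    """Return rotation R and translation t with Q ~ P @ R.T + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(0)
P = rng.uniform(-0.2, 0.2, (200, 3))          # simulated surface patch (m)
t_true = np.array([0.010, 0.0, 0.003])        # 10 mm lateral, 3 mm depth shift
Q = P + t_true                                 # shifted acquisition
R, t = rigid_fit(P, Q)
print(np.round(t * 1e3, 1))                    # recovered shift in mm
```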
NASA Astrophysics Data System (ADS)
Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua
2014-11-01
Stereo vision is key to visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks, and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes unusable. A technique providing both real-time checking and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In computing the fundamental matrix, traditional methods are sensitive to noise and cannot guarantee estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regionally weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation.
In contrast to traditional algorithms, experiments on simulated data show that the method improves the robustness and accuracy of the fundamental matrix estimation. Finally, an experiment computing the relationship of a pair of stereo cameras demonstrates the accurate performance of the algorithm.
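Steps (ii) and (iii) of the chain can be sketched on synthetic data with a known ground-truth pose; the intrinsics and pose below are illustrative, and the estimation of F from correspondences (step (i)) is replaced by its analytic value:

```python
# Fundamental matrix -> essential matrix -> external parameters, on a
# synthetic pose so the result can be checked against ground truth.
import numpy as np

def skew(v):
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

K = np.array([[800., 0., 320.],       # assumed pinhole intrinsics
              [0., 800., 240.],
              [0., 0., 1.]])
ang = np.deg2rad(5.0)                 # small relative rotation about y
R_true = np.array([[np.cos(ang), 0., np.sin(ang)],
                   [0., 1., 0.],
                   [-np.sin(ang), 0., np.cos(ang)]])
t_true = np.array([1., 0., 0.1])
t_true /= np.linalg.norm(t_true)      # translation direction (unit norm)

E_true = skew(t_true) @ R_true        # essential matrix E = [t]x R
Kinv = np.linalg.inv(K)
F = Kinv.T @ E_true @ Kinv            # stands in for the estimated F

# Step (ii): essential matrix from the fundamental matrix.
E = K.T @ F @ K

# Step (iii): the translation direction is the left null vector of E
# (E^T t = 0); the full four-candidate (R, t) decomposition plus the
# cheirality check is omitted for brevity.
U, S, Vt = np.linalg.svd(E)
t_dir = U[:, 2]
print(np.allclose(abs(t_dir @ t_true), 1.0))
```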
Test Image of Earth Rocks by Mars Camera Stereo
2010-11-16
This stereo view of terrestrial rocks combines two images taken by a testing twin of the Mars Hand Lens Imager (MAHLI) camera on NASA's Mars Science Laboratory. 3D glasses are necessary to view this image.
NASA Astrophysics Data System (ADS)
Schmitz, Nicole; Jaumann, Ralf; Coates, Andrew; Griffiths, Andrew; Hauber, Ernst; Trauthan, Frank; Paar, Gerhard; Barnes, Dave; Bauer, Arnold; Cousins, Claire
2010-05-01
Geologic context as a combination of orbital imaging and surface vision, including range, resolution, stereo, and multispectral imaging, is commonly regarded as a basic requirement for remote robotic geology and forms the first tier of any multi-instrument strategy for investigating and eventually understanding the geology of a region from a robotic platform. Missions with objectives beyond a pure geologic survey, e.g. exobiology objectives, require goal-oriented operational procedures, where the iterative process of scientific observation, hypothesis, testing, and synthesis, performed via a sol-by-sol data exchange with a remote robot, is supported by a powerful vision system. Beyond allowing a thorough geological mapping of the surface (soil, rocks and outcrops) in 3D using wide-angle stereo imagery, such a system needs to be able to provide detailed visual information on targets of interest in high resolution, thereby enabling the selection of science targets and samples for further analysis with a specialized in-situ instrument suite. Surface vision for ESA's upcoming ExoMars rover will come from a dedicated Panoramic Camera System (PanCam). As an integral part of the Pasteur payload package, the PanCam is designed to support the search for evidence of biological processes by obtaining wide-angle multispectral stereoscopic panoramic images and high-resolution RGB images from the mast of the rover [1]. The camera system will consist of two identical wide-angle cameras (WACs), which are arranged on a common pan-tilt mechanism with a fixed stereo base length of 50 cm. The WACs are complemented by a High Resolution Camera (HRC), mounted between the WACs, which allows a magnification of selected targets by a factor of ~8 with respect to the wide-angle optics.
The high-resolution images together with the multispectral and stereo capabilities of the camera will be of unprecedented quality for the identification of water-related surface features (such as sedimentary rocks) and form one key to a successful implementation of ESA's multi-level strategy for the ExoMars Reference Surface Mission. A dedicated PanCam Science Implementation Strategy is under development, which connects the PanCam science objectives and needs of the ExoMars Surface Mission with the required investigations, planned measurement approach and sequence, and connected mission requirements. First step of this strategy is obtaining geological context to enable the decision where to send the rover. PanCam (in combination with Wisdom) will be used to obtain ground truth by a thorough geomorphologic mapping of the ExoMars rover's surroundings in near and far range in the form of (1) RGB or monochromatic full (i.e. 360°) or partial stereo panoramas for morphologic and textural information and stereo ranging, (2) mosaics or single images with partly or full multispectral coverage to assess the mineralogy of surface materials as well as their weathering state and possible past or present alteration processes and (3) small-scale high-resolution information on targets/features of interest, and distant or inaccessible sites. This general survey phase will lead to the identification of surface features like outcrops, ridges and troughs and the characterization of different rock and surface units based on their morphology, distribution, and spectral and physical properties. Evidence of water-bearing minerals, water-altered rocks or even water-lain sediments seen in the large-scale wide angle images will then allow for preselecting those targets/features considered relevant for detailed analysis and definition of their geologic context. 
Detailed characterization and, subsequently, selection of those preselected targets/features for further analysis will then be enabled by color high-resolution imagery, followed by the next tier of contact instruments to enable a decision on whether or not to acquire samples for further analysis. During the following drill/analysis phase, PanCam's High Resolution Camera will characterize the sample in the sample tray and observe the sample discharge into the Core Sample Transfer Mechanism. Key parts of this science strategy have been tested under laboratory conditions in two geology blind tests [2] and during two field test campaigns in Svalbard, using simulated mission conditions, an ExoMars representative Payload (ExoMars and MSL instrument breadboards), and Mars analog settings [3, 4]. The experiences gained are being translated into operational sequences, and, together with the science implementation strategy, form a first version of a PanCam Surface Operations plan. References: [1] Griffiths, A.D. et al. (2006) International Journal of Astrobiology 5 (3): 269-275, doi:10.1017/S1473550406003387. [2] Pullan, D. et al. (2009) EPSC Abstracts, Vol. 4, EPSC2009-514. [3] Schmitz, N. et al. (2009) Geophysical Research Abstracts, Vol. 11, EGU2009-10621-2. [4] Cousins, C. et al. (2009) EPSC Abstracts, Vol. 4, EPSC2009-813.
Optical stereo video signal processor
NASA Technical Reports Server (NTRS)
Craig, G. D. (Inventor)
1985-01-01
An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected onto a respective liquid crystal light valve. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
NASA's Solar Observing Fleet Watch Comet ISON's Journey Around the Sun
2013-11-22
Comet ISON makes its appearance into the higher-resolution HI-1 camera on the STEREO-A spacecraft. The dark "clouds" coming from the right are density enhancements in the solar wind, causing all the ripples in comet Encke's tail. These kinds of solar wind interactions give us valuable information about solar wind conditions near the sun. Note: the STEREO-A spacecraft is currently located on the other side of the Sun, so it sees a totally different geometry to what we see from Earth. Credit: Karl Battams/NASA/STEREO/CIOC
3-d Modeling of Comet Borrelly's Nucleus
NASA Astrophysics Data System (ADS)
Giese, B.; Oberst, J.; Howington-Kraus, E.; Kirk, R.; Soderblom, L.; Ds1 Science Team
During the DS1 encounter with comet Borrelly, the onboard camera MICAS (Miniature Integrated Camera and Spectrometer) acquired a series of images with spectacular detail [1]. Two of the highest resolution frames (58m/pxl, 47m/pxl) formed an effective stereo pair (8 deg convergence angle), on the basis of which teams at DLR and the USGS derived topographic models. Though different approaches were used in the analysis, the results are in remarkable agreement. The horizontal resolution of the stereo models is approx. 500m, and their vertical precision is expected to be in the range of 100m-150m, but perhaps three times worse in places with low surface texture. The visible area of the elongated nucleus (long axis approx. 8km, short axis approx. 4km) is characterized by a dichotomy. The "upper" end (toward the top of the image, as conventionally displayed) is gently tilted relative to the reference image plane and shows slopes of up to 40 deg towards the limb. The other end is smaller and canted relative to the "upper" end by approx. 35 deg in the direction towards the camera. Slopes towards the limb appear to be as high as 70 deg. The presence of faults and fractures near the boundary between the two ends additionally supports the view of a dichotomy. Perhaps, the nucleus is a contact binary, which formed by a collisional event. [1] Soderblom et al. (2002), submitted to Science.
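The quoted vertical precision is consistent with the stereo geometry: for a convergence angle theta, a ground-projected matching error sigma_m gives a height error sigma_z ≈ sigma_m / tan(theta). A rough check, where the ~0.3-pixel matching error is an assumption:

```python
# Order-of-magnitude check of the 100-150 m vertical precision quoted above.
import math

gsd = 50.0                 # m/pixel, between the two frame resolutions
match_err_px = 0.3         # typical subpixel matching error (assumed)
theta = math.radians(8.0)  # stereo convergence angle (from the abstract)

sigma_z = match_err_px * gsd / math.tan(theta)
print(round(sigma_z))      # ~107 m, within the quoted 100-150 m range
```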
3D vision upgrade kit for TALON robot
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-04-01
In this paper, we report on the development of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation
NASA Technical Reports Server (NTRS)
Lee, George
1992-01-01
A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moiré, holographic, and heterodyne interferometry techniques were also examined. Stereo-cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanners and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.
NASA Astrophysics Data System (ADS)
Minamoto, Masahiko; Matsunaga, Katsuya
1999-05-01
Operator performance while using a remote-controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera slaved to the head orientation of a free-moving stereo HMD. Results showed that the head-slaved system provided the best performance.
Motorcycle detection and counting using stereo camera, IR camera, and microphone array
NASA Astrophysics Data System (ADS)
Ling, Bo; Gibson, David R. P.; Middleton, Dan
2013-03-01
Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations, and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera, and a unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with a motorcycle's exhaust pipes, which often show as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can be easily windowed out in the stereo disparity map: if the motorcyclist is detected through 3D body recognition, the motorcycle is detected. Microphones are used to detect motorcycles, which often produce low-frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize interference from background noise sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.
NASA Astrophysics Data System (ADS)
Haubeck, K.; Prinz, T.
2013-08-01
The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small-frame cameras. These monoscopic cameras make it possible to obtain close-range aerial photographs but, when an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions, carry the problem that two single aerial images do not always have the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from slightly different angles at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, the accuracy of the DTM, however, directly depends on the UAV flight altitude.
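The dependence of DTM accuracy on flight altitude noted above follows from the standard first-order stereo error model sigma_Z = (H/B) * (H/f) * sigma_p. A sketch with illustrative rig parameters; the base, focal length, and parallax error below are assumptions, not values from the paper:

```python
# First-order vertical precision of a fixed-base stereo rig flown at
# altitude H: error grows with the square of H when B and f are fixed.
def sigma_z(H, B, f, sigma_p):
    """Vertical precision; all lengths in metres."""
    return (H / B) * (H / f) * sigma_p

base = 0.2     # fixed photobase of the stereo camera (assumed)
focal = 8e-3   # 8 mm lens (assumed)
sp = 5e-6      # ~half-pixel parallax measurement error (assumed)

low = sigma_z(20.0, base, focal, sp)   # low-altitude flight
high = sigma_z(40.0, base, focal, sp)  # doubling H quadruples the error
print(round(low, 2), round(high, 2))
```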
NASA Astrophysics Data System (ADS)
Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy
2015-03-01
Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition, however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This is valid for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy. 
We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.
Analysis of Performance of Stereoscopic-Vision Software
NASA Technical Reports Server (NTRS)
Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert
2007-01-01
A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
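The down-range behaviour described above follows from the standard pinhole stereo relation; a minimal sketch of depth from disparity and first-order error propagation (the focal length and baseline below are illustrative placeholders, not the report's camera parameters):

```python
# Depth from stereo disparity, Z = f * B / d, and first-order propagation
# of a disparity error into down-range error, sigma_Z = Z^2 / (f * B) * sigma_d.
# Focal length (pixels) and baseline (meters) are illustrative placeholders.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

def downrange_error(f_px, baseline_m, z_m, sigma_d_px):
    return (z_m ** 2) / (f_px * baseline_m) * sigma_d_px

z = depth_from_disparity(1000.0, 0.3, 60.0)   # 5.0 m
err = downrange_error(1000.0, 0.3, z, 0.32)   # ~2.7 cm for a 0.32-pixel error
```

The quadratic growth of the error with range is why disparity error shows up purely as down-range error at distance.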
Double biprism arrays design using for stereo-photography of mobile phone camera
NASA Astrophysics Data System (ADS)
Sun, Wen-Shing; Chu, Pu-Yi; Chao, Yu-Hao; Pan, Jui-Wen; Tien, Chuen-Lin
2016-11-01
Mobile phones generally use a single camera to capture images, which makes it difficult to obtain a stereo image pair. Adding a biprism array makes it easy to capture the image pair, so users can take stereo images anywhere with their mobile phone simply by attaching the array, and remove it when a normal image is wanted. However, biprism arrays induce chromatic aberration. We therefore design double biprism arrays to reduce the chromatic aberration.
Airborne camera and spectrometer experiments and data evaluation
NASA Astrophysics Data System (ADS)
Lehmann, F. F.; Bucher, T.; Pless, S.; Wohlfeil, J.; Hirschmüller, H.
2009-09-01
New stereo push-broom camera systems have been developed at the German Aerospace Centre (DLR). The new small multispectral systems (Multi Functional Camerahead - MFC, Advanced Multispectral Scanner - AMS) are lightweight and compact and provide three or five RGB stereo lines of 8000, 10 000 or 14 000 pixels, which are used for stereo processing and the generation of Digital Surface Models (DSM) and near True Orthoimage Mosaics (TOM). Simultaneous acquisition with different types of MFC cameras for infrared and RGB data has been successfully tested. All spectral channels record the image data at full resolution, so pan-sharpening is not necessary. Analogous to the line-scanner data, an automatic processing chain exists for UltraCamD and UltraCamX. The different systems have been flown for different types of applications; the main fields of interest are, among others, environmental applications (flooding simulations, monitoring tasks, classification) and 3D modelling (e.g. city mapping). From the DSM and TOM data, Digital Terrain Models (DTM) and 3D city models are derived. Textures for the facades are taken from oblique orthoimages, which are created from the same input data as the TOM and the DOM. The resulting models are characterised by high geometric accuracy and the perfect fit of image data and DSM. The DLR is permanently developing and testing a wide range of sensor types and imaging platforms for terrestrial and space applications. The MFC sensors have been flown in combination with laser systems and imaging spectrometers, and special data-fusion products have been developed. These products include hyperspectral orthoimages and 3D hyperspectral data.
Novel 3D imaging techniques for improved understanding of planetary surface geomorphology.
NASA Astrophysics Data System (ADS)
Muller, Jan-Peter
2015-04-01
Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the past decade for Mars and the Moon, especially in 3D imaging of surface shape (down to resolutions of 75 cm), subsequent correction for terrain relief of imagery from orbit, and co-registration of lander and rover robotic images. We present some of the recent highlights, including 3D modelling of surface shape from the ESA Mars Express HRSC (High Resolution Stereo Camera), see [1], [2], at 30-100 m grid-spacing, co-registered with a resolution cascade of 20 m DTMs from NASA MRO stereo-CTX and 0.75 m DTMs from MRO stereo-HiRISE [3]. This has opened our eyes to the formation mechanisms of megaflooding events, such as the formation of Iani Vallis and the upstream blocky terrain, to crater lakes and receding valley cuts [4]. A comparable set of products is now available for the Moon from LROC-WA at 100 m [5] and LROC-NA at 1 m [6]. Recently, a very novel technique for the super-resolution restoration (SRR) of stacks of images has been developed at UCL [7]. The first examples shown will be of the entire MER-A Spirit rover traverse, taking a stack of 25 cm HiRISE images to generate a corridor of 5 cm SRR imagery along the rover traverse, resolving previously unresolved features such as rocks (created as a consequence of meteoritic bombardment) and ridge and valley features. This SRR technique will allow us, for ~400 areas on Mars (where 5 or more HiRISE images have been captured) and similar numbers on the Moon, to resolve sub-pixel features. Examples will be shown of how these SRR images can be employed to assist with the better understanding of surface geomorphology. Acknowledgements: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under PRoViDE grant agreement n° 312377. 
Partial support is also provided from the STFC 'MSSL Consolidated Grant' ST/K000977/1. References: [1] Gwinner, K. et al. (2010) Topography of Mars from global mapping by HRSC high-resolution digital terrain models and orthoimages: characteristics and performance. Earth and Planetary Science Letters, 294, 506-519, doi:10.1016/j.epsl.2009.11.007; [2] Gwinner, K. et al. (2015) Mars Express High Resolution Stereo Camera (HRSC) Multi-orbit Data Products: Methodology, Mapping Concepts and Performance for the first Quadrangle (MC-11E). Geophysical Research Abstracts, Vol. 17, EGU2015-13832; [3] Kim, J. and Muller, J.-P. (2009) Multi-resolution topographic data extraction from Martian stereo imagery. Planetary and Space Science, 57, 2095-2112, doi:10.1016/j.pss.2009.09.024; [4] Warner, N. H., Gupta, S., Kim, J.-R., Muller, J.-P., Le Corre, L., Morley, J., et al. (2011) Constraints on the origin and evolution of Iani Chaos, Mars. Journal of Geophysical Research, 116(E6), E06003, doi:10.1029/2010JE003787; [5] Fok, H. S., Shum, C. K., Yi, Y., Araki, H., Ping, J., Williams, J. G., et al. (2011) Accuracy assessment of lunar topography models. Earth Planets Space, 63, 15-23, doi:10.5047/eps.2010.08.005; [6] Haase, I., Oberst, J., Scholten, F., Wählisch, M., Gläser, P., Karachevtseva, I., and Robinson, M. S. (2012) Mapping the Apollo 17 landing site area based on Lunar Reconnaissance Orbiter Camera images and Apollo surface photography. Journal of Geophysical Research: Planets, 117, E00H20, doi:10.1029/2011JE003908; [7] Tao, Y. and Muller, J.-P. (2015) Supporting lander and rover operation: a novel super-resolution restoration technique. Geophysical Research Abstracts, Vol. 17, EGU2015-6925
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Schwartz, Stephen E.; Yu, Dantong
Clouds are a central focus of the U.S. Department of Energy (DOE)’s Atmospheric System Research (ASR) program and Atmospheric Radiation Measurement (ARM) Climate Research Facility, and more broadly are the subject of much investigation because of their important effects on atmospheric radiation and, through feedbacks, on climate sensitivity. Significant progress has been made by moving from a vertically pointing (“soda-straw”) to a three-dimensional (3D) view of clouds by investing in scanning cloud radars through the American Recovery and Reinvestment Act of 2009. Yet, because of the physical nature of radars, there are key gaps in ARM's cloud observational capabilities. For example, cloud radars often fail to detect small shallow cumulus and thin cirrus clouds that are nonetheless radiatively important. Furthermore, it takes five to twenty minutes for a cloud radar to complete a 3D volume scan, and clouds can evolve substantially during this period. Ground-based stereo-imaging is a promising technique to complement existing ARM cloud observation capabilities. It enables the estimation of cloud coverage, height, horizontal motion, morphology, and spatial arrangement over an extended area of up to 30 by 30 km at refresh rates greater than 1 Hz (Peng et al. 2015). With the fine spatial and temporal resolution of modern sky cameras, the stereo-imaging technique allows for the tracking of a small cumulus cloud or a thin cirrus cloud that cannot be detected by a cloud radar. With support from the DOE SunShot Initiative, the Principal Investigator (PI)’s team at Brookhaven National Laboratory (BNL) has developed some initial capability for cloud tracking using multiple distinctly located hemispheric cameras (Peng et al. 2015). To validate the ground-based cloud stereo-imaging technique, the cloud stereo-imaging field campaign was conducted at the ARM Facility’s Southern Great Plains (SGP) site in Oklahoma from July 15 to December 24. 
As shown in Figure 1, the cloud stereo-imaging system consisted of two inexpensive high-definition (HD) hemispheric cameras (each costing less than $1,500) and ARM’s Total Sky Imager (TSI). Together with other co-located ARM instrumentation, the campaign provides a promising opportunity to validate stereo-imaging-based cloud base height and, more importantly, to examine the feasibility of cloud thickness retrieval for low-view-angle clouds.
Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.
Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki
2017-12-09
Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
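The final reconstruction step named above follows the standard haze imaging model; a hedged sketch of that step only, with the disparity-based transmission map and color-line atmospheric light from the paper replaced by placeholder inputs:

```python
import numpy as np

# Haze model: I = J * t + A * (1 - t). Inverting for the defogged image:
# J = (I - A) / max(t, t_min) + A. The transmission map t and atmospheric
# light A are placeholders standing in for the paper's estimates.
def defog(image, transmission, atmos_light, t_min=0.1):
    t = np.maximum(transmission, t_min)[..., None]   # broadcast over RGB
    return (image - atmos_light) / t + atmos_light

foggy = np.full((4, 4, 3), 0.8)
t_map = np.full((4, 4), 0.5)
A = np.array([1.0, 1.0, 1.0])
clear = defog(foggy, t_map, A)   # every pixel: (0.8 - 1) / 0.5 + 1 = 0.6
```

Clamping the transmission at `t_min` is the usual guard against amplifying noise where the scene is nearly opaque with fog.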
A Self-Assessment Stereo Capture Model Applicable to the Internet of Things
Lin, Yancong; Yang, Jiachen; Lv, Zhihan; Wei, Wei; Song, Houbing
2015-01-01
The realization of the Internet of Things greatly depends on the information communication among physical terminal devices and informationalized platforms, such as smart sensors, embedded systems and intelligent networks. Playing an important role in information acquisition, sensors for stereo capture have gained extensive attention in various fields. In this paper, we concentrate on promoting such sensors in an intelligent system with self-assessment capability to deal with the distortion and impairment in long-distance shooting applications. The core design is the establishment of the objective evaluation criteria that can reliably predict shooting quality with different camera configurations. Two types of stereo capture systems—toed-in camera configuration and parallel camera configuration—are taken into consideration respectively. The experimental results show that the proposed evaluation criteria can effectively predict the visual perception of stereo capture quality for long-distance shooting. PMID:26308004
NASA Astrophysics Data System (ADS)
Caress, D. W.; Hobson, B.; Thomas, H. J.; Henthorn, R.; Martin, E. J.; Bird, L.; Rock, S. M.; Risi, M.; Padial, J. A.
2013-12-01
The Monterey Bay Aquarium Research Institute is developing a low altitude, high-resolution seafloor mapping capability that combines multibeam sonar with stereo photographic imagery. The goal is to obtain spatially quantitative, repeatable renderings of the seafloor with fidelity at scales of 5 cm or better from altitudes of 2-3 m. The initial test surveys using this sensor system are being conducted from a remotely operated vehicle (ROV). Ultimately we intend to field this survey system from an autonomous underwater vehicle (AUV). This presentation focuses on the current sensor configuration, methods for data processing, and results from recent test surveys. Bathymetry data are collected using a 400-kHz Reson 7125 multibeam sonar. This configuration produces 512 beams across a 135° wide swath; each beam has a 0.5° acrosstrack by 1.0° alongtrack angular width. At a 2-m altitude, the nadir beams have a 1.7-cm acrosstrack and 3.5 cm alongtrack footprint. Dual Allied Vision Technology GX1920 2.8 Mpixel color cameras provide color stereo photography of the seafloor. The camera housings have been fitted with corrective optics achieving a 90° field of view through a dome port. Illumination is provided by dual 100J xenon strobes. Position, depth, and attitude data are provided by a Kearfott SeaDevil Inertial Navigation System (INS) integrated with a 300 kHz RDI Doppler velocity log (DVL). A separate Paroscientific pressure sensor is mounted adjacent to the INS. The INS Kalman filter is aided by the DVL velocity and pressure data, achieving navigational drift rates less than 0.05% of the distance traveled during surveys. The sensors are mounted onto a toolsled fitted below MBARI's ROV Doc Ricketts with the sonars, cameras and strobes all pointed vertically down. During surveys the ROV flies at a 2-m altitude at speeds of 0.1-0.2 m/s. 
During a four-day R/V Western Flyer cruise in June 2013, we successfully collected multibeam and camera survey data from a 2-m altitude at three sites in the deep Monterey Canyon axis. The survey lines were spaced 1.5 m apart and were flown at speeds of 0.1-0.2 m/s while the sonars pinged at 3 Hz and the cameras operated at 0.5 Hz. All three low-altitude surveys are at ~2850 m depth and lie within the 1-m lateral resolution bathymetry of a 2009, 50-m altitude MBARI Mapping AUV survey. Site 1 has the greatest topography, being centered on a 15 m diameter, 7 m high flat boulder surrounded by an 80 m diameter, 6 m deep scour pit. Site 2 is located within a field of ~3-m high apparent sediment waves with ~80-m wavelengths. Site 0 is flat and includes chemosynthetic clam communities. At a 2 m altitude, the multibeam bathymetry swath is more than 7 m wide and the camera images are 4 m wide. Following navigation adjustment to match features in overlapping bathymetry swaths, we achieve 5-cm lateral resolution topography overlain with ~1-mm scale photographic imagery.
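The beam-footprint figures quoted above (1.7 cm acrosstrack and 3.5 cm alongtrack at a 2-m altitude) can be checked with simple flat-seafloor geometry; a small sketch:

```python
import math

# Nadir footprint of a sonar beam over a flat seafloor:
# footprint = 2 * altitude * tan(beamwidth / 2).
def nadir_footprint(altitude_m, beamwidth_deg):
    return 2.0 * altitude_m * math.tan(math.radians(beamwidth_deg) / 2.0)

across = nadir_footprint(2.0, 0.5)   # ~0.017 m: the 1.7-cm acrosstrack figure
along = nadir_footprint(2.0, 1.0)    # ~0.035 m: the 3.5-cm alongtrack figure
```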
Advances in detection of diffuse seafloor venting using structured light imaging.
NASA Astrophysics Data System (ADS)
Smart, C.; Roman, C.; Carey, S.
2016-12-01
Systematic, remote detection and high-resolution mapping of low-temperature diffuse hydrothermal venting is inefficient and not currently tractable using traditional remotely operated vehicle (ROV) mounted sensors. Preliminary results for hydrothermal vent detection using a structured light laser sensor were presented in 2011 and published in 2013 (Smart), with continual advancements occurring in the interim. As the structured light laser passes over active venting, the projected laser line effectively blurs due to the associated turbulence and density anomalies in the vent fluid. The degree of laser disturbance is captured by a camera collecting images of the laser line at 20 Hz. Advancements in the detection of the laser and fluid interaction have included extensive normalization of the collected laser data and the implementation of a support vector machine algorithm to develop a classification routine. The image data collected over a hydrothermal vent field are then labeled as seafloor, bacteria, or a location of venting. The results can then be correlated with stereo images, bathymetry and backscatter data. This sensor is a component of an ROV-mounted imaging suite which also includes stereo cameras and a multibeam sonar system. Originally developed for bathymetric mapping, the structured light laser sensor and other imaging suite components are capable of creating visual and bathymetric maps with centimeter-level resolution. Surveys are completed in a standard mowing-the-lawn pattern, covering a 30 m x 30 m area with centimeter-level resolution in under an hour. Resulting co-registered data include multibeam and structured light laser bathymetry and backscatter, stereo images, and vent detection. This system allows for efficient exploration of areas with diffuse and small point-source hydrothermal venting, increasing the effectiveness of scientific sampling and observation. 
Recent vent detection results collected during the 2013-2015 E/V Nautilus seasons will be presented. Smart, C. J. and Roman, C. and Carey, S. N. (2013) Detection of diffuse seafloor venting using structured light imaging, Geochemistry, Geophysics, Geosystems, 14, 4743-4757
NASA Astrophysics Data System (ADS)
Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.
2008-12-01
Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science community for decades, and time-series analysis of sensor data has provided important information on glacier-flow variability by detecting speed and thickness changes, tracking features, and acquiring model input. Thanks to advancements in commercial digital camera technology and increased solid-state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers and collected data at one-hour intervals continuously for more than one year at some, but not all, sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono-/stereo-photogrammetry can provide theoretical and practical fundamentals for data processing along with digital image processing techniques. Time-lapse images over periods in west Greenland exhibit various phenomena. Problems include rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor-platform drift, foxes chewing instrument cables, and ravens pecking the plastic window. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that non-metric digital cameras contain large distortions that must be compensated for precise photogrammetric use. Further, a massive number of images must be processed in a way that is sufficiently computationally efficient. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability. 
We experiment with mono and stereo photogrammetric techniques with the aid of automatic correlation matching to handle the enormous data volumes efficiently.
Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking
NASA Technical Reports Server (NTRS)
Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.
2005-01-01
This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.
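For intuition, the idealized pan/tilt pointing problem reduces to two arctangents; a toy sketch that deliberately ignores the two complications the paper actually solves (camera frames not parallel to the masthead base frame, and an off-center optical axis):

```python
import math

# Idealized mast pointing: for a target (x, y, z) in the masthead base
# frame, pan rotates about the vertical axis and tilt elevates toward the
# target. The paper's closed-form solutions additionally handle
# camera-frame misalignment and optical-axis offset, which this omits.
def point_at(x, y, z):
    pan = math.atan2(y, x)
    tilt = math.atan2(z, math.hypot(x, y))
    return pan, tilt

pan, tilt = point_at(1.0, 1.0, 1.0)   # pan = 45 degrees
```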
The research on calibration methods of dual-CCD laser three-dimensional human face scanning system
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong
2013-09-01
In this paper, building on the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select a reference camera coordinate system as the world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification. From these, the corresponding epipolar equations of the two cameras can be defined. By utilizing the trigonometric parallax method, we can measure the space point position after distortion correction and achieve stereo matching calibration between two image points. Experiments verify that this method improves accuracy while guaranteeing system stability. The stereo matching calibration has a simple, low-cost process and simplifies regular maintenance work. It can acquire 3D coordinates with only planar checkerboard calibration, without the need to design a specific standard target or use an electronic theodolite. It was found during the experiments that two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, which combines active line-laser scanning and binocular stereo vision, has the advantages of both and offers more flexible applicability. Theoretical analysis and experiments show the method is reasonable.
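After epipolar rectification, the trigonometric-parallax step mentioned above reduces to a row-aligned disparity and a closed-form back-projection; a minimal sketch with hypothetical camera numbers, not the paper's calibrated values:

```python
import numpy as np

# Rectified stereo triangulation: matched points share an image row,
# disparity d = xl - xr, depth Z = f * B / d, and X, Y follow from the
# pinhole model. Focal length (pixels) and baseline (m) are hypothetical.
def triangulate_rectified(f_px, baseline_m, xl_px, xr_px, y_px):
    d = xl_px - xr_px
    z = f_px * baseline_m / d
    return np.array([xl_px * z / f_px, y_px * z / f_px, z])

p = triangulate_rectified(800.0, 0.1, 40.0, 20.0, 10.0)   # [0.2, 0.05, 4.0]
```

Image coordinates here are taken relative to the principal point, which is exactly what the epipolar rectification and distortion correction in the paper are meant to guarantee.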
NASA Astrophysics Data System (ADS)
Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan
2018-03-01
The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50-μɛ strain errors, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.
Improved stereo matching applied to digitization of greenhouse plants
NASA Astrophysics Data System (ADS)
Zhang, Peng; Xu, Lihong; Li, Dawei; Gu, Xiaomeng
2015-03-01
The digitization of greenhouse plants is an important aspect of digital agriculture. Its ultimate aim is to reconstruct a visible and interoperable virtual plant model on the computer by using state-of-the-art image processing and computer graphics technologies. The most prominent difficulties in the digitization of greenhouse plants are how to acquire the three-dimensional shape data of greenhouse plants and how to carry out a realistic stereo reconstruction. Concerning these issues, this paper proposes an effective method for the digitization of greenhouse plants using a binocular stereo vision system. Stereo vision is a technique aiming at inferring depth information from two or more cameras; it consists of four parts: calibration of the cameras, stereo rectification, search for stereo correspondence, and triangulation. Through the final triangulation procedure, the 3D point cloud of the plant can be obtained. The proposed stereo vision system can facilitate further segmentation of plant organs such as stems and leaves; moreover, it can provide reliable digital samples for the visualization of greenhouse tomato plants.
Field Test of the ExoMars Panoramic Camera in the High Arctic - First Results and Lessons Learned
NASA Astrophysics Data System (ADS)
Schmitz, N.; Barnes, D.; Coates, A.; Griffiths, A.; Hauber, E.; Jaumann, R.; Michaelis, H.; Mosebach, H.; Paar, G.; Reissaus, P.; Trauthan, F.
2009-04-01
The ExoMars mission, as the first element of the ESA Aurora program, is scheduled to be launched to Mars in 2016. Part of the Pasteur Exobiology Payload onboard the ExoMars rover is a Panoramic Camera System ('PanCam') being designed to obtain high-resolution color and wide-angle multi-spectral stereoscopic panoramic images from the mast of the ExoMars rover. The PanCam instrument consists of two wide-angle cameras (WACs), which will provide multispectral stereo images with 34° field-of-view (FOV), and a High-Resolution RGB Channel (HRC) to provide close-up images with 5° field-of-view. For field testing of the PanCam breadboard in a representative environment, the ExoMars PanCam team joined the 6th Arctic Mars Analogue Svalbard Expedition (AMASE) 2008. The expedition took place from 4-17 August 2008 in the Svalbard archipelago, Norway, which is considered to be an excellent site, analogue to ancient Mars. 31 scientists and engineers involved in Mars exploration (among them the ExoMars WISDOM, MIMA and Raman-LIBS teams as well as several NASA MSL teams) combined their knowledge, instruments and techniques to study the geology, geophysics, biosignatures, and life forms that can be found in volcanic complexes, warm springs, subsurface ice, and sedimentary deposits. This work has been carried out by using instruments, a rover (NASA's CliffBot), and techniques that will or may be used in future planetary missions, thereby providing the capability to simulate a full mission environment in a Mars analogue terrain. Besides demonstrating PanCam's general functionality in a field environment, testing and verification of the interpretability of PanCam data for in-situ geological context determination and scientific target selection was a main objective. To process the collected data, a first version of the preliminary PanCam 3D reconstruction processing & visualization chain was used. 
Other objectives included testing and refining the operational scenario (based on the ExoMars Rover Reference Surface Mission), investigating data commonalities and data fusion potential with respect to other instruments, and collecting representative image data to evaluate various influences, such as viewing distance, surface structure, and the availability of structures at "infinity" (e.g. resolution, focus quality and associated accuracy of the 3D reconstruction). Airborne images with the HRSC-AX camera (an airborne camera with heritage from the Mars Express High Resolution Stereo Camera HRSC), collected during a flight campaign over Svalbard in June 2008, provided large-scale geological context information for all field sites.
Real-time stereo generation for surgical vision during minimal invasive robotic surgery
NASA Astrophysics Data System (ADS)
Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod
2016-03-01
This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of in-vivo live surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection using the two interlaced images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time with good speed at full HD resolution.
NASA Astrophysics Data System (ADS)
Wu, Bo; Liu, Wai Chung; Grumpe, Arne; Wöhler, Christian
2018-06-01
Lunar digital elevation models (DEMs) are important for successful lunar landing and exploration missions. Lunar DEMs are typically generated by photogrammetric or laser altimetry approaches. Photogrammetric methods require multiple stereo images of the region of interest and are not applicable where stereo coverage is unavailable. In contrast, reflectance-based shape reconstruction techniques, such as shape from shading (SfS) and shape and albedo from shading (SAfS), apply monocular images to generate DEMs with pixel-level resolution. We present a novel hierarchical SAfS method that refines a lower-resolution DEM to pixel-level resolution given a monocular image with a known light source. We also estimate the corresponding pixel-wise albedo map in the process and use it to regularize the pixel-level shape reconstruction against the low-resolution DEM. In this study, a Lunar-Lambertian reflectance model is applied to estimate the albedo map. Experiments were carried out using monocular images from the Lunar Reconnaissance Orbiter Narrow Angle Camera (LRO NAC), with a spatial resolution of 0.5-1.5 m per pixel, constrained by the Selenological and Engineering Explorer and LRO Elevation Model (SLDEM), with a spatial resolution of 60 m. The results indicate that local details are well recovered by the proposed algorithm with plausible albedo estimation. The low-frequency topographic consistency depends on the quality of the low-resolution DEM and the resolution difference between the image and the low-resolution DEM.
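One common form of the Lunar-Lambert(ian) model named above blends a Lommel-Seeliger term with a Lambert term; a sketch of that reflectance function (the phase-dependent weight below is a placeholder, and the paper's exact parameterization may differ):

```python
import math

# Lunar-Lambert reflectance: R = A * (2*L*mu0 / (mu0 + mu) + (1 - L)*mu0),
# with mu0 = cos(incidence angle), mu = cos(emission angle), albedo A, and
# a phase-dependent weight L (placeholder value here).
def lunar_lambert(albedo, mu0, mu, weight=0.5):
    return albedo * (2.0 * weight * mu0 / (mu0 + mu) + (1.0 - weight) * mu0)

r = lunar_lambert(0.2, math.cos(math.radians(30)), math.cos(math.radians(10)))
```

In SAfS-style methods, a model like this predicts image brightness from a candidate surface slope and albedo; the DEM refinement then adjusts slopes until predicted and observed brightness agree.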
NASA Astrophysics Data System (ADS)
Schonlau, William J.
2006-05-01
An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation and virtual reality applications all benefit from head-mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera-equipped teleoperated vehicle. The conventional approach, where imagery from a narrow-field camera onboard the vehicle is presented to the user on a small rectangular screen, is contrasted with an immersive viewing system in which a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to the sensed user head orientation, and presented via a wide-field eyewear display approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, owing to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image to attain distortion-free viewing of the region appropriate to the user's current head pose is presented, and consideration is given to providing the user with stereo views generated from depth-map information derived using stereo-from-motion algorithms.
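The head-pose-driven resampling step can be sketched for the cylindrical case. This is a nearest-neighbour toy version under assumed conventions (panorama columns span 0-2π of yaw, rows span ±π/2 of pitch); the paper's model handles distortion correction and interpolation that this sketch omits:

```python
import numpy as np

def sample_cylindrical(pano, yaw, pitch, out_w, out_h, hfov, vfov):
    """Resample a cylindrical panorama for a viewer looking at (yaw, pitch),
    angles in radians. Columns wrap around the cylinder; rows are clamped
    at the top/bottom of the strip."""
    h, w = pano.shape[:2]
    xs = yaw + np.linspace(-hfov / 2, hfov / 2, out_w)
    ys = pitch + np.linspace(-vfov / 2, vfov / 2, out_h)
    cols = np.round((xs % (2 * np.pi)) / (2 * np.pi) * (w - 1)).astype(int)
    rows = np.round(np.clip((ys + np.pi / 2) / np.pi, 0.0, 1.0) * (h - 1)).astype(int)
    return pano[np.ix_(rows, cols)]

pano = np.arange(8 * 16).reshape(8, 16)
view = sample_cylindrical(pano, 0.0, 0.0, 4, 4, np.pi / 2, np.pi / 4)
```

Because only index arithmetic depends on head pose, the same panorama can be re-rendered at full frame rate as the sensed orientation changes.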
Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos
NASA Astrophysics Data System (ADS)
Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.
2018-04-01
Efficiently producing planetary mapping products from orbital remote sensing images remains a challenging task. Photogrammetric processing of planetary stereo images suffers from several difficulties, such as the lack of ground control information and of informative features; among these, image matching is the most difficult step in planetary photogrammetry. This paper presents a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie-point extraction for bundle adjustment and dense image matching for generating digital terrain models (DTMs) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. To improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM-and-orthophoto scheme is adopted in the DTM generation process, which helps reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results for planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.
Modeling Floods in Athabasca Valles, Mars, Using CTX Stereo Topography
NASA Astrophysics Data System (ADS)
Dundas, C. M.; Keszthelyi, L. P.; Denlinger, R. P.; Thomas, O. H.; Galuszka, D.; Hare, T. M.; Kirk, R. L.; Howington-Kraus, E.; Rosiek, M.
2012-12-01
Among the most remarkable landforms on Mars are the outflow channels, which suggest the occurrence of catastrophic water floods in the past. Athabasca Valles has long been thought to be the youngest of these channels [1-2], although it has recently become clear that the young crater age applies to a coating lava flow [3]. Simulations with a 2.5-dimensional flood model have provided insight into the details of flood dynamics but have also demonstrated that the Digital Elevation Model (DEM) from the Mars Orbiter Laser Altimeter (MOLA) Mission Experiment Gridded Data Records includes significant artifacts at this latitude at the scales relevant for flood modeling [4]. In order to obtain improved topography, we processed stereo images from the Context Camera (CTX) of the Mars Reconnaissance Orbiter (MRO) using methods developed for producing topographic models of the Moon with images from the Lunar Reconnaissance Orbiter Camera, a derivative of the CTX camera. Some work on flood modeling with CTX stereo has been published by [5], but we will present several advances, including corrections to the published CTX optical distortion model and improved methods to combine the stereo and MOLA data. The limitations of current methods are the accuracy of control to MOLA and the level of error introduced when the MRO spacecraft is not in a high-stability mode during stereo imaging, leading to jitter impacting the derived topography. Construction of a mosaic of multiple stereo pairs, controlled to MOLA, allows us to consider flow through the cluster of streamlined islands in the upper part of the channel [6], including what is suggested to be the best example of flood-formed subaqueous dunes on Mars [7]. We will present results from running a flood model [4, 8] through the high-resolution (100 m/post) DEM covering the streamlined islands and subaqueous dunes, using results from a lower-resolution model as a guide to the inflow. 
By considering a range of flow levels below estimated peak flow, we can examine the flow behavior at the site of the apparent subaqueous dunes and, in particular, assess whether the flow in this area is uniquely conducive to the formation of such bedforms [e.g., 9]. [1] Berman D. C. and Hartmann W. K. (2002) Icarus 159, 1-17. [2] Burr D. M. et al. (2002) Icarus 159, 53-73. [3] Jaeger W. L. et al. (2010) Icarus 205, 230-243. [4] Keszthelyi L. P. et al. (2007) GRL 34, L21206. [5] McIntyre et al. (2012) JGR 117, E03009. [6] Burr D. (2005) Geomorphology 69, 242-252. [7] Burr D. M. et al. (2004) Icarus 171, 68-83. [8] Denlinger R. P. and O'Connell D. R. H. (2008) J. Hyd. Eng. 134, 1590-1602. [9] Kleinhans M. G. (2005) JGR 110, E12003.
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)
1999-01-01
A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in a world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo matched to obtain three-dimensional locations of the particles as a function of time so that velocities can be measured therefrom. The stereo imaging velocimetry technique of the present invention provides a full-field, quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.
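The stereo-matching step — recovering a particle's 3D location from its centroid in two calibrated views — is classically done by linear (DLT) triangulation. A minimal sketch; the two camera matrices and the normalized pixel coordinates below are an illustrative setup, not the patent's calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: least-squares 3D point whose projections
    through the 3x4 camera matrices P1 and P2 fall on pixels x1 and x2.
    Each pixel contributes two homogeneous linear constraints; the point
    is the null vector of the stacked system."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two normalized cameras with a 1 m horizontal baseline (illustrative)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = triangulate(P1, P2, (0.2, 0.4), (0.0, 0.4))
```

Running this per matched centroid per frame yields the 3D particle tracks from which velocities are differenced.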
Monocular Stereo Measurement Using High-Speed Catadioptric Tracking
Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku
2017-01-01
This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
NASA Technical Reports Server (NTRS)
Barnes, J. C. (Principal Investigator); Smallwood, M. D.; Cogan, J. L.
1975-01-01
The author has identified the following significant results. Of the four black-and-white S190A camera stations, snowcover is best defined in the two visible spectral bands, due in part to their better resolution. The overall extent of the snow can be mapped more precisely, and snow within shadow areas is better defined, in the visible bands. Of the two S190A color products, the aerial color photography is the better. Because of the contrast in color between snow and snow-free terrain and the better resolution, this product is concluded to be the best overall of the six camera stations for detecting and mapping snow. Overlapping frames permit stereo viewing, which aids in distinguishing clouds from the underlying snow. Because of the greater spatial resolution of the S190B earth terrain camera, areal snow extent can be mapped in greater detail than from the S190A photographs. The snow-line elevation measured from the S190A and S190B photographs agrees reasonably with the meager ground truth data available.
A phase-based stereo vision system-on-a-chip.
Díaz, Javier; Ros, Eduardo; Sabatini, Silvio P; Solari, Fabio; Mota, Sonia
2007-02-01
A simple and fast technique for depth estimation based on phase measurement has been adopted for the implementation of a real-time stereo system with sub-pixel resolution on an FPGA device. The technique avoids the attendant problem of phase wrapping. The designed system takes full advantage of the inherent processing parallelism and segmentation capabilities of FPGA devices to achieve a computation speed of 65 megapixels/s, which can be arranged with a customized frame-grabber module to process 211 frames/s at a size of 640×480 pixels. The processing speed achieved is higher than conventional camera frame rates, thus allowing the system to extract multiple estimations and be used as a platform to evaluate integration schemes of a population of neurons without increasing hardware resource demands.
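The phase-based principle behind such systems can be shown in a few lines of software. This is a sketch of the generic technique, not the paper's FPGA pipeline: disparity is read off as the phase difference of complex Gabor responses, d = Δφ/ω, and taking the angle of the conjugate product keeps Δφ in (-π, π], sidestepping explicit unwrapping. The filter frequency and Gaussian width below are illustrative choices:

```python
import numpy as np

def phase_disparity(left_row, right_row, omega=0.25):
    """Per-pixel disparity from the phase difference of complex Gabor
    responses at spatial frequency omega: d = delta_phi / omega."""
    n = np.arange(-15, 16)
    gabor = np.exp(-n**2 / 50.0) * np.exp(1j * omega * n)  # complex Gabor kernel
    rl = np.convolve(left_row, gabor, mode="same")
    rr = np.convolve(right_row, gabor, mode="same")
    return np.angle(rl * np.conj(rr)) / omega

x = np.arange(128)
left = np.cos(0.25 * x)
right = np.cos(0.25 * (x - 4))   # right view shifted by 4 pixels
disparity = phase_disparity(left, right)
```

Because phase varies continuously between pixels, the estimate is naturally sub-pixel, which is what makes the approach attractive for compact hardware.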
Three-camera stereo vision for intelligent transportation systems
NASA Astrophysics Data System (ADS)
Bergendahl, Jason; Masaki, Ichiro; Horn, Berthold K. P.
1997-02-01
A major obstacle to the application of stereo vision in intelligent transportation systems is its high computational cost. In this paper, a PC-based three-camera stereo vision system constructed from off-the-shelf components is described. The system serves as a tool for developing and testing robust algorithms which approach real-time performance. We present an edge-based, subpixel stereo algorithm adapted to permit accurate distance measurements to objects in the field of view using a compact camera assembly. Once computed, the 3D scene information may be applied directly to a number of in-vehicle applications, such as adaptive cruise control, obstacle detection, and lane tracking. Moreover, since the largest computational cost is incurred in generating the 3D scene information, multiple applications that leverage this information can be implemented in a single system with minimal additional cost. On-road applications, such as vehicle counting and incident detection, are also possible. Preliminary in-vehicle road trial results are presented.
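The value of subpixel disparity for distance measurement follows directly from the pinhole stereo geometry. A short sketch with illustrative numbers (focal length in pixels and baseline are assumptions, not values from the paper):

```python
def stereo_range(disparity_px, focal_px, baseline_m):
    """Pinhole stereo range from disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def range_error(Z_m, focal_px, baseline_m, disp_err_px):
    """First-order range uncertainty for a disparity error of disp_err_px:
    dZ = Z^2 * dd / (f * B). Error grows with the square of range, so
    subpixel disparity estimation matters most for distant objects."""
    return Z_m**2 * disp_err_px / (focal_px * baseline_m)

Z = stereo_range(10.0, 800.0, 0.25)          # 10 px disparity -> 20 m
dZ = range_error(Z, 800.0, 0.25, 0.5)        # half-pixel disparity error
```

With a compact (short-baseline) camera assembly, f·B is small, which is exactly why the paper needs subpixel disparity to keep dZ acceptable at traffic distances.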
Calibration of stereo rigs based on the backward projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin
2016-08-01
High-accuracy 3D measurement based on binocular vision system is heavily dependent on the accurate calibration of two rigidly-fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee the minimal 2D pixel errors, but not the minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera by planar constraints provided by the planar pattern target. Then combined with pre-defined spatial points, intrinsic and extrinsic parameters of the stereo-rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study for the method in the presence of image noise and lens distortions is implemented. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.
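The core of the backward projection process is determining a spatial point from one camera alone by intersecting the pixel's viewing ray with the known plane of the calibration target; the calibration then minimizes 3D (not 2D pixel) errors. A minimal sketch of that ray-plane step, assuming standard pinhole conventions (the identity-camera example is illustrative, not from the paper):

```python
import numpy as np

def backproject_to_plane(K, R, t, pixel, n, d):
    """Cast the ray through `pixel` of a pinhole camera (intrinsics K,
    world-to-camera rotation R, translation t) and intersect it with the
    target plane n . X = d. Camera centre C = -R^T t; ray direction
    R^T K^-1 x in world coordinates."""
    C = -R.T @ t
    ray = R.T @ np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    s = (d - n @ C) / (n @ ray)
    return C + s * ray

# Identity camera looking down +Z at a target plane z = 5 (illustrative)
K, R, t = np.eye(3), np.eye(3), np.zeros(3)
X = backproject_to_plane(K, R, t, (0.2, 0.4), np.array([0.0, 0.0, 1.0]), 5.0)
```

The BPP cost is then the sum of squared distances between such back-projected points and the pre-defined target points, accumulated over both cameras of the rig.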
A Regional View of the Libya Montes
NASA Technical Reports Server (NTRS)
2000-01-01
[figure removed for brevity, see original site]
The Libya Montes are a ring of mountains uplifted by the giant impact that created the Isidis basin to the north. During 1999, this region became one of the top two being considered for the now-canceled Mars Surveyor 2001 Lander. The Isidis basin is very, very ancient. Thus, the mountains that form its rim would contain some of the oldest rocks available at the Martian surface, and a landing in this region might potentially provide information about conditions on early Mars. In May 1999, the wide angle cameras of the Mars Global Surveyor Mars Orbiter Camera system were used in what was called the 'Geodesy Campaign' to obtain nearly global maps of the planet in color and in stereo at resolutions of 240 m/pixel (787 ft/pixel) for the red camera and 480 m/pixel (1575 ft/pixel) for the blue. Shown here are color and stereo views constructed from mosaics of the Geodesy Campaign images for the Libya Montes region of Mars. After they formed by giant impact, the Libya Mountains and valleys were subsequently modified and eroded by other processes, including wind, impact cratering, and flow of liquid water to make the many small valleys that can be seen running northward in the scene. The pictures shown here cover nearly 122,000 square kilometers (47,000 square miles) between latitudes 0.1°N and 4.0°N, longitudes 271.5°W and 279.9°W. The mosaics are about 518 km (322 mi) wide by 235 km (146 mi) high. Red-blue '3-D' glasses are needed to view the stereo image.
Intelligent person identification system using stereo camera-based height and stride estimation
NASA Astrophysics Data System (ADS)
Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo
2005-05-01
In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair using a threshold in the YCbCr color model. By correlating this segmented face area with the right input image, the location coordinates of the target face are acquired; these values are then used to control the pan/tilt system through a modified PID-based recursive controller. Also, using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system is calculated through triangulation. From this distance and the pan and tilt angles, the target's real position in world space is obtained, and from it the height and stride values are finally extracted. Experiments with video images of 16 moving persons show that a person could be identified from these extracted height and stride parameters.
Panoramic 3D Vision on the ExoMars Rover
NASA Astrophysics Data System (ADS)
Paar, G.; Griffiths, A. D.; Barnes, D. P.; Coates, A. J.; Jaumann, R.; Oberst, J.; Gao, Y.; Ellery, A.; Li, R.
The Pasteur payload on the ESA ExoMars Rover 2011/2013 is designed to search for evidence of extant or extinct life either on or up to ~2 m below the surface of Mars. The rover will be equipped with a panoramic imaging system, to be developed by a UK, German, Austrian, Swiss, Italian and French team, for visual characterization of the rover's surroundings and (in conjunction with an infrared imaging spectrometer) remote detection of potential sample sites. The Panoramic Camera system consists of a wide angle multispectral stereo pair with 65° field-of-view (WAC; 1.1 mrad/pixel) and a high resolution monoscopic camera (HRC; current design having 59.7 µrad/pixel with 3.5° field-of-view). Its scientific goals and operational requirements can be summarized as follows: • Determination of objects to be investigated in situ by other instruments, for operations planning. • Backup and support for the rover visual navigation system (path planning, determination of subsequent rover positions and orientation/tilt within the 3D environment), and localization of the landing site (by stellar navigation or by combination of orbiter and ground panoramic images). • Geological characterization (using narrow-band geology filters) and cartography of the local environment (local Digital Terrain Model, or DTM). • Study of atmospheric properties and variable phenomena near the Martian surface (e.g. aerosol opacity, water vapour column density, clouds, dust devils, meteors, surface frosts). • Geodetic studies (observations of the Sun, bright stars, and Phobos/Deimos). The performance of 3D data processing is a key element of mission planning and scientific data analysis. The 3D Vision Team within the Panoramic Camera development consortium reports on the current status of development, consisting of the following items: • Hardware Layout & Engineering: The geometric setup of the system (location on the mast and viewing angles, mutual mounting between WAC and HRC) needs to be optimized with respect to
fields of view, ranging capability (distance measurement capability), data rate, necessity of calibration targets, hardware and data interfaces to other subsystems (e.g. navigation), as well as the accuracy impacts of sensor design and compression ratio. • Geometric Calibration: The geometric properties of the individual cameras including various spectral filters, their mutual relations, and the dynamic geometrical relation between the rover frame and the cameras - with the mast in between - are precisely described by a calibration process. During surface operations these relations will be continuously checked and updated by photogrammetric means; environmental influences such as temperature, pressure and the Mars gravity will be taken into account. • Surface Mapping: Stereo imaging using the WAC stereo pair is used for the 3D reconstruction of the rover vicinity to identify, locate and characterize potentially interesting spots (3-10 for an experimental cycle to be performed within approx. 10-30 sols). The HRC is used for high resolution imagery of these regions of interest, to be overlaid on the 3D reconstruction and potentially refined by shape-from-shading techniques. A quick processing result is crucial for time-critical operations planning, therefore emphasis is laid on automatic behaviour and intrinsic error detection mechanisms. The mapping results will be continuously fused, updated and synchronized with the map used by the navigation system. The surface representation needs to take into account the different resolutions of HRC and WAC, as well as uncommon or even unexpected image acquisition modes such as long range, wide baseline stereo from different rover positions, or escape strategies in the case of loss of one of the stereo camera heads. • Panorama Mosaicking: The production of a high resolution stereoscopic panorama is nowadays state of the art in computer vision.
However, certain challenges, such as the need for access to accurate spherical coordinates, maintenance of radiometric and spectral response in various spectral bands, fusion between HRC and WAC, super-resolution, and again the requirement of quick yet robust processing, will add some complexity to the ground processing system. • Visualization for Operations Planning: Efficient operations planning is directly related to an ergonomic and well-performing visualization. It is intended to adapt existing tools into an integrated visualization solution for the purposes of scientific site characterization, view planning, reachability mapping/instrument placement of pointing sensors (including the panoramic imaging system itself), and selection of regions of interest. The main interfaces between the individual components, as well as the first version of a user requirement document, are currently under definition. Besides support for sensor layout and calibration, the 3D vision system will consist of 2-3 main modules to be used during ground processing and utilization of the ExoMars Rover panoramic imaging system.
Coregistration of high-resolution Mars orbital images
NASA Astrophysics Data System (ADS)
Sidiropoulos, Panagiotis; Muller, Jan-Peter
2015-04-01
The systematic orbital imaging of the Martian surface started four decades ago with NASA's Viking Orbiter 1 & 2 missions, which were launched in August 1975 and acquired orbital images of the planet between 1976 and 1980. The result of this reconnaissance was the first medium-resolution (i.e. ≤ 300 m/pixel) global map of Mars, as well as a variety of high-resolution images (reaching up to 8 m/pixel) of special regions of interest. Over the last two decades NASA has sent three more spacecraft with onboard instruments for high-resolution orbital imaging: Mars Global Surveyor (MGS) carrying the Mars Orbital Camera - Narrow Angle (MOC-NA), Mars Odyssey carrying the Thermal Emission Imaging System - Visual (THEMIS-VIS), and the Mars Reconnaissance Orbiter (MRO) carrying two distinct high-resolution cameras, the Context Camera (CTX) and the High-Resolution Imaging Science Experiment (HiRISE). Moreover, ESA has operated the multispectral High Resolution Stereo Camera (HRSC), with resolution up to 12.5 m, onboard Mars Express since 2004. Overall, this set of cameras has acquired more than 400,000 high-resolution images, i.e. images with resolution better than 100 m and as fine as 25 cm/pixel. Notwithstanding the high spatial resolution of the available NASA orbital products, their areo-referencing accuracy is often very poor. In fact, due to pointing inconsistencies, usually from errors in roll attitude, the acquired products may actually image areas tens of kilometers away from where they are supposed to be looking. On the other hand, since 2004 the ESA Mars Express has been acquiring stereo images through the HRSC, with resolution that is usually 12.5-25 metres per pixel. The achieved coverage is more than 64% for images with resolution finer than 20 m/pixel, while for ~40% of Mars, Digital Terrain Models (DTMs) have been produced which are co-registered with MOLA [Gwinner et al., 2010].
The HRSC images and DTMs represent the best available 3D reference frame for Mars, showing co-registration with MOLA of <25 m (loc. cit.). In our work, the reference generated by HRSC terrain-corrected orthorectified images is used as a common reference frame to co-register all available high-resolution orbital NASA products into a common 3D coordinate system, thus allowing the examination of changes that happen on the surface of Mars over time (such as seasonal flows [McEwen et al., 2011] or new impact craters [Byrne, et al., 2009]). Since doing this manually would be an impossibly tedious task, we have developed an automatic co-registration pipeline that produces orthorectified versions of the NASA images in realistic time (i.e. from ~15 minutes to 10 hours per image, depending on size). In the first step of this pipeline, tie-points are extracted from the target NASA image and the reference HRSC image or image mosaic. Subsequently, the HRSC areo-reference information is used to transform the HRSC tie-point pixel coordinates into 3D "world" coordinates. This way, a correspondence between the pixel coordinates of the target NASA image and the 3D "world" coordinates is established for each tie-point. This set of correspondences is used to estimate a non-rigid, 3D-to-2D transformation model, which transforms the target image into the HRSC reference coordinate system. Finally, correlation of the transformed target image and the HRSC image is employed to fine-tune the orthorectification results, thus generating results with sub-pixel accuracy. This method, which has proven to be accurate, robust to resolution differences, reliable when dealing with partially degraded data, and fast, will be presented, along with some example co-registration results achieved by using it.
Acknowledgements: The research leading to these results has received partial funding from the STFC "MSSL Consolidated Grant" ST/K000977/1 and partial support from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement n° 607379. References: [1] K. F. Gwinner, et al. (2010) Topography of Mars from global mapping by HRSC high-resolution digital terrain models and orthoimages: characteristics and performance. Earth and Planetary Science Letters 294, 506-519, doi:10.1016/j.epsl.2009.11.007. [2] A. McEwen, et al. (2011) Seasonal flows on warm martian slopes. Science , 333 (6043): 740-743. [3] S. Byrne, et al. (2009) Distribution of mid-latitude ground ice on mars from new impact craters. Science, 325(5948):1674-1676.
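The tie-point-to-transformation step of such a co-registration pipeline can be illustrated with the simplest possible model. The pipeline above fits a non-rigid 3D-to-2D model; the 2D affine fit below is a deliberately simplified stand-in (the point values are invented for illustration) that shows how a transform is estimated from tie-point correspondences by linear least squares:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping tie-point coordinates in
    the target image (src) to reference-frame coordinates (dst).
    Returns the 2x3 matrix [A | b] such that dst ~ A @ src + b."""
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src
    M[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(M, np.asarray(dst, float).reshape(-1), rcond=None)
    return params.reshape(2, 3)

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
true = np.array([[1.5, 0.0, 10.0], [0.0, 2.0, -5.0]])
dst = src @ true[:, :2].T + true[:, 2]       # synthetic tie-points
model = fit_affine(src, dst)
```

In the real pipeline the estimated model warps the target image into the HRSC frame, after which image correlation refines the result to sub-pixel accuracy.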
Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun
2011-01-01
In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector, the circle detection algorithm is used to detect the desired target, and the SAD algorithm is used to track the moving target and to search for the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial and sinusoidal trajectories of the end-effector of the three-axial pneumatic parallel mechanism robot arm. PMID:22319408
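The SAD search along a rectified epipolar line can be sketched in a few lines. This is a generic illustration of the technique, not the paper's implementation; the window size, search range, and synthetic strip below are assumptions:

```python
import numpy as np

def sad_match(template, strip, x0, search=16):
    """Search along a rectified epipolar row strip for the column whose
    window minimizes the sum of absolute differences (SAD) with the
    template, starting from the predicted location x0."""
    h, w = template.shape
    best_cost, best_x = np.inf, x0
    lo = max(0, x0 - search)
    hi = min(strip.shape[1] - w, x0 + search)
    for x in range(lo, hi + 1):
        cost = np.abs(strip[:, x:x + w] - template).sum()
        if cost < best_cost:
            best_cost, best_x = cost, x
    return best_x

strip = np.zeros((5, 40))
strip[:, 20:25] = np.arange(25.0).reshape(5, 5)   # distinctive patch at column 20
template = strip[:, 20:25].copy()
match_x = sad_match(template, strip, x0=15)
```

Because rectification constrains correspondences to the same row, the search is one-dimensional, which is what makes SAD matching fast enough for real-time tracking.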
Application of Stereo Vision to the Reconnection Scaling Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klarenbeek, Johnny; Sears, Jason A.; Gao, Kevin W.
The measurement and simulation of the three-dimensional structure of magnetic reconnection in astrophysical and lab plasmas is a challenging problem. At Los Alamos National Laboratory we use the Reconnection Scaling Experiment (RSX) to model 3D magnetohydrodynamic (MHD) relaxation of plasma-filled tubes. These magnetic flux tubes are called flux ropes. In RSX, the 3D structure of the flux ropes is explored with insertable probes. Stereo triangulation can be used to compute the 3D position of a probe from point correspondences in images from two calibrated cameras. While common applications of stereo triangulation include 3D scene reconstruction and robotics navigation, we will investigate the novel application of stereo triangulation in plasma physics to aid reconstruction of 3D data for RSX plasmas. Several challenges will be explored and addressed, such as minimizing 3D reconstruction errors in stereo camera systems and dealing with point correspondence problems.
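One standard way to triangulate the probe position from two calibrated views is the midpoint method: with noisy correspondences the two viewing rays do not intersect exactly, so the estimate is the point midway between their closest approach. A minimal sketch (the ray geometry below is an invented example):

```python
import numpy as np

def midpoint_triangulate(c1, d1, c2, d2):
    """Midpoint triangulation: the 3D point halfway between the closest
    points of two camera rays, each given by centre c and direction d.
    Solves least-squares for the ray parameters s, t minimizing
    |c1 + s*d1 - (c2 + t*d2)|."""
    A = np.column_stack([d1, -d2])
    (s, t), *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

c1, c2 = np.zeros(3), np.array([1.0, 0.0, 0.0])
d1 = np.array([1.0, 1.0, 5.0])   # ray from camera 1 toward the probe
d2 = np.array([0.0, 1.0, 5.0])   # ray from camera 2 toward the same point
probe = midpoint_triangulate(c1, d1, c2, d2)
```

The distance between the two closest points also serves as a cheap diagnostic for bad correspondences, one of the challenges noted above.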
Sensor fusion and augmented reality with the SAFIRE system
NASA Astrophysics Data System (ADS)
Saponaro, Philip; Treible, Wayne; Phelan, Brian; Sherbondy, Kelly; Kambhamettu, Chandra
2018-04-01
The Spectrally Agile Frequency-Incrementing Reconfigurable (SAFIRE) mobile radar system was developed and exercised at an arid U.S. test site. The system can detect hidden targets using radar, a global positioning system (GPS), dual stereo color cameras, and dual stereo thermal cameras. An augmented reality (AR) software interface allows the user to see a single fused video stream containing the SAR, color, and thermal imagery. The stereo sensors allow the AR system to display both fused 2D imagery and 3D metric reconstructions, where the user can "fly" around the 3D model and switch between the modalities.
Rapid-Response or Repeat-Mode Topography from Aerial Structure from Motion
NASA Astrophysics Data System (ADS)
Nissen, E.; Johnson, K. L.; Fitzgerald, F. S.; Morgan, M.; White, J.
2014-12-01
This decade has seen a surge of interest in Structure-from-Motion (SfM) as a means of generating high-resolution topography and coregistered texture maps from stereo digital photographs. Using an unstructured set of overlapping photographs captured from multiple viewpoints and minimal GPS ground control, SfM solves simultaneously for scene topography and camera positions, orientations and lens parameters. The use of cheap unmanned aerial vehicles or tethered helium balloons as camera platforms expedites data collection and overcomes many of the cost, time and logistical limitations of LiDAR surveying, making it a potentially valuable tool for rapid response mapping and repeat monitoring applications. We begin this presentation by assessing what data resolutions and precisions are achievable using a simple aerial camera platform and commercial SfM software (we use the popular Agisoft Photoscan package). SfM point clouds generated at two small (~0.1 km2), sparsely-vegetated field sites in California compare favorably with overlapping airborne and terrestrial LiDAR surveys, with closest point distances of a few centimeters between the independent datasets. Next, we go on to explore the method in more challenging conditions, in response to a major landslide in Mesa County, Colorado, on 25th May 2014. Photographs collected from a small UAV were used to generate a high-resolution model of the 4.5 × 1 km landslide several days before an airborne LiDAR survey could be organized and flown. An initial estimate of the mass balance of the landslide could quickly be made by differencing this model against pre-event topography generated using stereo photographs collected in 2009 as part of the National Agricultural Imagery Program (NAIP). This case study therefore demonstrates the rich potential offered by this technique, as well as some of the challenges, particularly with respect to the treatment of vegetation.
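The mass-balance estimate by DEM differencing reduces to summing elevation changes over coregistered grids. A minimal sketch with invented numbers (the real workflow must also handle vegetation masking and coregistration error, which the NaN handling here only gestures at):

```python
import numpy as np

def volume_change(dem_post, dem_pre, cell_size_m):
    """Net volume change (m^3) between two coregistered DEMs on the same
    grid: sum of the per-cell elevation differences times the cell area.
    NaN cells (e.g. masked vegetation or data gaps) are ignored."""
    return np.nansum(dem_post - dem_pre) * cell_size_m**2

pre = np.zeros((10, 10))
post = pre + 2.0                     # uniform 2 m of deposition (illustrative)
dV = volume_change(post, pre, cell_size_m=5.0)
```

For a landslide, summing the positive and negative parts of the difference separately gives deposition and erosion volumes, whose near-equality is a check on the mass balance.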
Evaluating planetary digital terrain models-The HRSC DTM test
Heipke, C.; Oberst, J.; Albertz, J.; Attwenger, M.; Dorninger, P.; Dorrer, E.; Ewe, M.; Gehrke, S.; Gwinner, K.; Hirschmuller, H.; Kim, J.R.; Kirk, R.L.; Mayer, H.; Muller, Jan-Peter; Rengarajan, R.; Rentsch, M.; Schmidt, R.; Scholten, F.; Shan, J.; Spiegel, M.; Wahlisch, M.; Neukum, G.
2007-01-01
The High Resolution Stereo Camera (HRSC) has been orbiting the planet Mars since January 2004 onboard the European Space Agency (ESA) Mars Express mission and delivers imagery which is being used for topographic mapping of the planet. The HRSC team has conducted a systematic inter-comparison of different alternatives for the production of high-resolution digital terrain models (DTMs) from the multi-look HRSC pushbroom imagery. Based on carefully chosen test sites, the test participants have produced DTMs which have been subsequently analysed in a quantitative and a qualitative manner. This paper reports on the results obtained in this test.
The PanCam Instrument for the ExoMars Rover
NASA Astrophysics Data System (ADS)
Coates, A. J.; Jaumann, R.; Griffiths, A. D.; Leff, C. E.; Schmitz, N.; Josset, J.-L.; Paar, G.; Gunn, M.; Hauber, E.; Cousins, C. R.; Cross, R. E.; Grindrod, P.; Bridges, J. C.; Balme, M.; Gupta, S.; Crawford, I. A.; Irwin, P.; Stabbins, R.; Tirsch, D.; Vago, J. L.; Theodorou, T.; Caballo-Perucha, M.; Osinski, G. R.; PanCam Team
2017-07-01
The scientific objectives of the ExoMars rover are designed to answer several key questions in the search for life on Mars. In particular, the unique subsurface drill will address some of these, such as the possible existence and stability of subsurface organics. PanCam will establish the surface geological and morphological context for the mission, working in collaboration with other context instruments. Here, we describe the PanCam scientific objectives in geology, atmospheric science, and 3-D vision. We discuss the design of PanCam, which includes a stereo pair of Wide Angle Cameras (WACs), each of which has an 11-position filter wheel and a High Resolution Camera (HRC) for high-resolution investigations of rock texture at a distance. The cameras and electronics are housed in an optical bench that provides the mechanical interface to the rover mast and a planetary protection barrier. The electronic interface is via the PanCam Interface Unit (PIU), and power conditioning is via a DC-DC converter. PanCam also includes a calibration target mounted on the rover deck for radiometric calibration, fiducial markers for geometric calibration, and a rover inspection mirror.
Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry
NASA Astrophysics Data System (ADS)
Kersten, J.; Rodehorst, V.
2016-06-01
Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimates, usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates; the incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale-ambiguity problem of monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time-capable techniques for outlier detection and drift reduction in frame-to-frame VO are available, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters, and evaluate the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
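The robust relative-orientation step mentioned above, RANSAC around a minimal rigid-motion estimate between two frames of stereo-triangulated 3D points, can be sketched as below; the 3-point Kabsch solver, the thresholds, and all names are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid motion satisfying Q ≈ R @ P + t (Kabsch algorithm)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cq - R @ cp

def ransac_pose(P, Q, iters=300, thresh=0.05, seed=0):
    """Frame-to-frame pose from matched 3D points, robust to outlier matches."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)   # minimal sample
        R, t = rigid_fit(P[idx], Q[idx])
        inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return rigid_fit(P[best], Q[best])   # refit on the consensus set

# Synthetic check: 40 landmarks under a 10 degree yaw plus a small translation,
# with the first 8 matches corrupted to act as outliers.
rng = np.random.default_rng(1)
P = rng.uniform(-5.0, 5.0, size=(40, 3))
a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, 0.0, 0.1])
Q = P @ R_true.T + t_true
Q[:8] += rng.uniform(1.0, 2.0, size=(8, 3))
R_est, t_est = ransac_pose(P, Q)
```

Refitting on the full consensus set after the RANSAC loop is the usual way to squeeze out the extra accuracy of all inlier matches.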
NASA Astrophysics Data System (ADS)
Mercer, Jason J.; Westbrook, Cherie J.
2016-11-01
Microform is important in understanding wetland functions and processes, but collecting imagery of and mapping the physical structure of peatlands is often expensive and requires specialized equipment. We assessed the utility of coupling computer-vision-based structure from motion with multiview stereo photogrammetry (SfM-MVS) and ground-based photos to map peatland topography. The SfM-MVS technique was tested on an alpine peatland in Banff National Park, Canada, and guidance was provided on minimizing errors. We found that coupling SfM-MVS with ground-based photos taken with a point-and-shoot camera is a viable and competitive technique for generating ultrahigh-resolution elevations (i.e., <0.01 m, mean absolute error of 0.083 m). In evaluating 100+ viable SfM-MVS data collection and processing scenarios, vegetation was found to considerably influence accuracy: accounting for vegetation class reduced absolute error by as much as 50%. The logistical flexibility of ground-based SfM-MVS, paired with its high resolution, low error, and low cost, makes it a research area worth developing as well as a useful addition to the wetland scientist's toolkit.
Visual homing with a pan-tilt based stereo camera
NASA Astrophysics Data System (ADS)
Nirmal, Paramesh; Lyons, Damian M.
2013-01-01
Visual homing is a navigation method based on comparing a stored image of the goal location with the current view to determine how to navigate to the goal. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both aim at determining the distance and direction to the goal location. Navigational algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale-change information from SIFT (Homing in Scale Space, HiSS). HiSS uses SIFT feature scale-change information to determine the distance between the robot and the goal location. Since the scale component is discrete, with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We combine the wide-field images with stereo data obtained from the stereo camera to extend the keypoint vector described by Churchill and Vardy with a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera, and evaluate the performance of both methods using a set of performance measures described in this paper.
Small Orbital Stereo Tracking Camera Technology Development
NASA Technical Reports Server (NTRS)
Bryan, Tom; Macleod, Todd; Gagliano, Larry
2015-01-01
On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew, and poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the operational restrictions of flying as a secondary payload on a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, and could enhance safety on and around the ISS; some of its technologies are extensible to monitoring extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such emerging technology. It consists of a pair of intensified megapixel telephoto cameras flown to evaluate Orbital Debris (OD) monitoring in proximity to the International Space Station, and will demonstrate on-orbit (in situ) optical tracking of various sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
Neubauer, Aljoscha S; Rothschuh, Antje; Ulbig, Michael W; Blum, Marcus
2008-03-01
Grading diabetic retinopathy in clinical trials is frequently based on 7-field stereo photography of the fundus in diagnostic mydriasis. In terms of image quality, the FF450(plus) camera (Carl Zeiss Meditec AG, Jena, Germany) defines a high-quality reference. The aim of the study was to investigate whether the fully digital fundus camera Visucam(PRO NM) could serve as an alternative in clinical trials requiring 7-field stereo photography. A total of 128 eyes of diabetes patients were enrolled in the randomized, controlled, prospective trial. Seven-field stereo photography was performed with the Visucam(PRO NM) and the FF450(plus) camera, in random order, both in diagnostic mydriasis. The resulting 256 image sets from the two camera systems were graded for retinopathy levels and image quality (on a scale of 1-5); both gradings were anonymized and blinded to the image source. On FF450(plus) stereoscopic imaging, 20% of the patients had no or mild diabetic retinopathy (ETDRS level ≤ 20) and 29% had no macular oedema. No patient had to be excluded as a result of image quality. Retinopathy level did not influence the quality of grading or of images. Excellent overall correspondence was obtained between the two fundus cameras regarding retinopathy levels (kappa 0.87) and macular oedema (kappa 0.80). In diagnostic mydriasis the image quality of the Visucam was graded as slightly better than that of the FF450(plus) (2.20 versus 2.41; p < 0.001), especially for pupils < 7 mm in mydriasis. The non-mydriatic Visucam(PRO NM) offers good image quality and is suitable as a more cost-efficient and easy-to-operate camera for applications and clinical trials requiring 7-field stereo photography.
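The agreement statistic reported above (kappa 0.87 and 0.80) is Cohen's kappa, observed agreement corrected for chance agreement; a minimal sketch of its computation on hypothetical grading labels:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    cats = set(rater_a) | set(rater_b)
    p_obs = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical retinopathy gradings from two camera systems (not study data).
a = ["none", "none", "mild", "mild"]
b = ["none", "none", "mild", "none"]
print(cohens_kappa(a, b))  # 0.5
```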
Small Orbital Stereo Tracking Camera Technology Development
NASA Technical Reports Server (NTRS)
Bryan, Tom; MacLeod, Todd; Gagliano, Larry
2016-01-01
On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew, and poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the operational restrictions of flying as a secondary payload on a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, and could enhance safety on and around the ISS; some of its technologies are extensible to monitoring extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such emerging technology. It consists of a pair of intensified megapixel telephoto cameras flown to evaluate Orbital Debris (OD) monitoring in proximity to the International Space Station, and will demonstrate on-orbit (in situ) optical tracking of various sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system
NASA Astrophysics Data System (ADS)
Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng
2009-02-01
This paper describes a novel embedded system capable of estimating the 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. The novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow H10x8.5) and (2) implementation on an embedded platform. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs the complex computational tasks, while the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors that rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images viewing a common chessboard calibration pattern at a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational-cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs, and surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings, and (3) stereo reconstruction results for several free-form objects.
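The abstract names two numerical steps, interpolating calibration parameters between pre-defined zoom stops and classical triangulation of matched points; they can be sketched as follows, where the zoom stops, focal lengths, and rectified-geometry simplification are illustrative assumptions:

```python
import numpy as np

def focal_at_zoom(zoom_stops, focals_px, zoom):
    """Interpolate a pre-calibrated focal length at an arbitrary zoom setting."""
    return float(np.interp(zoom, zoom_stops, focals_px))

def triangulate_depth(f_px, baseline_m, x_left_px, x_right_px):
    """Classical triangulation for a rectified pair: Z = f * B / disparity."""
    return f_px * baseline_m / (x_left_px - x_right_px)

f = focal_at_zoom([1.0, 2.0, 3.0], [800.0, 1200.0, 1600.0], 1.5)
print(f)                                      # 1000.0
print(triangulate_depth(f, 0.1, 55.0, 45.0))  # 10.0 (metres)
```

In practice each intrinsic parameter (focal length, principal point, distortion) would be interpolated separately between the calibrated zoom stops.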
Robotic Vehicle Communications Interoperability
1988-08-01
[Garbled table fragments: vehicle control functions (starter/cold start, fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, hazard warning) and electro-optic sensor controls (sensor switch, video, radar, IR thermal imaging system, image intensifier, laser ranger, video camera selector: forward stereo / rear).]
NASA Astrophysics Data System (ADS)
Blaser, S.; Nebiker, S.; Cavegn, S.
2017-05-01
Image-based mobile mapping systems enable the efficient acquisition of georeferenced image sequences, which can later be exploited in cloud-based 3D geoinformation services. In order to provide 360° coverage with accurate 3D measuring capabilities, we present a novel 360° stereo panoramic camera configuration. By using two 360° panorama cameras, tilted forward and backward, in combination with conventional forward- and backward-looking stereo camera systems, we achieve full 360° multi-stereo coverage. We furthermore developed a fully operational new mobile mapping system based on the proposed approach, which fulfils our high accuracy requirements. We successfully implemented a rigorous sensor and system calibration procedure, which calibrates all stereo systems with superior accuracy compared to previous work. Our study delivered absolute 3D point accuracies in the range of 4 to 6 cm and relative accuracies of 3D distances in the range of 1 to 3 cm. These results were achieved in a challenging urban area. Furthermore, we automatically reconstructed a 3D city model of the study area from all captured and georeferenced mobile mapping imagery. The result is a highly detailed and almost complete 3D city model of the street environment.
CHAMP (Camera, Handlens, and Microscope Probe)
NASA Technical Reports Server (NTRS)
Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.
2005-01-01
CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution, from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.
A high resolution and high speed 3D imaging system and its application on ATR
NASA Astrophysics Data System (ADS)
Lu, Thomas T.; Chao, Tien-Hsin
2006-04-01
The paper presents an advanced 3D imaging system based on a combination of stereo vision and light projection methods. A single digital camera is used to take only one shot of the object and reconstruct its 3D model. Stereo vision is achieved by employing a prism-and-mirror setup to split the views and combine them side by side in the camera. The advantages of this setup are its simple system architecture, easy synchronization, fast 3D imaging speed, and high accuracy. The 3D imaging algorithms and potential applications are discussed. For ATR applications, it is critically important to extract maximum information about potential targets and to separate the targets from background and clutter noise. The added dimension of a 3D model provides additional features: the surface profile and range information of the target. It is capable of removing false shadows from camouflage and revealing the 3D profile of the object. It also provides arbitrary viewing angles and distances for training the filter bank for invariant ATR. The system architecture can be scaled to accommodate large objects and to perform area 3D modeling onboard a UAV.
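Since the prism-and-mirror adaptor packs both views side by side in a single frame, the first processing step is simply splitting the composite image; a minimal sketch (the array layout is an assumption):

```python
import numpy as np

def split_composite(frame):
    """Split a side-by-side composite frame into left and right views."""
    h, w = frame.shape[:2]
    return frame[:, : w // 2], frame[:, w // 2 :]

frame = np.arange(16, dtype=np.uint8).reshape(2, 8)  # toy 2 x 8 "image"
left, right = split_composite(frame)
print(left.shape, right.shape)  # (2, 4) (2, 4)
```

The two halves can then be fed to any conventional stereo-matching pipeline, with synchronization guaranteed by construction.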
NASA Astrophysics Data System (ADS)
Dong, Shuai; Yu, Shanshan; Huang, Zheng; Song, Shoutan; Shao, Xinxing; Kang, Xin; He, Xiaoyuan
2017-12-01
Multiple digital image correlation (DIC) systems can enlarge the measurement field without losing effective resolution in the area of interest (AOI). However, the results calculated by each sub-stereo-DIC system are, in most cases, located in its own local coordinate system. To stitch the data obtained by the individual systems, a data-merging algorithm for global measurement with multiple stereo-DIC systems is presented in this paper. A set of encoded targets is employed to assist the extrinsic calibration; their three-dimensional (3-D) coordinates are reconstructed via digital close-range photogrammetry. Combining the 3-D targets with the precalibrated intrinsic parameters of all cameras significantly simplifies the extrinsic calibration. After calculation in the sub-stereo-DIC systems, all data can be merged into a universal coordinate system based on the extrinsic calibration. Four stereo-DIC systems were applied to a four-point bending experiment on a steel-reinforced concrete beam. The results demonstrate high accuracy for displacement data merging in the overlapping fields of view (FOVs) and show the feasibility of measurement over distributed FOVs.
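The final merging step, mapping each sub-system's local results into the universal coordinate frame through its extrinsic calibration, amounts to applying one rigid transform per system and concatenating; a sketch under assumed (R, t) values:

```python
import numpy as np

def merge_into_global(local_clouds, extrinsics):
    """Map each stereo-DIC system's local points p into the universal frame
    via p_global = R @ p + t, then concatenate all systems' data."""
    return np.vstack([pts @ R.T + t
                      for pts, (R, t) in zip(local_clouds, extrinsics)])

# Two systems observing the same physical point in their own local frames.
R1, t1 = np.eye(3), np.zeros(3)                  # system 1 defines the frame
R2 = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])                # system 2: 90 degree yaw
t2 = np.array([1.0, 0.0, 0.0])
p_global = np.array([1.0, 2.0, 0.5])
local1 = p_global[None, :]                       # already in the global frame
local2 = (R2.T @ (p_global - t2))[None, :]       # same point, seen by system 2
merged = merge_into_global([local1, local2], [(R1, t1), (R2, t2)])
print(np.allclose(merged[0], merged[1]))  # True: overlapping data coincide
```

In the paper the (R, t) pairs come from the encoded-target calibration; here they are simply assumed.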
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garfield, B.R.; Rendell, J.T.
1991-01-01
The present conference discusses the application of schlieren photography in industry, laser fiber-optic high speed photography, holographic visualization of hypervelocity explosions, sub-100-picosec X-ray grating cameras, flash soft X-radiography, a novel approach to synchroballistic photography, a programmable image converter framing camera, high speed readout CCDs, an ultrafast optomechanical camera, a femtosec streak tube, a modular streak camera for laser ranging, and human-movement analysis with real-time imaging. Also discussed are high-speed photography of high-resolution moire patterns, a 2D electron-bombarded CCD readout for picosec electrooptical data, laser-generated plasma X-ray diagnostics, 3D shape restoration with virtual grating phase detection, Cu vapor lasers for high speed photography, a two-frequency picosec laser with electrooptical feedback, the conversion of schlieren systems to high speed interferometers, laser-induced cavitation bubbles, stereo holographic cinematography, a gatable photonic detector, and laser generation of Stoneley waves at liquid-solid boundaries.
NASA Astrophysics Data System (ADS)
Kishimoto, A.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Takeuchi, K.; Okochi, H.; Ogata, H.; Kuroshima, H.; Ohsuka, S.; Nakamura, S.; Hirayanagi, M.; Adachi, S.; Uchiyama, T.; Suzuki, H.
2014-11-01
After the nuclear disaster in Fukushima, radiation decontamination has become particularly urgent. To help identify radiation hotspots and ensure effective decontamination operation, we have developed a novel Compton camera based on Ce-doped Gd3Al2Ga3O12 scintillators and multi-pixel photon counter (MPPC) arrays. Even though its sensitivity is several times better than that of other cameras being tested in Fukushima, we introduce a depth-of-interaction (DOI) method to further improve the angular resolution. For gamma rays, the DOI information, in addition to 2-D position, is obtained by measuring the pulse-height ratio of the MPPC arrays coupled to ends of the scintillator. We present the detailed performance and results of various field tests conducted in Fukushima with the prototype 2-D and DOI Compton cameras. Moreover, we demonstrate stereo measurement of gamma rays that enables measurement of not only direction but also approximate distance to radioactive hotspots.
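The DOI principle, inferring interaction depth from the pulse-height ratio of the MPPCs at the two ends of the scintillator, can be sketched with a simple exponential light-sharing model; the bar length and attenuation coefficient below are illustrative assumptions, not the instrument's calibration:

```python
import math

def doi_from_ratio(ph_near, ph_far, length_mm=10.0, alpha=0.2):
    """Depth of interaction x (mm from the 'near' end), assuming each end
    collects light attenuated exponentially with travel distance:
        ph_near ∝ exp(-alpha * x),  ph_far ∝ exp(-alpha * (L - x)),
    so  log(ph_far / ph_near) = alpha * (2 * x - L)."""
    return (math.log(ph_far / ph_near) / alpha + length_mm) / 2.0

# Simulated event 3 mm from the near end of a 10 mm bar.
x_true = 3.0
ph_near = math.exp(-0.2 * x_true)
ph_far = math.exp(-0.2 * (10.0 - x_true))
print(round(doi_from_ratio(ph_near, ph_far), 6))  # 3.0
```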
Orthographic Stereo Correlator on the Terrain Model for Apollo Metric Images
NASA Technical Reports Server (NTRS)
Kim, Taemin; Husmann, Kyle; Moratto, Zachary; Nefian, Ara V.
2011-01-01
A stereo correlation method in the object domain is proposed to generate accurate and dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce high-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data. In particular, IRG uses a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. Given the camera parameters of an image pair from bundle adjustment in ASP, a correlation window is defined on the terrain, with a predefined surface normal at each post, rather than in the image domain. The squared error between the back-projected images on the local terrain is minimized with respect to the post elevation. This single-dimensional optimization is solved efficiently and improves the accuracy of the elevation estimate.
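The single-dimensional optimization, minimizing the back-projection error with respect to a post's elevation, can be solved by any 1-D search; a golden-section sketch over a toy photometric cost (the cost function here is an illustrative stand-in for the actual back-projection error):

```python
def golden_section_min(f, lo, hi, tol=1e-6):
    """Minimise a unimodal 1-D function, as in the per-post elevation search."""
    g = (5 ** 0.5 - 1) / 2          # inverse golden ratio
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):             # minimum lies in [a, d]
            b, d = d, c
            c = b - g * (b - a)
        else:                       # minimum lies in [c, b]
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

# Toy photometric cost: the two back-projections agree best at h = 2.5.
cost = lambda h: 3.0 * (h - 2.5) ** 2 + 0.1
h_hat = golden_section_min(cost, 0.0, 10.0)
print(round(h_hat, 3))  # 2.5
```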
NASA Astrophysics Data System (ADS)
Motta, Danilo A.; Serillo, André; de Matos, Luciana; Yasuoka, Fatima M. M.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.
2014-03-01
Glaucoma is the second leading cause of blindness in the world, and this number tends to increase as the life expectancy of the population rises. Glaucoma refers to eye conditions that damage the optic nerve, which carries visual information from the eye to the brain; damage to it therefore compromises the patient's vision. In most cases the damage to the optic nerve is irreversible and is caused by increased intraocular pressure. A main diagnostic challenge is detecting the disease early, because no symptoms are present in the initial stage; by the time glaucoma is detected, it is often already advanced. Currently, evaluation of the optic disc is performed with sophisticated fundus cameras, which are inaccessible to the majority of the Brazilian population. The purpose of this project is to develop a dedicated fundus camera, without fluorescein angiography or a red-free system, to acquire 3D images of the optic disc region. The innovation is a new, simplified stereo-optical design that enables 3D image capture and, at the same time, quantitative measurement of the excavation and topography of the optic nerve, something traditional fundus cameras do not do. Dedicated hardware and software are developed for this ophthalmic instrument to permit quick capture and printing of high-resolution 3D images and videos of the optic disc region (20° field of view) in mydriatic and non-mydriatic modes.
GPU-based real-time trinocular stereo vision
NASA Astrophysics Data System (ADS)
Yao, Yuanbin; Linton, R. J.; Padir, Taskin
2013-01-01
Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which benefits applications such as distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal is to achieve real-time processing speed on a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA toolkit. The results are compared in accuracy and speed to verify the improvement.
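The winner-take-all fusion step, combining matching-cost volumes from the different camera pairs and picking the lowest-cost disparity per pixel, reduces to a sketch like the following (toy cost volumes, not the paper's GPU kernels):

```python
import numpy as np

def wta_fuse(cost_h, cost_v):
    """Winner-take-all over summed cost volumes of shape (H, W, D):
    the fused disparity at each pixel is the argmin of the combined cost."""
    return np.argmin(cost_h + cost_v, axis=2)

# One-pixel toy example with three disparity hypotheses.
cost_h = np.array([[[2.0, 1.0, 3.0]]])   # horizontal pair prefers d = 1
cost_v = np.array([[[3.0, 1.0, 2.0]]])   # vertical pair agrees on d = 1
print(wta_fuse(cost_h, cost_v))  # [[1]]
```

Because the argmin is independent per pixel, the operation parallelizes trivially, which is what makes it attractive for a GPGPU implementation.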
Ames Stereo Pipeline for Operation IceBridge
NASA Astrophysics Data System (ADS)
Beyer, R. A.; Alexandrov, O.; McMichael, S.; Fong, T.
2017-12-01
We are using the NASA Ames Stereo Pipeline to process Operation IceBridge Digital Mapping System (DMS) images into terrain models and to align them with the simultaneously acquired LIDAR data (ATM and LVIS). The expected outcome is to create a contiguous, high-resolution terrain model for each flight that Operation IceBridge has flown during its eight year history of Arctic and Antarctic flights. There are some existing terrain models in the NSIDC repository that cover 2011 and 2012 (out of the total period of 2009 to 2017), which were made with the Agisoft Photoscan commercial software. Our open-source stereo suite has been verified to create terrains of similar quality. The total number of images we expect to process is around 5 million. There are numerous challenges with these data: accurate determination and refinement of camera pose when the images were acquired based on data logged during the flights and/or using information from existing orthoimages, aligning terrains with little or no features, images containing clouds, JPEG artifacts in input imagery, inconsistencies in how data was acquired/archived over the entire period, not fully reliable camera calibration files, and the sheer amount of data. We will create the majority of terrain models at 40 cm/pixel with a vertical precision of 10 to 20 cm. In some circumstances when the aircraft was flying higher than usual, those values will get coarser. We will create orthoimages at 10 cm/pixel (with the same caveat that some flights are at higher altitudes). These will differ from existing orthoimages by using the underlying terrain we generate rather than some pre-existing very low-resolution terrain model that may differ significantly from what is on the ground at the time of IceBridge acquisition. The results of this massive processing will be submitted to the NSIDC so that cryosphere researchers will be able to use these data for their investigations.
Using virtual reality for science mission planning: A Mars Pathfinder case
NASA Technical Reports Server (NTRS)
Kim, Jacqueline H.; Weidner, Richard J.; Sacks, Allan L.
1994-01-01
NASA's Mars Pathfinder Project requires a Ground Data System (GDS) that supports both engineering and scientific payloads with reduced mission operations staffing and short planning schedules. Successful surface operation of the lander camera also requires efficient mission planning and accurate pointing of the camera. To meet these challenges, a new software strategy was developed that integrates virtual reality technology with existing navigational ancillary information and image processing capabilities. The result is an interactive, workstation-based application that provides a high-resolution, 3-dimensional stereo display of Mars as if it were viewed through the lander camera. The design, implementation strategy, and parametric specification phases of the software development were completed, and the prototype was tested. When completed, the software will allow scientists and mission planners to access simulated and actual scenes of the Martian surface. The perspective from the lander camera will enable scientists to plan activities more accurately and completely. The application will also support the sequence and command generation process and will allow testing and verification of camera-pointing commands via simulation.
Analysis of Camera Arrays Applicable to the Internet of Things.
Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing
2016-03-22
The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is important for stereo display and comfortable viewing. We model two kinds of camera arrays, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Although different kinds of camera arrays are used in various applications and analyzed in the research literature, there are few direct comparisons between them. We therefore make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used as either a parallel or a converged array, and take images and videos with it to verify the threshold.
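The vertical-parallax difference between the two array types can be checked with a toy pinhole model: parallel cameras differ only by a translation along the baseline, so image y-coordinates match exactly, while toed-in (converged) cameras acquire a keystone-like vertical offset for off-axis points. All geometry below is an illustrative assumption:

```python
import numpy as np

def project(point, cam_x, yaw, f=1.0):
    """Pinhole projection for a camera at (cam_x, 0, 0), toed in by `yaw`
    radians about the vertical axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])
    X = R @ (np.asarray(point) - np.array([cam_x, 0.0, 0.0]))
    return f * X[0] / X[2], f * X[1] / X[2]

P = (0.5, 0.3, 2.0)                  # off-axis scene point
half_b = 0.1                         # half of the inter-camera distance
toe = np.arctan2(half_b, 2.0)        # converge on a point 2 m ahead

# Parallel rig: identical y-coordinates, zero vertical parallax.
y_par = project(P, -half_b, 0.0)[1] - project(P, +half_b, 0.0)[1]
# Converged rig: toe-in changes each camera's depth axis, so y differs.
y_conv = project(P, -half_b, +toe)[1] - project(P, +half_b, -toe)[1]
print(y_par == 0.0, abs(y_conv) > 0.0)  # True True
```

The vertical offset shrinks as the shooting distance grows, which is consistent with the existence of a distance threshold beyond which converged cameras behave acceptably.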
Researches on hazard avoidance cameras calibration of Lunar Rover
NASA Astrophysics Data System (ADS)
Li, Chunyan; Wang, Li; Lu, Xin; Chen, Jihua; Fan, Shenghong
2017-11-01
China's lunar lander and rover will be launched in 2013 to accomplish the mission objectives of lunar soft landing and patrol exploration. The lunar rover has a forward-facing stereo camera pair (Hazcams) for hazard avoidance, and Hazcam calibration is essential for stereo vision. The Hazcam optics are f-theta fish-eye lenses with a 120°×120° horizontal/vertical field of view (FOV) and a 170° diagonal FOV. They introduce significant distortion in images, and the acquired images are quite warped, so conventional camera calibration algorithms no longer work well. A photogrammetric calibration method for the geometric model of this type of fish-eye optic is investigated in this paper. In the method, the Hazcam model is represented by collinearity equations with interior orientation and exterior orientation parameters [1] [2]. For high-precision applications, the calibration model is formulated with radial symmetric distortion and decentering distortion, as well as parameters modeling affinity and shear, based on the fisheye deformation model [3] [4]. The proposed method has been applied to the stereo camera calibration system for the lunar rover.
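The f-theta (equidistant) fisheye model underlying the Hazcam optics maps the off-axis angle linearly to radial image distance, r = f * theta, with distortion terms added on top; a minimal sketch in which the distortion coefficients are placeholders, not the calibrated values:

```python
import math

def ftheta_project(X, Y, Z, f, k=(0.0, 0.0)):
    """Equidistant (f-theta) fisheye projection: radial image distance
    r = f * theta, with an optional symmetric radial distortion polynomial
    (coefficients k are placeholders, not calibrated values)."""
    theta = math.atan2(math.hypot(X, Y), Z)   # angle off the optical axis
    r = f * theta * (1.0 + k[0] * theta**2 + k[1] * theta**4)
    phi = math.atan2(Y, X)                    # azimuth in the image plane
    return r * math.cos(phi), r * math.sin(phi)

print(ftheta_project(0.0, 0.0, 1.0, f=2.0))  # (0.0, 0.0): on-axis ray
u, v = ftheta_project(1.0, 0.0, 0.0, f=2.0)  # ray 90 degrees off-axis
print(round(u, 6))  # 3.141593: r = f * pi/2, finite even at 90 degrees
```

Unlike the perspective model, this projection keeps rays up to (and beyond) 90° off-axis at finite image radius, which is why a 170° diagonal FOV is representable at all.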
Diving-flight aerodynamics of a peregrine falcon (Falco peregrinus).
Ponitz, Benjamin; Schmitz, Anke; Fischer, Dominik; Bleckmann, Horst; Brücker, Christoph
2014-01-01
This study investigates the aerodynamics of the falcon Falco peregrinus while diving. During a dive, peregrines can reach velocities of more than 320 km h⁻¹. Unfortunately, in freely roaming falcons, these high velocities prohibit a precise determination of flight parameters such as velocity and acceleration as well as body shape and wing contour. Therefore, individual F. peregrinus were trained to dive in front of a vertical dam with a height of 60 m. The presence of a well-defined background allowed us to reconstruct the flight path and the body shape of the falcon during certain flight phases. Flight trajectories were obtained with a stereo high-speed camera system. In addition, body images of the falcon were taken from two perspectives with a high-resolution digital camera. The dam allowed us to match the high-resolution images obtained from the digital camera with the corresponding images taken with the high-speed cameras. Using these data we built a life-size model of F. peregrinus and used it to measure the drag and lift forces in a wind tunnel. We compared these forces acting on the model with the data obtained from the 3-D flight path trajectory of the diving F. peregrinus. Visualizations of the flow in the wind tunnel uncovered details of the flow structure around the falcon's body, which suggest local regions of flow separation. High-resolution pictures of the diving peregrine indicate that feathers pop up in the regions corresponding to those where flow separation occurred on the model falcon.
Influence of camera parameters on the quality of mobile 3D capture
NASA Astrophysics Data System (ADS)
Georgiev, Mihail; Boev, Atanas; Gotchev, Atanas; Hannuksela, Miska
2010-01-01
We investigate the effect of camera de-calibration on the quality of depth estimation. A dense depth map is a format particularly suitable for mobile 3D capture (scalable and screen independent). However, in real-world scenarios cameras might move from their designated positions (vibrations, thermal bending). For our experiments, we created a test framework, described in the paper. We investigate how such mechanical changes affect four different stereo-matching algorithms. We also assess how different geometric corrections (none, motion-compensation-like, full rectification) affect estimation quality (i.e., how much offset can still be compensated with a "crop" over a larger CCD). Finally, we show how the estimated camera pose change (E) relates to stereo-matching quality, which can be used as a "rectification quality" measure.
Parallel-Processing Software for Correlating Stereo Images
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric
2007-01-01
A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
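The segmentation-and-assignment scheme described above can be sketched as follows. This is a toy illustration: the tile size and the round-robin policy are assumptions, and a real implementation would hand each bucket of tiles to a separate process or CPU:

```python
def tile_image(width, height, tile_w, tile_h):
    """Split an image into subimage rectangles (x, y, w, h),
    clipping the last row/column at the image border."""
    tiles = []
    for y in range(0, height, tile_h):
        for x in range(0, width, tile_w):
            tiles.append((x, y, min(tile_w, width - x), min(tile_h, height - y)))
    return tiles

def assign_to_cpus(tiles, n_cpus):
    """Round-robin assignment of subimages to CPUs; each CPU would then
    correlate its tiles against the other image of the stereo pair."""
    buckets = [[] for _ in range(n_cpus)]
    for i, tile in enumerate(tiles):
        buckets[i % n_cpus].append(tile)
    return buckets

tiles = tile_image(1024, 1024, 256, 256)   # 16 subimages
work = assign_to_cpus(tiles, 4)            # 4 subimages per CPU
```

The round-robin split keeps the per-CPU workload balanced when subimages cost roughly the same to correlate; a work-stealing queue would be the usual refinement when they do not.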
Satellite markers: a simple method for ground truth car pose on stereo video
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco
2018-04-01
Predicting the future location of other cars is a must for advanced safety systems. Remote estimation of car pose, and particularly of its heading angle, is key to predicting its future location. Stereo vision systems make it possible to obtain the 3D information of a scene. Ground truth in this specific context is referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data-fusion task involving the combination of different kinds of sensors. The novelty of this paper is a method to generate ground-truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system in motion, since such a system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for 3D ground-truth generation. In our case study, we focus on accurate heading-angle estimation of a moving car under realistic imagery. As outcomes, our satellite-marker method provides accurate car pose at frame level, along with the instantaneous spatial orientation of each camera at frame level.
Automatic multi-camera calibration for deployable positioning systems
NASA Astrophysics Data System (ADS)
Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan
2012-06-01
Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than that of the manual method, and that automated calibration can replace manual calibration.
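The essential-matrix relation underlying the 5-point calibration above can be verified with a small numeric sketch using pure-Python 3×3 helpers. The pose values are arbitrary, and a real system would estimate E from point correspondences rather than build it from a known pose:

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def skew(t):
    """Cross-product matrix [t]x, so matvec(skew(t), v) equals t x v."""
    return [[0.0, -t[2], t[1]],
            [t[2], 0.0, -t[0]],
            [-t[1], t[0], 0.0]]

# Arbitrary relative pose of camera 2 w.r.t. camera 1: X2 = R*X1 + t
a = math.radians(10.0)
R = [[math.cos(a), 0.0, math.sin(a)],
     [0.0, 1.0, 0.0],
     [-math.sin(a), 0.0, math.cos(a)]]
t = [0.3, 0.05, 0.0]
E = matmul(skew(t), R)                  # essential matrix E = [t]x R

# Any 3D point seen by both cameras satisfies the epipolar constraint
# x2^T E x1 = 0 in normalized image coordinates.
X1 = [0.4, -0.2, 3.0]
X2 = [c + ti for c, ti in zip(matvec(R, X1), t)]
x1 = [X1[0] / X1[2], X1[1] / X1[2], 1.0]
x2 = [X2[0] / X2[2], X2[1] / X2[2], 1.0]
Ex1 = matvec(E, x1)
residual = sum(x2[i] * Ex1[i] for i in range(3))   # ~0 up to rounding
```

The 5-point method solves the inverse problem: given at least five such (x1, x2) pairs, it recovers an E satisfying this constraint, from which the relative pose (R, t up to scale) is extracted.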
Depth estimation and camera calibration of a focused plenoptic camera for visual odometry
NASA Astrophysics Data System (ADS)
Zeller, Niclas; Quint, Franz; Stilla, Uwe
2016-08-01
This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide it into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve-fitting approach. Both model-based methods show significant advantages compared to the curve-fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera.
These images can be synthesized fully in focus, which enhances the search for stereo correspondences. In contrast to monocular visual odometry approaches, the scale of the scene can be observed thanks to the calibration of the individual depth maps. Furthermore, the light-field information promises better tracking capabilities than the monocular case. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
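The "Kalman-like" update of a depth hypothesis by further micro-image observations reduces, in its simplest scalar form, to inverse-variance weighting. This is a minimal sketch of that idea, not the authors' full filter, and the numbers are invented:

```python
def fuse_depth(d1, var1, d2, var2):
    """Fuse two depth estimates with variances var1, var2.
    The fused variance is always smaller than either input, so each
    additional micro-image observation tightens the estimate."""
    w = var2 / (var1 + var2)            # weight of the first estimate
    d = w * d1 + (1.0 - w) * d2
    var = (var1 * var2) / (var1 + var2)
    return d, var

# Seed estimate from one micro-image pair, refined by two more.
d, v = fuse_depth(10.0, 1.0, 12.0, 1.0)   # equal variances -> midpoint, half the variance
d, v = fuse_depth(d, v, 10.5, 2.0)        # noisier observation gets less weight
```

Each fused (d, v) pair is exactly the per-pixel content of the probabilistic depth map described above: an estimate plus its variance.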
Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa
2016-08-08
We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors by co-design of both an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field: the on-chip beam-splitter horizontally divides rays according to incident angle, and the inner meta-micro-lens collects the divided rays into pixels with small optical loss. Cross-talk between adjacent light-field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. By selecting two images from the one-dimensional light-field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light-field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision as a way to reduce the 3D fatigue of viewers.
The PanCam Instrument for the ExoMars Rover
Coates, A.J.; Jaumann, R.; Griffiths, A.D.; Leff, C.E.; Schmitz, N.; Josset, J.-L.; Paar, G.; Gunn, M.; Hauber, E.; Cousins, C.R.; Cross, R.E.; Grindrod, P.; Bridges, J.C.; Balme, M.; Gupta, S.; Crawford, I.A.; Irwin, P.; Stabbins, R.; Tirsch, D.; Vago, J.L.; Theodorou, T.; Caballo-Perucha, M.; Osinski, G.R.
2017-01-01
The scientific objectives of the ExoMars rover are designed to answer several key questions in the search for life on Mars. In particular, the unique subsurface drill will address some of these, such as the possible existence and stability of subsurface organics. PanCam will establish the surface geological and morphological context for the mission, working in collaboration with other context instruments. Here, we describe the PanCam scientific objectives in geology, atmospheric science, and 3-D vision. We discuss the design of PanCam, which includes a stereo pair of Wide Angle Cameras (WACs), each of which has an 11-position filter wheel and a High Resolution Camera (HRC) for high-resolution investigations of rock texture at a distance. The cameras and electronics are housed in an optical bench that provides the mechanical interface to the rover mast and a planetary protection barrier. The electronic interface is via the PanCam Interface Unit (PIU), and power conditioning is via a DC-DC converter. PanCam also includes a calibration target mounted on the rover deck for radiometric calibration, fiducial markers for geometric calibration, and a rover inspection mirror. Key Words: Mars—ExoMars—Instrumentation—Geology—Atmosphere—Exobiology—Context. Astrobiology 17, 511–541.
NASA Technical Reports Server (NTRS)
Hertel, R. J.
1979-01-01
An electro-optical method to measure the aeroelastic deformations of wind tunnel models is examined. The multitarget tracking performance of one of the two electronic cameras comprising the stereo pair is modeled and measured. The properties of the targets at the model, the camera optics, target illumination, number of targets, acquisition time, target velocities, and tracker performance are considered. The electronic camera system is shown to be capable of locating, measuring, and following the positions of 5 to 50 targets attached to the model at measuring rates up to 5000 targets per second.
LWIR passive perception system for stealthy unmanned ground vehicle night operations
NASA Astrophysics Data System (ADS)
Lee, Daren; Rankin, Arturo; Huertas, Andres; Nash, Jeremy; Ahuja, Gaurav; Matthies, Larry
2016-05-01
Resupplying forward-deployed units in rugged terrain in the presence of hostile forces creates a high threat to manned air and ground vehicles. An autonomous unmanned ground vehicle (UGV) capable of navigating stealthily at night in off-road and on-road terrain could significantly increase the safety and success rate of such resupply missions for warfighters. Passive night-time perception of terrain and obstacle features is a vital requirement for such missions. As part of the ONR 30 Autonomy Team, the Jet Propulsion Laboratory developed a passive, low-cost night-time perception system under the ONR Expeditionary Maneuver Warfare and Combating Terrorism Applied Research program. Using a stereo pair of forward looking LWIR uncooled microbolometer cameras, the perception system generates disparity maps using a local window-based stereo correlator to achieve real-time performance while maintaining low power consumption. To overcome the lower signal-to-noise ratio and spatial resolution of LWIR thermal imaging technologies, a series of pre-filters were applied to the input images to increase the image contrast and stereo correlator enhancements were applied to increase the disparity density. To overcome false positives generated by mixed pixels, noisy disparities from repeated textures, and uncertainty in far range measurements, a series of consistency, multi-resolution, and temporal based post-filters were employed to improve the fidelity of the output range measurements. The stereo processing leverages multi-core processors and runs under the Robot Operating System (ROS). The night-time passive perception system was tested and evaluated on fully autonomous testbed ground vehicles at SPAWAR Systems Center Pacific (SSC Pacific) and Marine Corps Base Camp Pendleton, California. This paper describes the challenges, techniques, and experimental results of developing a passive, low-cost perception system for night-time autonomous navigation.
Camera calibration: active versus passive targets
NASA Astrophysics Data System (ADS)
Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli
2011-11-01
Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.
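The phase-shift coding mentioned above recovers, at each camera pixel, a display coordinate from a few sinusoidal patterns. A minimal four-step decoder is sketched below; this is the generic textbook formula, not necessarily the authors' exact scheme:

```python
import math

def decode_phase(i0, i1, i2, i3):
    """Recover the fringe phase from four samples
    I_k = A + B*cos(phi + k*pi/2). The result is independent of the
    ambient level A and the contrast B, which is what makes the coding
    robust to illumination and focus."""
    return math.atan2(i3 - i1, i0 - i2)

# Synthesize the four captured intensities for a known phase, then decode.
A, B, phi = 0.5, 0.3, 1.234
samples = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = decode_phase(*samples)
```

Because A and B cancel in the differences (i0 - i2 = 2B·cos φ, i3 - i1 = 2B·sin φ), the recovered phase localizes the mark with sub-pixel accuracy even when the display is out of focus.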
Person and gesture tracking with smart stereo cameras
NASA Astrophysics Data System (ADS)
Gordon, Gaile; Chen, Xiangrong; Buck, Ron
2008-02-01
Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to provide the low power, small footprint, and low cost required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications.
In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.
NASA Astrophysics Data System (ADS)
Bagnardi, M.; González, P. J.; Hooper, A. J.; Richter, N.; Walter, T. R.
2016-12-01
Precise, quantitative analyses of topographic changes associated with the emplacement of volcanic products provide the means to infer key parameters for the assessment of hazards associated with volcanic processes. Different techniques can be applied to generate high-resolution digital elevation models (DEMs), using both ground-based and air/space-borne sensors. In this study, we first evaluate the use of very high resolution (VHR) tri-stereo optical imagery from the Pleiades-1 satellite constellation for volcanological applications. To this end, we generate a 1 m resolution DEM of Fogo Volcano, Cape Verde, and use this DEM to quantify topographic changes associated with the 2014-2015 eruption. We observe that, when compared with the classic stereo approach, the use of tri-stereo imagery greatly enhances the ability of photogrammetric techniques to estimate heights, by increasing the point cloud density and by reducing the number of pixels with no measurements. From the Pleiades-1 post-eruption topography we subtract heights from a pre-eruptive DEM, obtained using spaceborne synthetic aperture radar (SAR) data from the TanDEM-X mission, and estimate the volume of the 2014-2015 lava flow (~46 million m³) and the mean output rate throughout the eruption (5-7 m³/s). We subsequently use complementary datasets from a variety of sensors (Terrestrial Laser Scanning, UAV optical imagery, Structure from Motion from hand-held DSLR cameras) to fill gaps in Pleiades-1 data coverage and to generate a merged, high-resolution DEM of the volcano. To weight the contribution of each dataset, we carry out a comparative analysis of the accuracy of the different DEMs and identify advantages and disadvantages associated with the use of each technique.
Finally, using SAR data acquired by the Sentinel-1a satellite, we apply SAR interferometry (InSAR) and measure the lava flow subsidence due to cooling and contraction in the months after its emplacement and compare this to the measured lava flow thickness. Maximum subsidence is recorded in those areas where lava flow thickness is also maximum, and where the substrate onto which the lava flow was emplaced is highly compactable.
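The flow-volume estimate described above is, at its core, a DEM difference integrated over the grid-cell area. The following toy sketch uses made-up grids (the real products are 1 m Pleiades and TanDEM-X DEMs):

```python
def lava_volume(pre_dem, post_dem, cell_area_m2):
    """Volume of emplaced material: sum of positive height changes times
    the grid-cell footprint. Negative cells are ignored here as noise or
    erosion; a real workflow would treat them more carefully."""
    vol = 0.0
    for row_pre, row_post in zip(pre_dem, post_dem):
        for h_pre, h_post in zip(row_pre, row_post):
            dh = h_post - h_pre
            if dh > 0:
                vol += dh * cell_area_m2
    return vol

pre  = [[100.0, 101.0], [102.0, 103.0]]
post = [[108.0, 101.0], [110.0, 103.0]]       # two cells thickened by 8 m
v = lava_volume(pre, post, cell_area_m2=1.0)  # -> 16.0 m^3
rate = v / 3600.0                             # mean output over a toy 1-hour eruption
```

Dividing the integrated volume by the eruption duration gives the mean output rate, which is how the 5-7 m³/s figure above is derived from the ~46 million m³ volume.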
High resolution hybrid optical and acoustic sea floor maps (Invited)
NASA Astrophysics Data System (ADS)
Roman, C.; Inglis, G.
2013-12-01
This abstract presents a method for creating hybrid optical and acoustic sea floor reconstructions at centimeter-scale grid resolutions with robotic vehicles. Multibeam sonar and stereo vision are two common sensing modalities with complementary strengths that are well suited for data fusion. We have recently developed an automated two-stage pipeline to create such maps. The steps can be broken down as navigation refinement and map construction. During navigation refinement, a graph-based optimization algorithm is used to align 3D point clouds created with both the multibeam sonar and the stereo cameras. The process combats the typical growth in navigation error that has a detrimental effect on map fidelity and typically introduces artifacts at small grid sizes. During this process we are able to automatically register local point clouds created by each sensor to themselves and to each other where they overlap in a survey pattern. The process also estimates the sensor offsets, such as heading, pitch and roll, that describe how each sensor is mounted to the vehicle. The end result of the navigation step is a refined vehicle trajectory that ensures the point clouds from each sensor are consistently aligned, together with the individual sensor offsets. In the mapping step, grid cells in the map are selectively populated by choosing data points from each sensor in an automated manner. The selection process is designed to pick points that preserve the best characteristics of each sensor and honor specific map-quality criteria to reduce outliers and ghosting. In general, the algorithm selects dense 3D stereo points in areas of high texture and point density. In areas where the stereo vision is poor, such as in a scene with low contrast or texture, multibeam sonar points are inserted into the map. This process is automated and results in a hybrid map populated with data from both sensors. Additional cross-modality checks are made to reject outliers in a robust manner.
The final hybrid map retains the strengths of both sensors and shows improvement over the single modality maps and a naively assembled multi-modal map where all the data points are included and averaged. Results will be presented from marine geological and archaeological applications using a 1350 kHz BlueView multibeam sonar and 1.3 megapixel digital still cameras.
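The per-cell source selection in the mapping step can be sketched as a simple texture-gated rule. The thresholds and scores below are invented for illustration; the actual pipeline applies additional consistency and outlier checks:

```python
def choose_cell_source(stereo_points, sonar_points, texture_score,
                       texture_min=0.5, min_stereo_pts=3):
    """Pick which sensor populates a grid cell: dense stereo depths in
    well-textured areas with enough points, multibeam sonar elsewhere."""
    if texture_score >= texture_min and len(stereo_points) >= min_stereo_pts:
        return "stereo", stereo_points
    return "sonar", sonar_points

# Textured cell with dense stereo coverage vs. a low-contrast cell.
src1, _ = choose_cell_source([1.20, 1.30, 1.25, 1.28], [1.4], texture_score=0.9)
src2, _ = choose_cell_source([1.20], [1.40, 1.38], texture_score=0.1)
```

Gating on both texture and point density is what lets the hybrid map keep stereo's fine detail where it is reliable while falling back to sonar's robustness where vision fails.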
An overview of the stereo correlation and triangulation formulations used in DICe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, Daniel Z.
This document provides a detailed overview of the stereo correlation algorithm and triangulation formulation used in the Digital Image Correlation Engine (DICe) to triangulate three-dimensional motion in space given the image coordinates and camera calibration parameters.
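A common triangulation formulation takes the midpoint of the closest approach between the two back-projected rays. The sketch below implements that standard construction, which is not necessarily the exact formulation used in DICe:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the closest approach between rays p1(s) = c1 + s*d1
    and p2(t) = c2 + t*d2 (camera centers c, back-projected ray
    directions d from the matched image coordinates)."""
    w = [c1[i] - c2[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = b * b - a * c                 # zero only for parallel rays
    s = (d * c - b * e) / denom
    t = (d * b - a * e) / denom
    p1 = [c1[i] + s * d1[i] for i in range(3)]
    p2 = [c2[i] + t * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2.0 for i in range(3)]

P = (0.2, 0.3, 2.0)                        # ground-truth 3D point
c1, c2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)  # two camera centers
d1 = [P[i] - c1[i] for i in range(3)]      # ideal (noise-free) rays
d2 = [P[i] - c2[i] for i in range(3)]
X = triangulate_midpoint(c1, d1, c2, d2)   # recovers P for exact rays
```

With noisy correspondences the two rays are skew, and the midpoint is a least-squares compromise; the residual gap between the rays is a useful per-point quality measure.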
Kirk, R.L.; Howington-Kraus, E.; Redding, B.; Galuszka, D.; Hare, T.M.; Archinal, B.A.; Soderblom, L.A.; Barrett, J.M.
2003-01-01
We analyzed narrow-angle Mars Orbiter Camera (MOC-NA) images to produce high-resolution digital elevation models (DEMs) in order to provide topographic and slope information needed to assess the safety of candidate landing sites for the Mars Exploration Rovers (MER) and to assess the accuracy of our results by a variety of tests. The mapping techniques developed also support geoscientific studies and can be used with all present and planned Mars-orbiting scanner cameras. Photogrammetric analysis of MOC stereopairs yields DEMs with 3-pixel (typically 10 m) horizontal resolution, vertical precision consistent with ±0.22 pixel matching errors (typically a few meters), and slope errors of 1-3°. These DEMs are controlled to the Mars Orbiter Laser Altimeter (MOLA) global data set and consistent with it at the limits of resolution. Photoclinometry yields DEMs with single-pixel (typically ~3 m) horizontal resolution and submeter vertical precision. Where the surface albedo is uniform, the dominant error is 10-20% relative uncertainty in the amplitude of topography and slopes after "calibrating" photoclinometry against a stereo DEM to account for the influence of atmospheric haze. We mapped portions of seven candidate MER sites and the Mars Pathfinder site. Safety of the final four sites (Elysium, Gusev, Isidis, and Meridiani) was assessed by mission engineers by simulating landings on our DEMs of "hazard units" mapped in the sites, with results weighted by the probability of landing on those units; summary slope statistics show that most hazard units are smooth, with only small areas of etched terrain in Gusev crater posing a slope hazard.
A Photogrammetric Pipeline for the 3D Reconstruction of CaSSIS Images on Board ExoMars TGO
NASA Astrophysics Data System (ADS)
Simioni, E.; Re, C.; Mudric, T.; Pommerol, A.; Thomas, N.; Cremonese, G.
2017-07-01
CaSSIS (Colour and Stereo Surface Imaging System) is the stereo imaging system onboard the European Space Agency and ROSCOSMOS ExoMars Trace Gas Orbiter (TGO), which was launched on 14 March 2016 and entered an elliptical Mars orbit on 19 October 2016. During the first bounded orbits, CaSSIS returned its first multiband images, taken on 22 and 26 November 2016. The telescope acquired 11 images, each composed of 30 framelets, of the Martian surface near the Hebes Chasma and Noctis Labyrinthus regions, reaching a distance of 250 km from the surface at closest approach. Despite the eccentricity of this first orbit, CaSSIS provided one stereo pair with a mean ground resolution of 6 m from a mean distance of 520 km. The team at the Astronomical Observatory of Padova (OAPD-INAF) is involved in several stereo-oriented missions and is developing software for the generation of Digital Terrain Models from CaSSIS images. The software will then be adapted for other projects involving stereo camera systems. To compute accurate 3D models, several sequential methods and tools have been developed. The preliminary pipeline provides the generation of rectified images from the CaSSIS framelets, a matching core, and post-processing methods. The software includes, in particular, automatic tie-point detection by the Speeded Up Robust Features (SURF) operator, an initial search for correspondences through a Normalized Cross-Correlation (NCC) algorithm, and an Adaptive Least Squares Matching (LSM) algorithm in a hierarchical approach. This work shows a preliminary DTM generated from the first CaSSIS stereo images.
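The NCC correspondence search at the heart of such a matching core can be sketched in one dimension. This is illustrative only; the CaSSIS pipeline runs hierarchically on 2D patches and refines matches with least-squares matching:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches:
    1.0 for signals identical up to gain and offset, lower otherwise."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def best_disparity(left, right, x, win, max_d):
    """Slide a window along the left scanline and keep the shift with
    the highest correlation against the right-image template."""
    tpl = right[x:x + win]
    scores = [(ncc(left[x + d:x + d + win], tpl), d) for d in range(max_d + 1)]
    return max(scores)[1]

left  = [0, 0, 0, 9, 7, 5, 0, 0, 0, 0]
right = [0, 9, 7, 5, 0, 0, 0, 0, 0, 0]   # same feature, shifted by 2
d = best_disparity(left, right, 1, win=3, max_d=4)
```

NCC's invariance to gain and offset is why it is a standard first-pass matcher between images taken under different illumination; LSM then refines the integer shift to sub-pixel accuracy.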
Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).
Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong
2016-02-06
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to suit other catadioptric-based omnistereo vision systems under different circumstances.
NASA Astrophysics Data System (ADS)
Reiss, D.; Zanetti, M.; Neukum, G.
2011-09-01
Active dust devils were observed in Syria Planum in Mars Observer Camera - Wide Angle (MOC-WA) and High Resolution Stereo Camera (HRSC) imagery acquired on the same day with a time delay of ˜26 min. The unique operating technique of the HRSC allowed the measurement of the traverse velocities and directions of motion. Large dust devils observed in the HRSC image could be retraced to their counterparts in the earlier acquired MOC-WA image. Minimum lifetimes of three large (avg. ˜700 m in diameter) dust devils are ˜26 min, as inferred from retracing. For one of these large dust devils (˜820 m in diameter) it was possible to calculate a minimum lifetime of ˜74 min based on the measured horizontal speed and the length of its associated dust devil track. The comparison of our minimum lifetimes with previously published results on minimum and average lifetimes of small (˜19 m in diameter, avg. min. lifetime of ˜2.83 min) and medium (˜185 m in diameter, avg. min. lifetime of ˜13 min) dust devils implies that larger dust devils on Mars are active for much longer periods of time than smaller ones, as is the case for terrestrial dust devils. Knowledge of martian dust devil lifetimes is an important parameter for the calculation of dust lifting rates. Estimates of the contribution of large dust devils (>300-1000 m in diameter) indicate that they may contribute, at least regionally, ˜50% of the dust entrained by dust devils into the atmosphere compared to dust devils <300 m in diameter, given that the size-frequency distribution follows a power law. Although large dust devils occur relatively rarely and their sediment fluxes are probably lower compared to smaller dust devils, their contribution to the background dust opacity on Mars could be, at least regionally, large due to their longer lifetimes and their ability to lift dust into high atmospheric layers.
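The lifetime bound used above is simple kinematics: horizontal speed from the two-image displacement, then track length divided by speed. The numbers below are made up for illustration, not values reported by the study:

```python
def min_lifetime_minutes(displacement_m, dt_min, track_length_m):
    """Lower bound on a dust devil's lifetime: it moved at
    displacement/dt and must have been active long enough to leave
    a track of the given length at that speed."""
    speed_m_per_min = displacement_m / dt_min
    return track_length_m / speed_m_per_min

# Hypothetical example: 3.9 km displacement between the two images
# taken 26 min apart, with an 11.1 km track visible on the surface.
life = min_lifetime_minutes(3900.0, 26.0, 11100.0)   # -> 74.0 min
```

This is a minimum because the devil may have been active before the track's visible start or after the second image; the speed is also assumed constant over the track.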
A Cost-Effective Method for Crack Detection and Measurement on Concrete Surfaces
NASA Astrophysics Data System (ADS)
Sarker, M. M.; Ali, T. A.; Abdelfatah, A.; Yehia, S.; Elaksher, A.
2017-11-01
Crack detection and measurement on the surface of concrete structures is currently carried out manually or through Non-Destructive Testing (NDT) such as imaging or scanning. Recent developments in depth (stereo) cameras have presented an opportunity for cost-effective, reliable crack detection and measurement. This study aimed at evaluating the feasibility of the new inexpensive depth camera (ZED) for crack detection and measurement. This depth camera, with its lightweight and portable nature, produces a 3D data file of the imaged surface. The ZED camera was utilized to image a concrete surface, and the 3D file was processed to detect and analyse cracks. This article describes the outcome of the experiment carried out with the ZED camera as well as the processing tools used for crack detection and analysis. Crack properties of interest were length, orientation, and width. The use of the ZED camera allowed for distinction between surface and concrete cracks. The ZED's high-resolution capability and point-cloud capture technology helped generate dense 3D data in low-lighting conditions. The results showed the ability of the ZED camera to capture the depth changes between surface (render) cracks and cracks that form in the concrete itself.
MRO High Resolution Imaging Science Experiment (HiRISE): Instrument Development
NASA Technical Reports Server (NTRS)
Delamere, Alan; Becker, Ira; Bergstrom, Jim; Burkepile, Jon; Day, Joe; Dorn, David; Gallagher, Dennis; Hamp, Charlie; Lasco, Jeffrey; Meiers, Bill
2003-01-01
The primary functional requirement of the HiRISE imager is to allow identification of both predicted and unknown features on the surface of Mars at a much finer resolution and contrast than previously possible. This results in a camera with a very wide swath width, 6 km at 300 km altitude, and a high signal-to-noise ratio, >100:1. Generation of terrain maps with 30 cm vertical resolution from stereo images requires very accurate geometric calibration. The project limitations of mass, cost and schedule make the development challenging. In addition, the spacecraft stability must not be a major limitation to image quality. The nominal orbit for the science phase of the mission is a 3 p.m. orbit of 255 by 320 km with periapsis locked to the south pole. The track velocity is approximately 3,400 m/s.
Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor
Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio
2011-01-01
This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first between the sets of features associated with the images of a stereo pair, and the second between the sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching stages are conducted by means of a fast algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutively acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016
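The clique formulation above can be sketched in a few lines. This is a greedy heuristic under assumed names, not the authors' exact algorithm: each node is a candidate match, and two matches are compatible when they preserve the 3D distance between their feature points (a relative, rigid-motion-invariant constraint):

```python
import numpy as np
from itertools import combinations

# Sketch of consistent feature matching as a clique problem (greedy
# approximation, not the paper's exact method).
def consistent_matches(pts_a, pts_b, candidates, tol=0.05):
    """candidates: list of (i, j) pairs proposing pts_a[i] <-> pts_b[j]."""
    n = len(candidates)
    adj = np.zeros((n, n), dtype=bool)
    for u, v in combinations(range(n), 2):
        (i1, j1), (i2, j2) = candidates[u], candidates[v]
        da = np.linalg.norm(pts_a[i1] - pts_a[i2])   # distance in frame A
        db = np.linalg.norm(pts_b[j1] - pts_b[j2])   # distance in frame B
        adj[u, v] = adj[v, u] = abs(da - db) < tol   # rigid motion preserves it
    # Greedy clique: start from the best-connected match, keep adding
    # matches compatible with everything already accepted.
    order = np.argsort(-adj.sum(axis=1))
    clique = []
    for u in order:
        if all(adj[u, v] for v in clique):
            clique.append(u)
    return [candidates[u] for u in clique]

pts_a = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
pts_b = pts_a + np.array([0.5, 0.2, 0.0])   # rigidly translated copy
cands = [(0, 0), (1, 1), (2, 2), (0, 2)]    # last pair is an outlier
print(consistent_matches(pts_a, pts_b, cands))  # → [(0, 0), (1, 1), (2, 2)]
```

The outlier (0, 2) is incompatible with every correct match, so it cannot join the clique; the three consistent pairs survive.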
The investigation of improved SHARAD profiles over Martian lobate debris aprons
NASA Astrophysics Data System (ADS)
Kim, J.; Baik, H. S.
2016-12-01
The Shallow Radar (SHARAD), a radar sounder on the Mars Reconnaissance Orbiter, has produced highly valuable information about the subsurface of Mars. It has been successfully used to observe complicated subsurface structures of Mars such as polar deposits, pedestal craters and other geomorphic features involving possible subsurface ice bodies. In this study, we compiled all SHARAD profiles over Martian lobate debris aprons (LDAs), whose origins are still the subject of significant debate. To obtain clearer results, we applied the Radon transform for noise filtering. We also performed clutter simulation on a digital elevation model (DEM) of our targets produced by the High Resolution Stereo Camera (HRSC) on Mars Express. Comparing the noise-filtered SHARAD profiles with the clutter simulations, layers could be identified more clearly at many LDAs. We integrated our SHARAD profiles over all mid-latitude LDAs into a GIS; these will be demonstrated together with several radargram structures. However, the discontinuities in the SHARAD profiles proved insufficient as a clue to the origin of the LDAs. Intensive interpretation combining thermal inertia and high-resolution topographic profiles from CTX and HiRISE stereo DTMs will therefore be conducted in further work.
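The rationale for Radon-domain filtering is that linear features in a radargram collapse into isolated peaks in the (angle, offset) domain, where coherent noise can be thresholded away before inverting. A minimal discrete Radon transform (a sketch, not the authors' filter) built from image rotation and column sums:

```python
import numpy as np
from scipy.ndimage import rotate

# Minimal discrete Radon transform: one projection per angle, obtained by
# rotating the image and summing along columns. A linear feature
# concentrates its energy into a single strong (angle, offset) cell.
def radon(img, angles):
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

img = np.zeros((32, 32))
img[:, 16] = 1.0                          # a linear feature of unit amplitude
sino = radon(img, angles=np.arange(0, 180, 15))
# At the aligned angle the 32 samples of the line stack into one bin:
print(sino.shape, sino.max())  # → (12, 32) 32.0
```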
NASA Astrophysics Data System (ADS)
Traxler, Christoph; Ortner, Thomas; Hesina, Gerd; Barnes, Robert; Gupta, Sanjeev; Paar, Gerhard
2017-04-01
High resolution Digital Terrain Models (DTMs) and Digital Outcrop Models (DOMs) are highly useful for geological analysis and mission planning in planetary rover missions. PRo3D, developed as part of the EU-FP7 PRoViDE project, is a 3D viewer in which orbital DTMs and DOMs derived from rover stereo imagery can be rendered in a virtual environment for exploration and analysis. It allows fluent navigation over planetary surface models and provides a variety of measurement and annotation tools to complete an extensive geological interpretation. A key aspect of image collection during planetary rover missions is determining the optimal viewing positions for rover instruments ('wide baseline stereo'). For the collection of high-quality panoramas and stereo imagery, the visibility of regions of interest from those positions, and the amount of common features shared by each stereo pair or image bundle, is crucial. The creation of a highly accurate and reliable 3D surface of the planetary terrain, in the form of an Ordered Point Cloud (OPC), with a low rate of error and a minimum of artefacts, is greatly enhanced by using images that share many features and sufficient overlap for wide baseline stereo or target selection. To support users in the selection of adequate viewpoints, an interactive View Planner was integrated into PRo3D. The user chooses from a set of different rovers and their respective instruments; PRo3D supports, for instance, the PanCam instrument of ESA's ExoMars 2020 rover mission and the Mastcam-Z camera of NASA's Mars2020 mission. The View Planner uses a DTM obtained from orbiter imagery, which can be complemented with rover-derived DOMs as the mission progresses. The selected rover is placed onto a position on the terrain - interactively or using the current rover pose as known from the mission.
The rover's base polygon and its local coordinate axes, and the chosen instrument's up and forward vectors, are visualised. The parameters of the instrument's pan and tilt unit (PTU) can be altered via the user interface, or alternatively calculated by selecting a target point on the visualised DTM. In the 3D view, the visible region of the planetary surface resulting from these settings and the camera field of view is shown as a highlighted region with a red border, representing the instrument's footprint. The camera view is simulated and rendered in a separate window, and PTU parameters can be interactively adjusted, allowing viewpoints, directions, and the expected image to be visualised in real time so that users can fine-tune these settings. In this way, ideal viewpoints and PTU settings for various rover models and instruments can be defined efficiently, resulting in optimal imagery of the regions of interest.
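The geometric core of such a view planner is the mapping from a PTU setting to a viewing ray and its intersection with the terrain. A sketch with hypothetical parameter names and a flat terrain patch standing in for the DTM (PRo3D's actual conventions may differ):

```python
import numpy as np

# Sketch: a pan/tilt setting maps to a viewing direction; intersecting
# that ray with the terrain gives the centre of the instrument footprint.
def view_direction(pan_deg, tilt_deg):
    p, t = np.radians(pan_deg), np.radians(tilt_deg)
    # pan rotates about the local up (z) axis; tilt pitches below the horizon
    return np.array([np.cos(t) * np.cos(p), np.cos(t) * np.sin(p), -np.sin(t)])

def footprint_centre(cam_pos, pan_deg, tilt_deg, ground_z=0.0):
    d = view_direction(pan_deg, tilt_deg)
    if d[2] >= 0:                       # looking at or above the horizon
        return None
    s = (ground_z - cam_pos[2]) / d[2]  # ray/plane intersection parameter
    return cam_pos + s * d

# Camera mast 2 m above flat ground, pan 0, looking 45 degrees down:
# the footprint centre lies 2 m in front of the mast, on the ground.
print(footprint_centre(np.array([0.0, 0.0, 2.0]), 0.0, 45.0))
```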
Depth map generation using a single image sensor with phase masks.
Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki
2016-06-13
Conventional stereo matching systems generate a depth map using two or more digital imaging sensors. Their high cost and bulky size make such systems difficult to use in small cameras. In order to solve this problem, this paper presents a stereo matching system using a single image sensor with the phase masks used for phase-difference auto-focusing. A novel pattern of phase mask array is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw-format sensor output. The proposed method consists of four steps to compute the depth map: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using hierarchical block matching for disparity measurement, and (iv) image matting to fill holes and generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors.
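Step (iii) above, disparity by block matching, can be sketched as a plain single-level SAD search (the paper uses a hierarchical variant; this is only the basic building block): for each block in the left image, find the horizontal shift into the right image that minimises the sum of absolute differences.

```python
import numpy as np

# Plain block-matching disparity via sum of absolute differences (SAD).
def block_disparity(left, right, block=4, max_d=8):
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block]
            best, best_d = np.inf, 0
            for d in range(min(max_d, x) + 1):       # candidate shifts
                cand = right[y:y + block, x - d:x - d + block]
                sad = np.abs(ref.astype(int) - cand.astype(int)).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp

# Right image is the left image shifted by 2 px, so interior blocks
# should report a disparity of 2:
left = np.tile(np.arange(16, dtype=np.uint8) * 16, (16, 1))
right = np.roll(left, -2, axis=1)
print(block_disparity(left, right)[2, 2])  # → 2
```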
Diving-Flight Aerodynamics of a Peregrine Falcon (Falco peregrinus)
Ponitz, Benjamin; Schmitz, Anke; Fischer, Dominik; Bleckmann, Horst; Brücker, Christoph
2014-01-01
This study investigates the aerodynamics of the falcon Falco peregrinus while diving. During a dive peregrines can reach velocities of more than 320 km/h. Unfortunately, in freely roaming falcons, these high velocities prohibit a precise determination of flight parameters such as velocity and acceleration as well as body shape and wing contour. Therefore, individual F. peregrinus were trained to dive in front of a vertical dam with a height of 60 m. The presence of a well-defined background allowed us to reconstruct the flight path and the body shape of the falcon during certain flight phases. Flight trajectories were obtained with a stereo high-speed camera system. In addition, body images of the falcon were taken from two perspectives with a high-resolution digital camera. The dam allowed us to match the high-resolution images obtained from the digital camera with the corresponding images taken with the high-speed cameras. Using these data we built a life-size model of F. peregrinus and used it to measure the drag and lift forces in a wind tunnel. We compared these forces acting on the model with the data obtained from the 3-D flight path trajectory of the diving F. peregrinus. Visualizations of the flow in the wind tunnel uncovered details of the flow structure around the falcon's body, suggesting local regions of flow separation. High-resolution pictures of the diving peregrine indicate that feathers pop up in the same regions where flow separation occurred on the model falcon. PMID:24505258
A multi-modal stereo microscope based on a spatial light modulator.
Lee, M P; Gibson, G M; Bowman, R; Bernet, S; Ritsch-Marte, M; Phillips, D B; Padgett, M J
2013-07-15
Spatial Light Modulators (SLMs) can emulate classic microscopy techniques, including differential interference contrast (DIC) and (spiral) phase contrast. Their programmability offers flexibility and the option to multiplex images, for single-shot quantitative imaging or for simultaneous multi-plane imaging (depth-of-field multiplexing). We report the development of a microscope sharing many of the previously demonstrated capabilities, within a holographic implementation of a stereo microscope. Furthermore, we use the SLM to combine stereo microscopy with a refocusing filter and with a darkfield filter. The instrument is built around a custom inverted microscope and equipped with an SLM which produces various imaging modes laterally displaced on the same camera chip. In addition, there is a wide-angle camera for visualisation of a larger region of the sample.
NASA Astrophysics Data System (ADS)
Balthasar, Heike; Dumke, Alexander; van Gasselt, Stephan; Gross, Christoph; Michael, Gregory; Musiol, Stefanie; Neu, Dominik; Platz, Thomas; Rosenberg, Heike; Schreiner, Björn; Walter, Sebastian
2014-05-01
Since 2003 the High Resolution Stereo Camera (HRSC) experiment on the Mars Express mission has been in orbit around Mars. The first images were sent to Earth on January 14th, 2004. Goal-oriented HRSC data dissemination and the transparent presentation of the associated work and results are the main factors that have contributed to the success of the experiment in the public perception. The Planetary Sciences and Remote Sensing Group at Freie Universität Berlin (FUB) offers both interactive web-based data access and browse/download options for HRSC press products [www.fu-berlin.de/planets]. Close collaboration with exhibitors as well as print and digital media representatives allows for regular and directed dissemination of, e.g., conventional imagery, orbital/synthetic surface epipolar images, video footage, and high-resolution displays. On a monthly basis we prepare press releases in close collaboration with the European Space Agency (ESA) and the German Aerospace Center (DLR) [http://www.geo.fu-berlin.de/en/geol/fachrichtungen/planet/press/index.html]. A release comprises panchromatic, colour, anaglyph, and perspective views of a scene taken from an HRSC image of the Martian surface. In addition, a context map and descriptive texts in English and German are provided. More sophisticated press releases include elaborate animations and simulated flights over the Martian surface, perspective views of stereo data combined with colour and high resolution, mosaics, and perspective views of data mosaics. Altogether 970 high-quality PR products and 15 movies were created at FUB during the last decade and published via FUB/DLR/ESA platforms. We support educational outreach events, as well as permanent and special exhibitions.
Examples include the yearly "Science Fair", where special programs for kids are offered, and the exhibition "Mars Mission and Vision", which is on tour through 20 German towns until 2015, showing 3-D movies, surface models, and images from the HRSC camera experiment. Press and media appearances of group members, and talks to school classes and interested communities, also contribute to the public outreach. For HRSC data dissemination we use digital platforms. Since 2007 HRSC image data can be viewed and accessed via the online interface HRSCview [http://hrscview.fu-berlin.de], which was built in cooperation with the DLR Institute for Planetary Research. Additionally, since 2013 HRSC ortho images (level 4) have been presented in a modern MapServer setup in GIS-ready format [http://www.geo.fu-berlin.de/en/geol/fachrichtungen/planet/projects/marsexpress/level4downloads/index.html]. All of these offerings have ensured the accessibility of HRSC data and products to the science community as well as to the general public for the last ten years, and will continue to do so in the future, taking advantage of modern and user-optimized applications and networks.
Imager for Mars Pathfinder (IMP)
NASA Technical Reports Server (NTRS)
Smith, Peter H.
1994-01-01
The IMP camera is a near-surface sensing experiment with many capabilities beyond those normally associated with an imager. It is fully pointable in both elevation and azimuth, with a protected, stowed position looking straight down. Stereo separation is provided by two optical paths; each has a 12-position filter wheel. The primary function of the camera, strongly tied to mission success, is to take a color panorama of the surrounding terrain. IMP requires approximately 120 images to cover the complete downward hemisphere from the deployed position. IMP provides the geologist, and everyone else, a view of the local morphology with millimeter- to meter-scale resolution over a broad area. In addition to capturing the general morphology of the scene, IMP has a large complement of specially chosen filters to aid in both the identification of mineral types and their degree of weathering.
Testbed for remote telepresence research
NASA Astrophysics Data System (ADS)
Adnan, Sarmad; Cheatham, John B., Jr.
1992-11-01
Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system comprised of a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed at the mechanical engineering department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head tracking camera system moves stereo cameras mounted on a three degree of freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence, and teleoperations for space.
An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)
2010-03-01
An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems performing 3D motion capture in real time. This integrated technique is a two-step process. Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D
Global lunar-surface mapping experiment using the Lunar Imager/Spectrometer on SELENE
NASA Astrophysics Data System (ADS)
Haruyama, Junichi; Matsunaga, Tsuneo; Ohtake, Makiko; Morota, Tomokatsu; Honda, Chikatoshi; Yokota, Yasuhiro; Torii, Masaya; Ogawa, Yoshiko
2008-04-01
The Moon is the nearest celestial body to the Earth. Understanding the Moon is the most important issue confronting geosciences and planetary sciences. Japan will launch the lunar polar orbiter SELENE (Kaguya) (Kato et al., 2007) in 2007 as the first mission of the Japanese long-term lunar exploration program, acquiring data for scientific knowledge and possible utilization of the Moon. An optical sensing instrument called the Lunar Imager/Spectrometer (LISM) is carried on SELENE. Within the SELENE project, LISM is required to provide high-resolution digital imagery and spectroscopic data for the entire lunar surface. LISM was designed as three specialized sub-instruments: a terrain camera (TC), a multi-band imager (MI), and a spectral profiler (SP). The TC is a high-resolution stereo camera with 10-m spatial resolution from the SELENE nominal altitude of 100 km and a stereo angle of 30°, providing stereo pairs from which digital terrain models (DTMs) with a height resolution of 20 m or better will be produced. The MI is a multi-spectral imager with four and five color bands at 20 m and 60 m spatial resolution in the visible and near-infrared ranges, which will provide data to distinguish geological units in detail. The SP is a line spectral profiler with a 400-m-wide footprint and 300 spectral bands with 6-8 nm spectral resolution in the visible to near-infrared ranges. The SP data will be sufficiently powerful to identify the mineral composition of the lunar surface. Moreover, LISM will provide data with a spatial resolution, signal-to-noise ratio, and covered spectral range superior to those of past Earth-based and spacecraft-based observations. In addition to the hardware instrumentation, we have studied operation plans for global data acquisition within the limited total data volume allotment per day.
Results show that the TC and MI can achieve global observations within the restrictions by sharing the TC and MI observation periods, adopting appropriate data compression, and executing necessary SELENE orbital plane change operations to ensure global coverage by MI. Pre-launch operation planning has resulted in possible global TC high-contrast imagery, TC stereoscopic imagery, and MI 9-band imagery in one nominal mission period. The SP will also acquire spectral line profiling data for nearly the entire lunar surface. The east-west interval of the SP strip data will be 3-4 km at the equator by the end of the mission and shorter at higher latitudes. We have proposed execution of SELENE roll cant operations three times during the nominal mission period to execute calibration site observations, and have reached agreement on this matter with the SELENE project. We present LISM global surface mapping experiments for instrumentation and operation plans. The ground processing systems and the data release plan for LISM data are discussed briefly.
Baranski, Przemyslaw; Strumillo, Pawel
2012-01-01
The paper presents an algorithm for estimating a pedestrian location in an urban environment. The algorithm is based on the particle filter and uses different data sources: a GPS receiver, inertial sensors, probability maps and a stereo camera. Inertial sensors are used to estimate a relative displacement of a pedestrian. A gyroscope estimates a change in the heading direction. An accelerometer is used to count a pedestrian's steps and their lengths. The so-called probability maps help to limit GPS inaccuracy by imposing constraints on pedestrian kinematics, e.g., it is assumed that a pedestrian cannot cross buildings, fences etc. This limits position inaccuracy to ca. 10 m. Incorporation of depth estimates derived from a stereo camera that are compared to the 3D model of an environment has enabled further reduction of positioning errors. As a result, for 90% of the time, the algorithm is able to estimate a pedestrian location with an error smaller than 2 m, compared to an error of 6.5 m for a navigation based solely on GPS. PMID:22969321
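The fusion scheme described above can be sketched as a minimal particle filter (illustrative parameters and a toy map constraint, not the paper's implementation): particles are propagated with step-length/heading odometry, weighted by agreement with a noisy GPS fix, and zeroed where they violate the map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal particle filter in the spirit of the paper (illustrative
# parameters): odometry propagation + GPS likelihood + map constraint.
N = 500
particles = rng.normal([0.0, 0.0], 1.0, size=(N, 2))   # initial position guess

def pf_step(particles, step_len, heading, gps, gps_sigma=6.5):
    # 1. motion update from inertial sensors (with process noise)
    move = step_len * np.array([np.cos(heading), np.sin(heading)])
    particles = particles + move + rng.normal(0, 0.3, particles.shape)
    # 2. map constraint: a pedestrian cannot be inside the "building" at y > 5
    w = np.where(particles[:, 1] > 5.0, 0.0, 1.0)
    # 3. GPS likelihood: Gaussian in distance to the fix
    d2 = ((particles - gps) ** 2).sum(axis=1)
    w = w * np.exp(-d2 / (2 * gps_sigma ** 2))
    # 4. resample proportionally to weight
    idx = rng.choice(len(particles), len(particles), p=w / w.sum())
    return particles[idx]

truth = np.array([0.0, 0.0])
for _ in range(10):                      # walk east in 1 m steps
    truth = truth + [1.0, 0.0]
    gps = truth + rng.normal(0, 6.5, 2)  # raw GPS wanders by ~6.5 m
    particles = pf_step(particles, 1.0, 0.0, gps)

# The fused estimate stays close to the true position (10, 0):
print(np.round(particles.mean(axis=0), 1))
```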
Stereo pair design for cameras with a fovea
NASA Technical Reports Server (NTRS)
Chettri, Samir R.; Keefe, Michael; Zimmerman, John R.
1992-01-01
We describe a methodology for the design and selection of a stereo pair when the cameras have a greater concentration of sensing elements in the center of the image plane (a fovea). Binocular vision is important for depth estimation, which in turn is important in a variety of applications such as gaging and autonomous vehicle guidance. We assume that one camera has square pixels of size dv and the other has pixels of size rdv, where r is between 0 and 1. We then derive results for the average error, the maximum error, and the error distribution in the depth determination of a point. These results can be shown to be a general form of the results for the case when the cameras have equal-sized pixels. We discuss the behavior of the depth estimation error as r varies, and the tradeoffs between extra processing time and increased accuracy. These results make it possible to study the case when we have a pair of cameras with a fovea.
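The basic sensitivity behind such error analyses is the textbook stereo relation Z = fB/d (depth from focal length f, baseline B and disparity d), under which a fixed disparity quantisation error produces a depth error growing quadratically with range; this is a standard sketch, not the paper's exact derivation:

```python
# Textbook stereo depth sensitivity: Z = f*B/d, so a disparity error of
# delta_d pixels maps to a depth error of roughly Z^2 * delta_d / (f*B).
def depth_error(Z, focal_px, baseline_m, disparity_err_px=1.0):
    return Z ** 2 * disparity_err_px / (focal_px * baseline_m)

# f = 1000 px, B = 0.1 m: at 2 m range a 1-px disparity error costs 4 cm;
# smaller pixels (r < 1) shrink the effective disparity error proportionally.
print(depth_error(2.0, 1000, 0.1))        # → 0.04
print(depth_error(2.0, 1000, 0.1, 0.5))   # → 0.02
```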
Three-dimensional displays and stereo vision
Westheimer, Gerald
2011-01-01
Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes. PMID:21490023
Slant-hole collimator, dual mode stereotactic localization method
Weisenberger, Andrew G.
2002-01-01
The use of a slant-hole collimator in the gamma camera of dual mode stereotactic localization apparatus allows the acquisition of a stereo pair of scintimammographic images without repositioning of the gamma camera between image acquisitions.
A Vision System For A Mars Rover
NASA Astrophysics Data System (ADS)
Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.
1987-01-01
A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.
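The slope-and-roughness step above can be sketched per terrain cell (the thresholds and function name are illustrative, not JPL's values): fit a plane to the cell's range points, read slope from the plane normal, and take roughness as the residual spread about the plane.

```python
import numpy as np

# Sketch of the traversability test for one terrain cell.
def cell_traversable(pts, max_slope_deg=20.0, max_rough_m=0.05):
    # least-squares plane z = a*x + b*y + c through the cell's 3D points
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    slope = np.degrees(np.arctan(np.hypot(a, b)))   # tilt of the fitted plane
    rough = np.std(pts[:, 2] - A @ [a, b, c])       # residual scatter
    return slope <= max_slope_deg and rough <= max_rough_m

flat = np.array([[x, y, 0.01 * x] for x in range(5) for y in range(5)], float)
steep = np.array([[x, y, 0.6 * x] for x in range(5) for y in range(5)], float)
print(cell_traversable(flat), cell_traversable(steep))  # → True False
```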
A vision system for a Mars rover
NASA Technical Reports Server (NTRS)
Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.
1988-01-01
A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.
Virtual-stereo fringe reflection technique for specular free-form surface testing
NASA Astrophysics Data System (ADS)
Ma, Suodong; Li, Bo
2016-11-01
Due to their excellent ability to improve the performance of optical systems, free-form optics have attracted extensive interest in many fields, e.g. the optical design of astronomical telescopes, laser beam expanders, spectral imagers, etc. However, compared with traditional simple optics, such optics are usually more complex and difficult to test, which has been a major barrier to the manufacture and application of free-form optics. Fortunately, owing to the rapid development of electronic devices and computer vision technology, the fringe reflection technique (FRT), with the advantages of a simple system structure, high measurement accuracy and large dynamic range, is becoming a powerful tool for specular free-form surface testing. In order to obtain absolute surface shape distributions of test objects, two or more cameras are often required in the conventional FRT, which makes the system structure more complex and the measurement cost much higher. Furthermore, high-precision synchronization between the cameras is also a troublesome issue. To overcome the aforementioned drawbacks, a virtual-stereo FRT for specular free-form surface testing is put forward in this paper. It is able to obtain absolute profiles with only a single biprism and one camera, while avoiding the problems of stereo FRT based on binocular or multi-ocular cameras. Preliminary experimental results demonstrate the feasibility of the proposed technique.
FPGA Implementation of Stereo Disparity with High Throughput for Mobility Applications
NASA Technical Reports Server (NTRS)
Villalpando, Carlos Y.; Morfopolous, Arin; Matthies, Larry; Goldberg, Steven
2011-01-01
High-speed stereo vision can allow unmanned robotic systems to navigate safely in unstructured terrain, but the computational cost can exceed the capacity of typical embedded CPUs. In this paper, we describe an end-to-end stereo computation co-processing system optimized for fast throughput that has been implemented on a single Virtex 4 LX160 FPGA. This system is capable of operating on images from a 1024 x 768 3CCD (true RGB) camera pair at 15 Hz. Data enters the FPGA directly from the cameras via Camera Link and is rectified, pre-filtered and converted into a disparity image all within the FPGA, incurring no CPU load. Once complete, a rectified image and the final disparity image are read out over the PCI bus, for a bandwidth cost of 68 MB/sec. Within the FPGA there are four distinct modules: Camera Link capture, bilinear rectification, bilateral subtraction pre-filtering and Sum of Absolute Differences (SAD) disparity. Each module is described in brief along with the data flow and control logic for the system. The system was successfully fielded on Carnegie Mellon University's National Robotics Engineering Center (NREC) Crusher vehicle during extensive field trials in 2007 and 2008 and is being implemented for other surface mobility systems at JPL.
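The purpose of the subtraction pre-filter in such pipelines is to remove slow brightness differences between the two cameras so that the SAD search compares texture rather than exposure. A sketch using a plain box mean as a stand-in for the bilateral kernel (a simplification, not the FPGA implementation):

```python
import numpy as np

# Subtraction pre-filtering sketch: subtract a local average so that a
# uniform exposure offset between the two cameras cancels out before the
# SAD disparity search.
def subtract_local_mean(img, k=5):
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    # sliding-window mean via a summed-area table (integral image)
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    win = (ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]) / (k * k)
    return img - win

# Same texture, one camera 0.3 brighter: the offset vanishes after filtering.
left = np.random.default_rng(1).random((16, 16))
right = left + 0.3
fl, fr = subtract_local_mean(left), subtract_local_mean(right)
print(np.allclose(fl, fr))  # → True
```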
Multi-temporal database of High Resolution Stereo Camera (HRSC) images - Alpha version
NASA Astrophysics Data System (ADS)
Erkeling, G.; Luesebrink, D.; Hiesinger, H.; Reiss, D.; Jaumann, R.
2014-04-01
Image data transmitted to Earth by Martian spacecraft since the 1970s, for example by Mariner and Viking, Mars Global Surveyor (MGS), Mars Express (MEx) and the Mars Reconnaissance Orbiter (MRO), have shown that the surface of Mars has changed dramatically and is continually changing [e.g., 1-8]. The changes are attributed to a large variety of atmospheric, geological and morphological processes, including eolian processes [9,10], mass wasting processes [11], changes of the polar caps [12] and impact cratering processes [13]. In addition, comparisons between Mariner, Viking and Mars Global Surveyor images suggest that more than one third of the Martian surface has brightened or darkened by at least 10% [6]. Albedo changes can affect the global heat balance and the circulation of winds, which can result in further surface changes [14-15]. The High Resolution Stereo Camera (HRSC) [16,17] on board Mars Express (MEx) covers large areas at high resolution and is therefore well suited to detecting the frequency, extent and origin of Martian surface changes. Since 2003 HRSC has been acquiring high-resolution images of the Martian surface and contributing to Martian research, with a focus on surface morphology, geology and mineralogy, the role of liquid water on the surface and in the atmosphere, volcanism, and the proposed climate change throughout Martian history, and it has significantly improved our understanding of the evolution of Mars [18-21]. The HRSC data are available at ESA's Planetary Science Archive (PSA) as well as through the NASA Planetary Data System (PDS). Both data platforms are frequently used by the scientific community and provide additional software and environments to further generate map-projected and geometrically calibrated HRSC data.
However, while previews of the images are available, there is currently no way to quickly and conveniently see the spatial and temporal availability of HRSC images in a specific region, which is important for detecting surface changes that occurred between two or more images.
Stereo optical guidance system for control of industrial robots
NASA Technical Reports Server (NTRS)
Powell, Bradley W. (Inventor); Rodgers, Mike H. (Inventor)
1992-01-01
A device for the generation of basic electrical signals which are supplied to a computerized processing complex for the operation of industrial robots. The system includes a stereo mirror arrangement for the projection of views from opposite sides of a visible indicia formed on a workpiece. The views are projected onto independent halves of the retina of a single camera. The camera retina is of the CCD (charge-coupled-device) type and is therefore capable of providing signals in response to the image projected thereupon. These signals are then processed for control of industrial robots or similar devices.
The Europa Imaging System (EIS): Investigating Europa's geology, ice shell, and current activity
NASA Astrophysics Data System (ADS)
Turtle, Elizabeth; Thomas, Nicolas; Fletcher, Leigh; Hayes, Alexander; Ernst, Carolyn; Collins, Geoffrey; Hansen, Candice; Kirk, Randolph L.; Nimmo, Francis; McEwen, Alfred; Hurford, Terry; Barr Mlinar, Amy; Quick, Lynnae; Patterson, Wes; Soderblom, Jason
2016-07-01
NASA's Europa Mission, planned for launch in 2022, will perform more than 40 flybys of Europa with altitudes at closest approach as low as 25 km. The instrument payload includes the Europa Imaging System (EIS), a camera suite designed to transform our understanding of Europa through global decameter-scale coverage, topographic and color mapping, and unprecedented sub-meter-scale imaging. EIS combines narrow-angle and wide-angle cameras to address these science goals: • Constrain the formation processes of surface features by characterizing endogenic geologic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure and potential near-surface water. • Search for evidence of recent or current activity, including potential plumes. • Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar. • Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. EIS Narrow-angle Camera (NAC): The NAC, with a 2.3° x 1.2° field of view (FOV) and a 10-μrad instantaneous FOV (IFOV), achieves 0.5-m pixel scale over a 2-km-wide swath from 50-km altitude. A 2-axis gimbal enables independent targeting, allowing very high-resolution stereo imaging to generate digital topographic models (DTMs) with 4-m spatial scale and 0.5-m vertical precision over the 2-km swath from 50-km altitude. The gimbal also makes near-global (>95%) mapping of Europa possible at ≤50-m pixel scale, as well as regional stereo imaging. The NAC will also perform high-phase-angle observations to search for potential plumes. EIS Wide-angle Camera (WAC): The WAC has a 48° x 24° FOV, with a 218-μrad IFOV, and is designed to acquire pushbroom stereo swaths along flyby ground-tracks.
From an altitude of 50 km, the WAC achieves 11-m pixel scale over a 44-km-wide swath, generating DTMs with 32-m spatial scale and 4-m vertical precision. These data also support characterization of surface clutter for interpretation of radar deep and shallow sounding modes. Detectors: The cameras have identical rapid-readout, radiation-hard 4k x 2k CMOS detectors and can image in both pushbroom and framing modes. Color observations are acquired by pushbroom imaging using six broadband filters (~300-1050 nm), allowing mapping of surface units for correlation with geologic structures, topography, and compositional units from other instruments.
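The pixel scales and swath widths quoted for the NAC and WAC follow directly from the stated IFOVs, FOVs and altitude; a small sketch reproduces them for a nadir-pointed camera (the function names are mine, not from the EIS documentation):

```python
import math

def pixel_scale_m(altitude_m, ifov_rad):
    """Ground pixel scale for a nadir-pointed camera: altitude times IFOV."""
    return altitude_m * ifov_rad

def swath_width_m(altitude_m, fov_deg):
    """Cross-track swath width from the full field of view."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)

# EIS NAC from 50 km: 10 urad IFOV, 2.3 deg cross-track FOV
print(round(pixel_scale_m(50_000, 10e-6), 2))       # 0.5 m/pixel
print(round(swath_width_m(50_000, 2.3) / 1000, 1))  # 2.0 km swath

# EIS WAC from 50 km: 218 urad IFOV, 48 deg FOV
print(round(pixel_scale_m(50_000, 218e-6), 1))      # 10.9 m/pixel
print(round(swath_width_m(50_000, 48) / 1000, 1))   # 44.5 km swath
```

The WAC numbers (11-m pixels over a 44-km swath) match the abstract to within rounding.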
Spirit Beside 'Home Plate,' Sol 1809 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11803 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11803 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images assembled into this stereo, 120-degree view southward after a short drive during the 1,809th Martian day, or sol, of Spirit's mission on the surface of Mars (February 3, 2009). By combining images from the left-eye and right-eye sides of the navigation camera, the view appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Spirit had driven about 2.6 meters (8.5 feet) that sol, continuing a clockwise route around a low plateau called 'Home Plate.' In this image, the rocks visible above the rover's solar panels are on the slope at the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.
Opportunity's Surroundings on Sol 1798 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11850 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11850 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo 180-degree view of the rover's surroundings during the 1,798th Martian day, or sol, of Opportunity's surface mission (Feb. 13, 2009). North is on top. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 111 meters (364 feet) southward on the preceding sol. Tracks from that drive recede northward in this view. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.
Jaramillo, Carlos; Valenti, Roberto G.; Guo, Ling; Xiao, Jizhong
2016-01-01
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor’s projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to suit other catadioptric-based omnistereo vision applications under different circumstances. PMID:26861351
NASA Astrophysics Data System (ADS)
Muller, Jan-Peter
2015-04-01
Understanding the role of scaling in different planetary surface processes within our Solar System is one of the fundamental goals of planetary and solid Earth scientific research. There has been a revolution in planetary surface observations over the past decade for the Earth, Mars and the Moon, especially in 3D imaging of surface shape (from the planetary scale down to resolutions of 75 cm). I will examine three areas that I have been active in over the last 25 years, giving examples of newly processed global datasets ripe for scaling analysis: topography, BRDF/albedo and imaging. For understanding scaling in terrestrial land surface topography we now have global 30 m digital elevation models (DEMs) from different types of sensors (InSAR and stereo-optical), along with laser altimeter data to provide global reference models (to better than 1 m in cross-over areas) and airborne laser altimeter data over small areas at resolutions better than 1 m and height accuracies better than 10-15 cm. We also have an increasing number of sub-surface observations from long-wavelength SAR in arid regions, which will allow us to look at the true surface rather than the one buried by sand. A major limitation of these DEMs remains that they represent an imprecisely known observable surface: C-band InSAR DEMs lie somewhere near the top of the canopy, X-band InSAR and stereo DEMs likewise near the top of the canopy, and only P-band reaches the true understorey surface. I will present some recent highlights of topography on Mars, including 3D modelling of surface shape from the ESA Mars Express HRSC (High Resolution Stereo Camera), see [1], [2], at 30-100 m grid-spacing; and then, co-registered to HRSC using a resolution cascade, 20 m DTMs from NASA MRO stereo-CTX and 0.75 m digital terrain models (DTMs) from MRO stereo-HiRISE [3] (on Mars, with no land cover, DEM and DTM coincide). Comparable DTMs now exist for the Moon from 100 m down to 1 m.
I will show examples of these DEM/DTM datasets along with some simple analyses of their scaling properties. Global 1 km, 8-daily terrestrial land surface BRDF/albedo maps exist for US sensors from MODIS and, per orbit, from MISR. More recently, the ESA GlobAlbedo project [4] has produced land surface datasets on the same spatio-temporal sampling using optimal estimation, with full uncertainty matrices associated with each and every 1 km pixel. By exploiting these uncertainty estimates I show how upscaling can be performed, as well as analysing their scaling properties. Recently, a very novel technique for the super-resolution restoration (SRR) of stacks of images has been developed at UCL [5]. The first examples shown will be of the entire MER-A Spirit rover traverse, taking a stack of 25 cm HiRISE images to generate a corridor of 5 cm SRR imagery along the traverse, resolving previously unresolved features such as rocks created by meteoritic bombardment, and ridge and valley features. This SRR technique will allow us to resolve sub-pixel features for ≈400 areas on Mars (where 5 or more HiRISE images have been captured) and a similar number on the Moon. Examples will be shown of how these SRR images can be employed to assist with better understanding of surface geomorphology. Acknowledgements: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under PRoViDE grant agreement n˚312377 and the ESA GlobAlbedo project. Partial support is also provided from the STFC "MSSL Consolidated Grant" ST/K000977/1. References: [1] Gwinner, K., F. et al. (2010) Topography of Mars from global mapping by HRSC high-resolution digital terrain models and orthoimages: characteristics and performance. Earth and Planetary Science Letters 294, 506-519, doi:10.1016/j.epsl.2009.11.007, 2010; [2] Gwinner, K., Muller, J-P., et al.
(2015) Mars Express High Resolution Stereo Camera (HRSC) Multi-orbit Data Products: Methodology, Mapping Concepts and Performance for the first Quadrangle (MC-11E). Geophysical Research Abstracts, Vol. 17, EGU2015-13832; [3] Kim, J., & Muller, J. (2009). Multi-resolution topographic data extraction from Martian stereo imagery. Planetary and Space Science, 57, 2095-2112. doi:10.1016/j.pss.2009.09.024; [4] Muller, J.-P., et al. (2011), The ESA GlobAlbedo Project for mapping the Earth's land surface albedo for 15 Years from European Sensors. Geophysical Research Abstracts, Vol. 13, EGU2011-10969; [5] Tao, Y., Muller, J.-P. (2015) Supporting lander and rover operation: a novel super-resolution restoration technique. Geophysical Research Abstracts, Vol. 17, EGU2015-6925
CHAMP - Camera, Handlens, and Microscope Probe
NASA Technical Reports Server (NTRS)
Mungas, G. S.; Beegle, L. W.; Boynton, J.; Sepulveda, C. A.; Balzer, M. A.; Sobel, H. R.; Fisher, T. A.; Deans, M.; Lee, P.
2005-01-01
CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution, from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As an arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision range-finding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP is currently designed with a 4-position filter wheel, so that color and black-and-white images can be obtained over the entire field of view; future designs will increase the number of filter positions to 8. Finally, CHAMP incorporates controlled white and UV illumination so that images can be obtained regardless of sun position, and any potential fluorescent species can be identified so that the most astrobiologically interesting samples can be selected.
AOTF near-IR spectrometers for study of Lunar and Martian surface composition
NASA Astrophysics Data System (ADS)
Korablev, O.; Kiselev, A.; Vyazovetskiy, N.; Fedorova, A.; Evdokimova, N.; Stepanov, A.; Titov, A.; Kalinnikov, Y.; Kuzmin, R. O.; Bazilevsky, A. T.; Bondarenko, A.; Moiseev, P.
2013-09-01
A series of AOTF near-IR spectrometers has been developed at the Space Research Institute in Moscow for studying Lunar and Martian surface composition in the vicinity of a lander or rover. The Lunar Infrared Spectrometer (LIS) is an experiment onboard the Luna-Glob (launch in 2015) and Luna-Resurs (launch in 2017) Russian surface missions. LIS is mounted on the mechanical arm of the landing module, within the field of view (45°) of the stereo TV camera. The Infrared Spectrometer for ExoMars (ISEM) is an experiment onboard the ExoMars (launch in 2018) ESA-Roscosmos rover. The ISEM instrument is mounted on the rover's mast together with the High Resolution Camera (HRC). The spectrometers will measure selected surface areas in the spectral range of 1.15-3.3 μm. The electrically commanded acousto-optic filter scans sequentially at a desired sampling, with random access, over the entire spectral range.
Improving depth maps of plants by using a set of five cameras
NASA Astrophysics Data System (ADS)
Kaczmarek, Adam L.
2015-03-01
Obtaining high-quality depth maps and disparity maps with a stereo camera is a challenging task for some kinds of objects. The quality of these maps can be improved by taking advantage of a larger number of cameras. Research on the use of a set of five cameras to obtain disparity maps is presented. The set consists of a central camera and four side cameras. An algorithm for making disparity maps, called multiple similar areas (MSA), is introduced. The algorithm was specially designed for the set of five cameras. Experiments were performed with the MSA algorithm and a stereo matching algorithm based on the sum of sum of squared differences (sum of SSD, SSSD) measure. Moreover, the following measures were included in the experiments: sum of absolute differences (SAD), zero-mean SAD (ZSAD), zero-mean SSD (ZSSD), locally scaled SAD (LSAD), locally scaled SSD (LSSD), normalized cross-correlation (NCC), and zero-mean NCC (ZNCC). The algorithms were applied to images of plants. Making depth maps of plants is difficult because parts of leaves are similar to one another. The potential usability of the described algorithms is especially high in agricultural applications such as robotic fruit harvesting.
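The window-based matching measures listed above share a common shape: a cost (or correlation) computed between two image patches, with the zero-mean variants discounting brightness offsets. A minimal sketch of a few of them (SAD, SSD, ZSSD, ZNCC) over flattened patch vectors, assuming plain Python lists rather than any particular image library:

```python
import math

def sad(a, b):
    """Sum of absolute differences between two flattened patches."""
    return sum(abs(x - y) for x, y in zip(a, b))

def ssd(a, b):
    """Sum of squared differences."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def zssd(a, b):
    """Zero-mean SSD: subtracting each patch's mean makes the measure
    invariant to a constant brightness offset between cameras."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum(((x - ma) - (y - mb)) ** 2 for x, y in zip(a, b))

def zncc(a, b):
    """Zero-mean normalized cross-correlation in [-1, 1]: invariant to
    both offset and gain differences between the two patches."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den
```

For a patch and the same patch shifted up by a constant intensity, SAD and SSD report a nonzero cost while ZSSD is 0 and ZNCC is 1, which is why the zero-mean variants are preferred when camera exposures differ.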
Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras
NASA Astrophysics Data System (ADS)
El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid
2015-03-01
In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system, consisting of two unattached cameras, to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the result of matching interest points between the inserted image and the previous one. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. First, all projection matrices are estimated, matches between consecutive images are detected, and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, well suited to this kind of camera movement, is applied to pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
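The paper triangulates from general projection matrices; for the special case of a rectified, calibrated pair the same idea reduces to the familiar depth-from-disparity relation Z = fB/d. A hedged sketch of that simpler case, with hypothetical intrinsics (focal length in pixels, metric baseline, principal point):

```python
def triangulate_rectified(u_l, v_l, u_r, f_px, baseline_m, cx, cy):
    """Back-project one matched point pair from a rectified stereo rig.
    Disparity d = u_l - u_r (pixels); depth Z = f * B / d (metres)."""
    d = u_l - u_r
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = f_px * baseline_m / d
    x = (u_l - cx) * z / f_px   # lateral offset in the left-camera frame
    y = (v_l - cy) * z / f_px
    return x, y, z

# hypothetical rig: f = 700 px, 12 cm baseline, principal point (640, 360)
x, y, z = triangulate_rectified(900, 400, 870, 700, 0.12, 640, 360)
print(x, y, z)  # a point about 2.8 m in front of the cameras
```

The general (unrectified) case in the paper instead solves a small linear system built from the two projection matrices, but the geometric content is the same.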
Event-Based Stereo Depth Estimation Using Belief Propagation.
Xie, Zhen; Chen, Shengyong; Orchard, Garrick
2017-01-01
Compared to standard frame-based cameras, biologically-inspired event-based sensors capture visual information with low latency and minimal redundancy. These event-based sensors are also far less prone to motion blur than traditional cameras, and still operate effectively in high dynamic range scenes. However, classical frame-based algorithms are typically not suitable for event-based data, and new processing algorithms are required. This paper focuses on the problem of depth estimation from a stereo pair of event-based sensors. A fully event-based stereo depth estimation algorithm which relies on message passing is proposed. The algorithm not only considers the properties of a single event but also uses a Markov Random Field (MRF) to model constraints between nearby events, such as disparity uniqueness and depth continuity. The method is tested on five different scenes and compared to other state-of-the-art event-based stereo matching methods. The results show that the method detects more stereo matches than other methods, with each match having a higher accuracy. The method can operate in an event-driven manner where depths are reported for individual events as they are received, or the network can be queried at any time to generate a sparse depth frame which represents the current state of the network.
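The paper's MRF/belief-propagation formulation is considerably more elaborate; as a point of contrast, the kind of naive baseline such methods improve upon pairs events purely by epipolar row, polarity, and timestamp proximity. A minimal sketch (event tuples and parameter names are my own, not from the paper):

```python
def match_events(left_events, right_events, max_dt, max_disp):
    """Naive event-based stereo matcher. Events are (t, x, y, polarity)
    tuples from rectified sensors. For each left event, pick the right
    event on the same row with the same polarity whose timestamp is
    closest, subject to |dt| <= max_dt and disparity in (0, max_disp]."""
    matches = []
    for tl, xl, yl, pl in left_events:
        best = None  # (dt, disparity)
        for tr, xr, yr, pr in right_events:
            if yr != yl or pr != pl:
                continue
            d = xl - xr
            if not (0 < d <= max_disp):
                continue
            dt = abs(tl - tr)
            if dt <= max_dt and (best is None or dt < best[0]):
                best = (dt, d)
        if best is not None:
            matches.append((xl, yl, best[1]))
    return matches
```

Because simultaneous events are rare, timestamp proximity alone is ambiguous in textured scenes; that ambiguity is exactly what the MRF's uniqueness and continuity constraints are designed to resolve.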
Wide Swath Stereo Mapping from Gaofen-1 Wide-Field-View (WFV) Images Using Calibration
Chen, Shoubin; Liu, Jingbin; Huang, Wenchao
2018-01-01
The development of Earth observation systems has changed the nature of survey and mapping products, as well as the methods for updating maps. Among optical satellite mapping methods, the multiline array stereo and agile stereo modes are the most common methods for acquiring stereo images. However, differences in temporal resolution and spatial coverage limit their application. To address this issue, our study takes advantage of the wide spatial coverage and high revisit frequencies of wide swath images and aims at verifying the feasibility of stereo mapping in the wide swath stereo mode, reaching a reliable stereo accuracy level through calibration. In contrast with classic stereo modes, the wide swath stereo mode is characterized by both a wide spatial coverage and high temporal resolution and is capable of obtaining a wide range of stereo images over a short period. In this study, Gaofen-1 (GF-1) wide-field-view (WFV) images, with total imaging widths of 800 km, multispectral resolutions of 16 m and revisit periods of four days, are used for wide swath stereo mapping. To acquire a high-accuracy digital surface model (DSM), the nonlinear system distortion in the GF-1 WFV images is detected and compensated for in advance. With proper calibration, the elevation accuracy of the wide swath stereo mode of the GF-1 WFV images can be improved from 103 m to 30 m for a DSM, meeting the demands for 1:250,000 scale mapping and rapid topographic map updates and showing improved efficacy for satellite imaging. PMID:29494540
NASA Astrophysics Data System (ADS)
Combe, J.; Adams, J. B.; McCord, T. B.
2006-12-01
Geological units at the surface of Mars can be investigated through the analysis of spatial changes in both composition and superficial structural properties. The color images provided by the High Resolution Stereo Camera (HRSC) are a multispectral dataset with an unprecedented high spatial resolution. We focused this study on the western chasmas of Valles Marineris and the neighboring plateau. Using the four-wavelength spectra of HRSC, two types of surface color units (bright red and dark bluish material) plus a shade/shadow component can explain most of the variation [1]. An objective is to provide maps of the relative abundances that are independent of shade [2]. The spectral shape of the shade spectrum is calculated from the data. Then, Spectral Mixture Analysis of the two main materials and shade is performed. The shade gives us indications about variations in surface roughness in the context of mixtures of spectral/mineralogical materials. Mapping the different geological units at high spatial resolution requires a correspondence between color and mineralogy, aided by direct and more precise identifications of the composition of Mars. The joint analysis of HRSC and results from the OMEGA imaging spectrometer makes the most of their respective abilities [1]. Ferric oxides are present in bright red materials both in the chasmas and on the plateau [1], and they are often mixed with dark materials identified as basalts containing pyroxenes [4]. In Valles Marineris, salt deposits (bright) have been reported by using OMEGA [3], along with ferric oxides [4, 5] that appear relatively dark. The detailed spatial distribution of these materials is a key to understanding the geology. Examples will be presented. [1] McCord T. B., et al. 2006, JGR, submitted. [2] Adams J. B. and Gillespie A. R., 2006, Cambridge University Press, 362 pp. [3] Le Mouelic S. et al., 2006, LPSC #1409. [4] Gendrin et al. (2005), LPSC #1858.
[5] Gendrin A. et al., 2005, Science, 307, 1587-1591. [6] Le Deit et al., 2006, LPSC #2115.
Stereo imaging velocimetry for microgravity applications
NASA Technical Reports Server (NTRS)
Miller, Brian B.; Meyer, Maryjo B.; Bethea, Mark D.
1994-01-01
Stereo imaging velocimetry is the quantitative measurement of three-dimensional flow fields using two sensors recording data from different vantage points. The system described in this paper, under development at NASA Lewis Research Center in Cleveland, Ohio, uses two CCD cameras placed perpendicular to one another, laser disk recorders, an image processing substation, and a 586-based computer to record data at standard NTSC video rates (30 Hertz) and reduce it offline. The flow itself is marked with seed particles, hence the fluid must be transparent. The velocimeter tracks the motion of the particles, and from these we deduce a multipoint (500 or more), quantitative map of the flow. Conceptually, the software portion of the velocimeter can be divided into distinct modules. These modules are: camera calibration, particle finding (image segmentation) and centroid location, particle overlap decomposition, particle tracking, and stereo matching. We discuss our approach to each module, and give our currently achieved speed and accuracy for each where available.
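The particle finding and centroid location module can be illustrated with a minimal sketch: threshold the image, group bright pixels by 4-connected flood fill, and report an intensity-weighted centroid per particle. This is an illustrative baseline only, not the NASA Lewis implementation (which also handles overlap decomposition):

```python
from collections import deque

def particle_centroids(img, thresh):
    """Segment bright seed particles in a grayscale image (list of rows)
    by thresholding and 4-connected flood fill, then return the
    intensity-weighted centroid (row, col) of each particle."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for r0 in range(h):
        for c0 in range(w):
            if img[r0][c0] < thresh or seen[r0][c0]:
                continue
            q = deque([(r0, c0)])
            seen[r0][c0] = True
            m = mr = mc = 0.0
            while q:  # flood-fill one connected component
                r, c = q.popleft()
                i = img[r][c]
                m += i
                mr += i * r
                mc += i * c
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < h and 0 <= nc < w
                            and img[nr][nc] >= thresh and not seen[nr][nc]):
                        seen[nr][nc] = True
                        q.append((nr, nc))
            centroids.append((mr / m, mc / m))
    return centroids
```

Intensity weighting gives sub-pixel centroid precision, which matters when the tracker must resolve small inter-frame particle displacements at 30 Hz video rates.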
Imaging Techniques for Dense 3D reconstruction of Swimming Aquatic Life using Multi-view Stereo
NASA Astrophysics Data System (ADS)
Daily, David; Kiser, Jillian; McQueen, Sarah
2016-11-01
Understanding the movement characteristics of how various species of fish swim is an important step to uncovering how they propel themselves through the water. Previous methods have focused on profile capture methods or sparse 3D manual feature point tracking. This research uses an array of 30 cameras to automatically track hundreds of points on a fish as they swim in 3D using multi-view stereo. Blacktip sharks, sting rays, puffer fish, turtles and more were imaged in collaboration with the National Aquarium in Baltimore, Maryland using the multi-view stereo technique. The processes for data collection, camera synchronization, feature point extraction, 3D reconstruction, 3D alignment, biological considerations, and lessons learned will be presented. Preliminary results of the 3D reconstructions will be shown and future research into mathematically characterizing various bio-locomotive maneuvers will be discussed.
Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3
NASA Astrophysics Data System (ADS)
Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.
2014-12-01
The system "Immersive Virtual Moon Scene" is used to show the virtual environment of the Moon's surface in an immersive environment. Utilizing stereo 360-degree imagery from the panoramic camera of the Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, a stereo 360-degree panorama stitched from 112 images is projected onto the inside surface of a sphere according to the panorama orientation coordinates and camera parameters to build the virtual scene. Stars can be seen from the Moon at any time, so we render the sun, planets and stars according to time and the rover's location, based on the Hipparcos catalogue, as the background on the sphere. Immersed in the stereo virtual environment created by this image-based rendering technique, the operator can zoom and pan to interact with the virtual Moon scene and mark interesting objects. Hardware of the immersive virtual Moon system is made up of four high-lumen projectors and a huge curved screen, 31 meters long and 5.5 meters high. This system, which takes all available panoramic camera data and uses it to create an immersive environment that the operator can interact with and mark interesting objects in, contributed heavily to the establishment of science mission goals in the Chang'E-3 mission. After the Chang'E-3 mission, the lab with this system will be open to the public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be shown to the public on the huge screen in the lab. Based on lunar exploration data, we will make more immersive virtual Moon scenes and animations to help the public understand more about the Moon in the future.
BRDF invariant stereo using light transport constancy.
Wang, Liang; Yang, Ruigang; Davis, James E
2007-09-01
Nearly all existing methods for stereo reconstruction assume that scene reflectance is Lambertian and make use of brightness constancy as a matching invariant. We introduce a new invariant for stereo reconstruction called light transport constancy (LTC), which allows completely arbitrary scene reflectance (bidirectional reflectance distribution functions (BRDFs)). This invariant can be used to formulate a rank constraint on multiview stereo matching when the scene is observed by several lighting configurations in which only the lighting intensity varies. In addition, we show that this multiview constraint can be used with as few as two cameras and two lighting configurations. Unlike previous methods for BRDF invariant stereo, LTC does not require precisely configured or calibrated light sources or calibration objects in the scene. Importantly, the new constraint can be used to provide BRDF invariance to any existing stereo method whenever appropriate lighting variation is available.
Near real-time stereo vision system
NASA Technical Reports Server (NTRS)
Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)
1993-01-01
The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
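The bandpass-filtered (Laplacian) pyramid levels mentioned above are formed by subtracting a Gaussian-smoothed copy of an image from the image itself, leaving only mid-frequency structure for correlation. A 1D sketch with the standard binomial approximation to a Gaussian kernel (illustrative only, not the patented implementation):

```python
def smooth(signal):
    """One Gaussian smoothing pass using the binomial kernel
    [1, 4, 6, 4, 1] / 16, with border samples clamped."""
    k = (1, 4, 6, 4, 1)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - 2, 0), n - 1)  # clamp at the borders
            acc += w * signal[idx]
        out.append(acc / 16.0)
    return out

def laplacian_band(signal):
    """Band-pass (Laplacian) layer: the signal minus its smoothed copy.
    Low-frequency illumination differences between the two cameras
    cancel out, which stabilizes the subsequent correlation matching."""
    s = smooth(signal)
    return [x - y for x, y in zip(signal, s)]
```

A full image pyramid repeats smooth-and-subsample per level; matching at the coarse 64 x 60 level first is what keeps the disparity search cheap enough for near real-time operation.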
Enabling Autonomous Navigation for Affordable Scooters.
Liu, Kaikai; Mulky, Rajathswaroop
2018-06-05
Despite the technical success of existing assistive technologies, for example, electric wheelchairs and scooters, they are still far from effective enough in helping those in need navigate to their destinations in a hassle-free manner. In this paper, we propose to improve the safety and autonomy of navigation by designing a cutting-edge autonomous scooter, thus allowing people with mobility challenges to ambulate independently and safely in possibly unfamiliar surroundings. We focus on indoor navigation scenarios for the autonomous scooter where the current location, maps, and nearby obstacles are unknown. To achieve semi-LiDAR functionality, we leverage gyro-based pose data to compensate for the laser motion in real time and create synthetic mapping of simple environments with regular shapes and deep hallways. Laser range finders are suitable for long ranges with limited resolution. Stereo vision, on the other hand, provides 3D structural data of nearby complex objects. To achieve simultaneous fine-grained resolution and long range coverage in the mapping of cluttered and complex environments, we dynamically fuse the measurements from the stereo vision camera system, the synthetic laser scanner, and the LiDAR. We propose solutions to self-correct errors in data fusion and create a hybrid map to assist the scooter in achieving collision-free navigation in an indoor environment.
Object Tracking Vision System for Mapping the UCN τ Apparatus Volume
NASA Astrophysics Data System (ADS)
Lumb, Rowan; UCNtau Collaboration
2016-09-01
The UCN τ collaboration has an immediate goal to measure the lifetime of the free neutron to within 0.1%, i.e. about 1 s. The UCN τ apparatus is a magneto-gravitational "bottle" system. This system holds low energy, or ultracold, neutrons in the apparatus with the constraint of gravity, and keeps these low energy neutrons from interacting with the bottle via a strong 1 T surface magnetic field created by a bowl-shaped array of permanent magnets. The apparatus is wrapped with energized coils to supply a magnetic field throughout the "bottle" volume to prevent depolarization of the neutrons. An object-tracking stereo-vision system will be presented that precisely tracks a Hall probe and allows a mapping of the magnetic field throughout the volume of the UCN τ bottle. The stereo-vision system utilizes two cameras and open source OpenCV software to track an object's 3D position in space in real time. The desired resolution is +/-1 mm along each axis. The vision system is being used as part of an even larger system to map the magnetic field of the UCN τ apparatus and expose any possible systematic effects due to field cancellation or low field points which could allow neutrons to depolarize and possibly escape from the apparatus undetected. Tennessee Technological University.
NASA Technical Reports Server (NTRS)
Hasler, A. F.
1981-01-01
Observations of cloud geometry using scan-synchronized stereo imagery from geostationary satellites, with horizontal spatial resolution of approximately 0.5 km and temporal resolution as fine as 3 min, are presented. The stereo technique does not require a cloud of known emissivity to be in equilibrium with an atmosphere of known vertical temperature profile. It is shown that absolute accuracies of about 0.5 km are possible. Qualitative and quantitative representations of atmospheric dynamics were produced by remapping, display, and stereo image analysis on an interactive computer/imaging system. Applications of stereo observations include: (1) cloud-top height contours of severe thunderstorms and hurricanes, (2) cloud top and base height estimates for cloud-wind height assignment, (3) cloud growth measurements for severe thunderstorm overshooting towers, (4) atmospheric temperature from stereo heights and infrared cloud-top temperatures, and (5) cloud emissivity estimation. Recommendations are given for future improvements in stereo observations, including a third GOES satellite, operational scan synchronization of all GOES satellites, and better-resolution sensors.
Evaluating RGB photogrammetry and multi-temporal digital surface models for detecting soil erosion
NASA Astrophysics Data System (ADS)
Anders, Niels; Keesstra, Saskia; Seeger, Manuel
2013-04-01
Photogrammetry is a widely used tool for generating high-resolution digital surface models. Unmanned Aerial Vehicles (UAVs) equipped with a Red Green Blue (RGB) camera have great potential for quickly acquiring multi-temporal high-resolution orthophotos and surface models. Such datasets would ease the monitoring of geomorphological processes, such as local soil erosion and rill formation after heavy rainfall events. In this study we test a photogrammetric setup to determine data requirements for soil erosion studies with UAVs. We used a rainfall simulator (5 m²) with a rig mounted above it carrying a Panasonic GX1 16-megapixel digital camera with a 20 mm lens. The soil material in the simulator consisted of loamy sand at an angle of 5 degrees. Stereo pair images were taken before and after rainfall simulation with 75-85% overlap. The acquired images were automatically mosaicked to create high-resolution orthorectified images and digital surface models (DSMs). We resampled the DSM to different spatial resolutions to analyze the effect of cell size on the accuracy of measured rill depth and soil loss estimates, and determined an optimal cell size (and thus flight altitude). Furthermore, the high spatial accuracy of the acquired surface models allows further analysis of rill formation and channel initiation related to, e.g., surface roughness. We suggest implementing near-infrared and temperature sensors to combine soil moisture and soil physical properties with surface morphology in future investigations.
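The soil-loss estimate from multi-temporal DSMs amounts to differencing the before and after surfaces and integrating the lowering over the cell area. The sketch below illustrates this with made-up numbers (a 0.05 m deep toy rill on a 0.1 m grid); it is not the authors' processing chain.

```python
import numpy as np

def soil_loss_volume(dsm_before, dsm_after, cell_size):
    # Net eroded volume in m^3: sum surface lowering (positive dz only)
    # times the area of each grid cell.
    dz = dsm_before - dsm_after
    return float(np.sum(dz[dz > 0]) * cell_size**2)

# Toy 4x4 DSMs: a 2-cell-wide rill, 0.05 m deep, on 0.1 m cells
before = np.zeros((4, 4))
after = before.copy()
after[:, 1:3] -= 0.05
vol = soil_loss_volume(before, after, cell_size=0.1)
```

Resampling the DSMs to a coarser cell size before this step smears the rill edges, which is exactly the cell-size sensitivity the study quantifies.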
Support Routines for In Situ Image Processing
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean
2013-01-01
This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the larger in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlation. (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images such as long-baseline pairs or those from an articulating arm camera. (4) marscoordtrans: Translates mosaic coordinates from one form into another. (5) marsdispcompare: Checks a Left-Right stereo disparity image against a Right-Left disparity image to ensure they are consistent with each other. (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right-eye image could be transformed to look as if it were taken from the left eye. (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface.
This helps verify, or improve, the pointing of in situ cameras. (8) marsinvrange: Inverse of marsrange; given a range file, re-computes an XYZ file that closely matches the original. (9) marsproj: Projects an XYZ coordinate through the camera model, and reports the line/sample coordinates of the point in the image. (10) marsprojfid: Given the output of marsfidfinder, projects the XYZ locations and compares them to the found locations, creating a report showing the fiducial errors in each image. (11) marsrad: Radiometrically corrects an image. (12) marsrelabel: Updates coordinate system or camera model labels in an image. (13) marstiexyz: Given a stereo pair, allows the user to interactively pick a point in each image and reports the XYZ value corresponding to that pair of locations. (14) marsunmosaic: Extracts a single frame from a mosaic, created such that it could have been an input to the original mosaic. Useful for creating simulated input frames using different camera models than the original mosaic used. (15) merinverter: Uses an inverse lookup table to convert 8-bit telemetered data to its 12-bit original form. Can be used in other missions despite the name.
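As an illustration of the disparity-warp idea behind marsdispwarp, the sketch below resamples one eye's image through a horizontal disparity map to synthesize an opposite-eye view. This is a simplified stand-in, not the JPL implementation (which handles full 2-D line/sample disparity maps and sub-pixel sampling).

```python
import numpy as np

def disparity_warp(img, disp):
    # Synthesize an opposite-eye view: out(y, x) = img(y, x + disp(y, x)).
    # Nearest-neighbor sampling; out-of-bounds samples are left at zero.
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            xs = int(round(x + disp[y, x]))
            if 0 <= xs < w:
                out[y, x] = img[y, xs]
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
warped = disparity_warp(img, np.ones((4, 4)))   # uniform 1-pixel disparity
```

Comparing such a synthesized image against the real opposite-eye image is one way to sanity-check a disparity map, which is the same consistency idea marsdispcompare applies to disparity pairs.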
Lunar geodesy and cartography: a new era
NASA Astrophysics Data System (ADS)
Duxbury, Thomas; Smith, David; Robinson, Mark; Zuber, Maria T.; Neumann, Gregory; Danton, Jacob; Oberst, Juergen; Archinal, Brent; Glaeser, Philipp
The Lunar Reconnaissance Orbiter (LRO) ushers in a new era in precision lunar geodesy and cartography. LRO was launched in June 2009, completed its Commissioning Phase in September 2009, and is now in its Primary Mission Phase, collecting high-precision global topographic and imaging data. Aboard LRO are the Lunar Orbiter Laser Altimeter (LOLA; Smith et al., 2009) and the Lunar Reconnaissance Orbiter Camera (LROC; Robinson et al., 2010). LOLA is a derivative of the successful MOLA at Mars, which produced the global reference surface used for all precision cartographic products. LOLA produces 5 altimetry spots with footprints of 5 m at a frequency of 28 Hz, significantly bettering MOLA, which produced 1 spot with a footprint of 150 m at a frequency of 10 Hz. LROC comprises twin narrow-angle cameras (NACs) with pixel resolutions of 0.5 m from a 50 km orbit and a wide-angle camera (WAC) with a pixel resolution of 75 m in up to 7 color bands. One of the two NACs looks to the right of nadir and the other to the left, with a few hundred pixels of overlap in the nadir direction. LOLA is mounted on the LRO spacecraft to look nadir, in the overlap region of the NACs. The LRO spacecraft can look nadir, building up global coverage, as well as off-nadir to provide stereo coverage and fill in data gaps. The LROC wide-angle camera builds up global stereo coverage naturally from its large field-of-view overlap from orbit to orbit during nadir viewing. To date, the LROC WAC has already produced global stereo coverage of the lunar surface. This report focuses on the registration of LOLA altimetry to the LROC NAC images. LOLA has a dynamic range of tens of km while producing elevation data at sub-meter precision. LOLA also has good return in off-nadir attitudes. Over the LRO mission, multiple LOLA tracks will fall within each NAC image at the lunar equator, and even more tracks within NAC images nearer the poles.
The registration of LOLA altimetry to NAC images is aided by the 5 spots showing regional and local slopes, along and across track, that are easily correlated visually with features within the images. One can precisely register each of the 5 LOLA spots to specific pixels in LROC images of distinct features such as craters and boulders. This can be performed routinely for features at the 100 m level and larger. However, even features at the several-meter level can be registered if a single LOLA spot probes the depth of a small crater while the other 4 spots are on the surrounding surface, or one spot returns from the top of a small boulder seen by the NAC. The automatic registration of LOLA tracks with NAC stereo digital terrain models should provide even higher accuracy. Also, the LOLA pulse spread of the returned signal, which is sensitive to slopes and roughness, is an additional source of information to help match the LOLA tracks to the images. As the global coverage builds, LOLA will provide absolute coordinates in latitude, longitude, and radius of surface features with accuracy at the meter level or better. The NAC images will then be registered to the LOLA reference surface in the production of precision, controlled photomosaics with spatial resolutions as good as 0.5 m/pixel. For hundreds of strategic sites viewed in stereo, even higher precision and more complete surface coverage is possible for the production of digital terrain models and mosaics. LRO, with LOLA and LROC, will improve the relative and absolute accuracy of geodesy and cartography by orders of magnitude, ushering in a new era for lunar geodesy and cartography. Robinson, M., et al., Space Sci. Rev., DOI 10.1007/s11214-010-9634-2, 2010, in press. Smith, D., et al., Space Sci. Rev., DOI 10.1007/s11214-009-9512-y, published online 16 May 2009.
Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.
Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena
2014-11-01
A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system features hardware- and software-synchronized image acquisition with timestamp embedding in the captured images, brain-surface-oriented feature detection, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, the Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed by tracking a customized object in different positions and orientations and at different linear and angular speeds. The system detects an immobile object's position and orientation with a maximum error of 0.5 mm and 1.6° over the whole depth of field, and tracks an object moving at up to 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domains. The amplitude of motion of the center of mass of the cloud of features was below 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis, related to the sympathovagal balance, breathing, and blood pressure, at 0.03-0.05, 0.2, and 1 Hz, respectively. The stereo vision system presented is a precise and robust system to measure brain shift and pulsatility, with an accuracy superior to other reported systems.
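The spectral decomposition described above amounts to a Fourier analysis of the tracked center-of-mass displacement. Below is a schematic reconstruction with a synthetic signal; the 30 Hz frame rate and the amplitudes are made-up assumptions, chosen only to reproduce the reported breathing (~0.2 Hz) and cardiac (~1 Hz) peaks.

```python
import numpy as np

fs = 30.0                        # assumed camera frame rate (Hz)
t = np.arange(0, 60, 1.0 / fs)   # one minute of tracked motion
# Synthetic center-of-mass displacement (mm): breathing plus a smaller
# cardiac component, mimicking the reported spectral structure
sig = 0.4 * np.sin(2 * np.pi * 0.2 * t) + 0.2 * np.sin(2 * np.pi * 1.0 * t)

spec = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
peak = freqs[np.argmax(spec[1:]) + 1]   # dominant non-DC spectral peak
```

With a 60 s window the frequency resolution is 1/60 Hz, so both physiological peaks land on exact FFT bins and are cleanly separable.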
NASA Astrophysics Data System (ADS)
Barnes, Robert; Gupta, Sanjeev; Gunn, Matt; Paar, Gerhard; Balme, Matt; Huber, Ben; Bauer, Arnold; Furya, Komyo; Caballo-Perucha, Maria del Pilar; Traxler, Chris; Hesina, Gerd; Ortner, Thomas; Banham, Steven; Harris, Jennifer; Muller, Jan-Peter; Tao, Yu
2017-04-01
A key focus of planetary rover missions is to use panoramic camera systems to image outcrops along rover traverses, in order to characterise their geology in the search for ancient life. These data can be processed to create 3D point clouds of rock outcrops for quantitative analysis. The Mars Utah Rover Field Investigation (MURFI 2016) is a Mars rover field-analogue mission run by the UK Space Agency (UKSA) in collaboration with the Canadian Space Agency (CSA). It took place between 22nd October and 13th November 2016 and consisted of a science team based in Harwell, UK, and a field team with an instrumented rover platform at the field site near Hanksville (Utah, USA). The Aberystwyth University PanCam Emulator 3 (AUPE3) camera system was used to collect stereo panoramas of the terrain the rover encountered during the field trials. Stereo imagery processed in PRoViP is rendered as Ordered Point Clouds (OPCs) in PRo3D, enabling the user to zoom, rotate and translate the 3D outcrop model. Interpretations can be digitised directly onto the 3D surface, and simple measurements can be taken of the dimensions of the outcrop and sedimentary features, including grain size. Dip and strike of bedding planes, stratigraphic and sedimentological boundaries, and fractures are calculated within PRo3D from mapped bedding contacts and fracture traces. Rover-derived imagery can be merged with UAV and orbital datasets to build semi-regional, multi-resolution 3D models of the area of operations for immersive analysis and contextual understanding. In-simulation, AUPE3 was mounted on the rover mast, collecting 16 stereo panoramas over 9 'sols'. Five out-of-simulation datasets were collected in the Hanksville-Burpee Quarry. Stereo panoramas were processed using an automated pipeline, with data transfer through an FTP server. PRo3D has been used for visualisation and analysis of this stereo data.
Features of interest in the area could be annotated, and their distances to the rover position measured, to aid prioritisation of science targeting. Where grains or rocks are present and visible, their dimensions can be measured. Interpretation of the sedimentological features of the outcrops has also been carried out. OPCs created from stereo imagery collected in the Hanksville-Burpee Quarry showed a general coarsening-up succession, with a red, well-layered mudstone overlain by stacked sandstone layers of irregular thickness and medium-coarse to pebbly grain size. Cross beds/laminations and lenses of finer sandstone were common. These features provide valuable information on the depositional environment. Development of PRo3D in preparation for application to the ExoMars 2020 and NASA Mars 2020 missions will be centred on validation of the data and measurements. Collection of in-situ field data by a human geologist allows for direct comparison of viewer-derived measurements with those taken in the field. The research leading to these results has received funding from the UK Space Agency Aurora programme and the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 PRoViDE, ESA PRODEX Contracts 4000105568 "ExoMars PanCam 3D Vision" and 4000116566 "Mars 2020 Mastcam-Z 3D Vision".
NASA Technical Reports Server (NTRS)
Boyer, K. L.; Wuescher, D. M.; Sarkar, S.
1991-01-01
Dynamic edge warping (DEW), a technique for recovering reasonably accurate disparity maps from uncalibrated stereo image pairs, is presented. No precise knowledge of the epipolar camera geometry is assumed. The technique is embedded in a system with structural stereopsis on the front end and robust estimation from digital photogrammetry on the other end, for the purpose of self-calibrating stereo image pairs. Once the relative camera orientation is known, the epipolar geometry is computed and the system can use this information to refine its representation of the object space. Such a system will find application in the autonomous extraction of terrain maps from stereo aerial photographs, for which camera position and orientation are unknown a priori, and in online autonomous calibration maintenance for robotic vision applications, in which the cameras are subject to vibration and other physical disturbances after calibration. This work thus forms a component of an intelligent system that begins with a pair of images and, having only vague knowledge of the conditions under which they were acquired, produces an accurate, dense, relative depth map. The resulting disparity map can also be used directly in some high-level applications involving qualitative scene analysis, spatial reasoning, and perceptual organization of the object space. The system as a whole substitutes high-level information and constraints for precise geometric knowledge in driving and constraining the early correspondence process.
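A toy version of scanline edge alignment in the spirit of DEW (not the authors' actual algorithm) can be written as a dynamic-programming sequence alignment of edge positions along corresponding rows; the gap penalty and mismatch weight below are arbitrary illustrative choices.

```python
import numpy as np

def align_edges(left, right, gap=1.0, w=0.1):
    # DP alignment of edge x-positions on one scanline: matching two edges
    # costs w * |xl - xr|; skipping an edge in either image costs `gap`.
    n, m = len(left), len(right)
    D = np.zeros((n + 1, m + 1))
    D[0, :] = np.arange(m + 1) * gap
    D[:, 0] = np.arange(n + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j - 1] + w * abs(left[i - 1] - right[j - 1]),
                          D[i - 1, j] + gap, D[i, j - 1] + gap)
    # Backtrack to recover the matched edge pairs (and hence disparities)
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if np.isclose(D[i, j], D[i - 1, j - 1] + w * abs(left[i - 1] - right[j - 1])):
            pairs.append((left[i - 1], right[j - 1])); i -= 1; j -= 1
        elif np.isclose(D[i, j], D[i - 1, j] + gap):
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

pairs = align_edges([10, 20, 30], [12, 22, 32])
disparities = [xl - xr for xl, xr in pairs]
```

Because the alignment only needs roughly corresponding rows, not exact epipolar lines, this style of matching tolerates the uncalibrated geometry the abstract describes.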
2005-09-11
Taking advantage of extra solar energy collected during the day, NASA's Mars Exploration Rover Spirit settled in for an evening of stargazing, photographing the two moons of Mars as they crossed the night sky. The first two images in this sequence show gradual enhancements in the surface detail of Mars' largest moon, Phobos, made possible through a combination technique known as "stacking." In stacking, scientists use a mathematical process known as Laplacian sharpening to reinforce features that appear consistently in repetitive images and minimize features that show up only intermittently. In this view of Phobos, the large crater named Stickney is just out of sight on the moon's upper right limb. Spirit acquired the first two images with the panoramic camera on the night of sol 585 (Aug. 26, 2005). The far-right image of Phobos, for comparison, was taken by the High Resolution Stereo Camera on Mars Express, a European Space Agency orbiter. The third image in this sequence was derived from the far-right image by blurring it for comparison with the panoramic camera images to the left. http://photojournal.jpl.nasa.gov/catalog/PIA06335
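A schematic version of the stacking-plus-Laplacian-sharpening process described above might look as follows; this is a sketch of the general technique, not JPL's actual pipeline, and the sharpening gain is an arbitrary choice.

```python
import numpy as np

def stack_and_sharpen(frames, alpha=1.0):
    # Average the repeated frames (reinforces consistently appearing
    # features, averages out intermittent ones), then apply Laplacian
    # sharpening: out = mean - alpha * Laplacian(mean).
    mean = np.mean(frames, axis=0)
    lap = (np.roll(mean, 1, 0) + np.roll(mean, -1, 0) +
           np.roll(mean, 1, 1) + np.roll(mean, -1, 1) - 4.0 * mean)
    return mean - alpha * lap

# A flat (featureless) stack should come back unchanged
frames = [np.ones((5, 5)) for _ in range(3)]
sharpened = stack_and_sharpen(frames)
```

Subtracting the Laplacian boosts local intensity differences, which is why edges and crater rims that persist across frames come out crisper than in any single exposure.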
Railway clearance intrusion detection method with binocular stereo vision
NASA Astrophysics Data System (ADS)
Zhou, Xingfang; Guo, Baoqing; Wei, Wei
2018-03-01
During railway construction and operation, objects intruding into the railway clearance greatly threaten the safety of railway operation, so real-time intrusion detection is of great importance. To overcome the shortcomings of single-image methods, namely insensitivity to depth and susceptibility to shadow interference, an intrusion detection method based on binocular stereo vision is proposed that reconstructs the 3D scene to locate objects and judge clearance intrusion. The binocular cameras are calibrated with Zhang Zhengyou's method. To improve the 3D reconstruction speed, a suspicious region is first identified by a background-difference method applied to a single camera's image sequence; image rectification, stereo matching, and 3D reconstruction are executed only when a suspicious region exists. A transformation matrix from the Camera Coordinate System (CCS) to the Track Coordinate System (TCS) is computed using the gauge constant and used to transfer the 3D point clouds into the TCS, where the point clouds are used to calculate each object's position and intrusion status. Experiments in a railway scene show a position precision better than 10 mm. The method is an effective way to detect clearance intrusion and can satisfy the requirements of railway applications.
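The final CCS-to-TCS step and the intrusion test can be sketched as below. The box-shaped clearance gauge, its dimensions, the axis convention, and the identity pose are all illustrative assumptions; the real transform comes from the gauge-based calibration described in the abstract.

```python
import numpy as np

def to_track_coords(pts_ccs, R, t):
    # Rigid transform of 3D points from Camera to Track Coordinate System
    return pts_ccs @ R.T + t

def clearance_intrusion(pts_tcs, half_width=1.75, height=5.0):
    # Simplified box-shaped clearance gauge in the TCS (assumed axes:
    # x lateral from the track centerline, y vertical above the rails,
    # z along the track); flag points inside the gauge volume.
    lateral, vertical = pts_tcs[:, 0], pts_tcs[:, 1]
    return (np.abs(lateral) < half_width) & (vertical > 0) & (vertical < height)

R, t = np.eye(3), np.zeros(3)        # identity pose, for illustration only
pts = np.array([[0.5, 1.0, 10.0],    # near the centerline -> intrusion
                [3.0, 1.0, 10.0]])   # well to the side    -> clear
flags = clearance_intrusion(to_track_coords(pts, R, t))
```

A real clearance gauge is a more complex cross-section than a box, but the per-point test against a TCS-defined boundary works the same way.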
The Information Available to a Moving Observer on Shape with Unknown, Isotropic BRDFs.
Chandraker, Manmohan
2016-07-01
Psychophysical studies show motion cues inform about shape even with unknown reflectance. Recent works in computer vision have considered shape recovery for an object of unknown BRDF using light source or object motions. This paper proposes a theory that addresses the remaining problem of determining shape from the (small or differential) motion of the camera, for unknown isotropic BRDFs. Our theory derives a differential stereo relation that relates camera motion to surface depth, which generalizes traditional Lambertian assumptions. Under orthographic projection, we show differential stereo may not determine shape for general BRDFs, but suffices to yield an invariant for several restricted (still unknown) BRDFs exhibited by common materials. For the perspective case, we show that differential stereo yields the surface depth for unknown isotropic BRDF and unknown directional lighting, while additional constraints are obtained with restrictions on the BRDF or lighting. The limits imposed by our theory are intrinsic to the shape recovery problem and independent of choice of reconstruction method. We also illustrate trends shared by theories on shape from differential motion of light source, object or camera, to relate the hardness of surface reconstruction to the complexity of imaging setup.
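For context, the classical Lambertian special case that this theory generalizes can be written out. Under differential camera motion with translation $\tau$ and rotation $\omega$, the motion field at normalized image coordinates $(x, y)$ for a point at depth $z$ is the standard expression

```latex
u = \frac{x\,\tau_z - \tau_x}{z} + xy\,\omega_x - (1 + x^2)\,\omega_y + y\,\omega_z,
\qquad
v = \frac{y\,\tau_z - \tau_y}{z} + (1 + y^2)\,\omega_x - xy\,\omega_y - x\,\omega_z .
```

For a Lambertian surface, brightness constancy $I_x u + I_y v + I_t = 0$ then ties the known camera motion and image derivatives to the unknown depth $z$; the paper's differential stereo relation is the replacement for this constraint when the BRDF is an unknown isotropic function rather than Lambertian.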
A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming
2018-06-01
This paper introduces a simple method to achieve full-field, real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: a tracking camera, used for tracking the positions of the MBVS, and a working camera, used for the 3D reconstruction task. The MBVS has several advantages over a single moving camera or a multi-camera network. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the mobility of the system guarantees appropriate baselines, supplying more robust point correspondences. Additionally, using one working camera avoids a drawback of multi-camera networks: variability in the cameras' parameters and performance can significantly affect the accuracy and robustness of feature extraction and stereo matching. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.
Cheng, Victor S; Bai, Jinfen; Chen, Yazhu
2009-11-01
As the needs for various kinds of body surface information are wide-ranging, we developed an integrated imaging-sensor system that can synchronously acquire high-resolution three-dimensional (3D), far-infrared (FIR) thermal, and true-color images of the body surface. The proposed system integrates one FIR camera and one color camera with a 3D structured-light binocular profilometer. To eliminate the discomfort caused by intense light projected directly into the subject's eyes by the LCD projector, we developed a gray-encoding strategy based on an optimized fringe-projection layout. A self-heated checkerboard was employed to calibrate the different types of cameras. We then calibrated the structured light emitted by the LCD projector, based on the stereo-vision principle and a least-squares quadric surface-fitting algorithm. Afterwards, the precise 3D surface can be fused with the undistorted thermal and color images. To support medical applications, a region of interest (ROI) in the temperature or color image representing a surface area of clinical interest can be located at the corresponding position in the other images through coordinate-system transformation. System evaluation demonstrated a mapping error between FIR and visual images of three pixels or less. Experiments show that this work is significantly useful in certain disease diagnoses.
Transient full-field vibration measurement using spectroscopical stereo photogrammetry.
Yue, Kaiduan; Li, Zhongke; Zhang, Ming; Chen, Shan
2010-12-20
In contrast to other vibration measurement methods, a novel spectroscopic photogrammetric approach is proposed. Two colored light filters and a color CCD camera are used to perform the function of two traditional cameras. A new calibration method is then presented; it focuses on the vibrating object rather than the camera, and is more accurate than traditional camera calibration. Test results have shown an accuracy of 0.02 mm.
Dent detection method by high gradation photometric stereo
NASA Astrophysics Data System (ADS)
Hasebe, Akihisa; Kato, Kunihito; Tanahashi, Hideki; Kubota, Naoki
2017-03-01
This paper describes an automatic method for detecting small dents on a metal plate. We adopted photometric stereo as the three-dimensional measurement method, which has the advantages of low cost and short measurement time. In addition, a high-precision measurement system was realized by using an 18-bit camera. Small dents on the surface of the metal plate are detected via the inner products of the surface normal vectors measured by photometric stereo. Finally, the effectiveness of our method was confirmed by detection experiments.
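A minimal NumPy sketch of the two steps, Lambertian photometric stereo followed by inner-product dent flagging, is given below; the light directions, plate size, and detection threshold are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def photometric_stereo(images, lights):
    # Lambertian model I = L (rho * n): least-squares per-pixel normals
    h, w = images[0].shape
    I = np.stack([im.ravel() for im in images])        # (k, h*w)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)     # (3, h*w)
    return (G / np.linalg.norm(G, axis=0)).reshape(3, h, w)

def dent_map(normals, threshold=0.99):
    # A dent deviates from the dominant surface orientation: flag pixels
    # whose inner product with the mean normal falls below the threshold
    mean_n = normals.mean(axis=(1, 2))
    mean_n /= np.linalg.norm(mean_n)
    return np.einsum('i,ijk->jk', mean_n, normals) < threshold

# Synthetic flat plate with one tilted pixel standing in for a dent wall
L = np.array([[1., 0., 1.], [0., 1., 1.], [-1., -1., 1.]])
L /= np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.zeros((3, 8, 8)); n_true[2] = 1.0
n_true[:, 4, 4] = np.array([0.3, 0.0, 0.954]) / np.linalg.norm([0.3, 0.0, 0.954])
images = [np.einsum('i,ijk->jk', l, n_true) for l in L]
dents = dent_map(photometric_stereo(images, L))
```

With high-bit-depth images the recovered normals are precise enough that even a small tilt in the dot product stands out against the flat background, which is the premise of using an 18-bit camera.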
Development of a stereo 3-D pictorial primary flight display
NASA Technical Reports Server (NTRS)
Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille
1989-01-01
Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth perception that otherwise might be lacking. In addition, the third dimension can serve as an extra dimension along which information can be encoded. Historically, stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste; in the last example, the source of the stereo images has generally been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described, and the applicability of stereo 3-D displays for aerospace crew stations to meet the anticipated needs of the 2000 to 2020 time frame is investigated. Although the actual equipment that could be used in an aerospace vehicle is not currently available, this laboratory research is necessary to determine where stereo 3-D enhances the display of information and how such displays should be formatted.
Stereoscopic image production: live, CGI, and integration
NASA Astrophysics Data System (ADS)
Criado, Enrique
2006-02-01
This paper briefly describes part of the experience gathered in more than 10 years of stereoscopic movie production, some of the most common problems encountered, and the solutions, with varying success, that we applied to solve them. Our work is mainly focused on the entertainment market: theme parks, museums, and other cultural locations and events. For our movies, we have been forced to develop our own devices to permit correct stereo shooting (stereoscopic rigs) and real-time stereo monitoring, and to solve problems found with conventional film editing, compositing, and postproduction software. Here, we discuss stereo lighting, monitoring, special effects, image integration (using dummies and more), stereo-camera parameters, and other general 3-D movie production aspects.
A digital system for surface reconstruction
Zhou, Weiyang; Brock, Robert H.; Hopkins, Paul F.
1996-01-01
A digital photogrammetric system, STEREO, was developed to determine three dimensional coordinates of points of interest (POIs) defined with a grid on a textureless and smooth-surfaced specimen. Two CCD cameras were set up with unknown orientation and recorded digital images of a reference model and a specimen. Points on the model were selected as control or check points for calibrating or assessing the system. A new algorithm for edge-detection called local maximum convolution (LMC) helped extract the POIs from the stereo image pairs. The system then matched the extracted POIs and used a least squares “bundle” adjustment procedure to solve for the camera orientation parameters and the coordinates of the POIs. An experiment with STEREO found that the standard deviation of the residuals at the check points was approximately 24%, 49% and 56% of the pixel size in the X, Y and Z directions, respectively. The average of the absolute values of the residuals at the check points was approximately 19%, 36% and 49% of the pixel size in the X, Y and Z directions, respectively. With the graphical user interface, STEREO demonstrated a high degree of automation and its operation does not require special knowledge of photogrammetry, computers or image processing.
Dust Devil in Spirit's View Ahead on Sol 1854 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11960 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11960 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,854th Martian day, or sol, of Spirit's surface mission (March 21, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 13.79 meters (45 feet) westward earlier on Sol 1854. West is at the center, where a dust devil is visible in the distance. North is on the right, where Husband Hill dominates the horizon; Spirit was on top of Husband Hill in September and October 2005. South is on the left, where lighter-toned rock lines the edge of the low plateau called 'Home Plate.' This view is presented as a cylindrical-perspective projection with geometric seam correction.
In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.
Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P
2017-01-25
Action sport cameras (ASC) have gained broad acceptance for recreational purposes owing to their decreasing cost, increasing image resolution and frame rate, and plug-and-play usability. Consequently, they have recently been considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution, and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of a split-volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. Reconstructing whole swimming cycles thus requires merging simultaneous measurements acquired in both volumes, and characterizing and optimizing the instrumental errors of such a configuration makes it mandatory to assess the instrumental errors of both volumes. To calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests focused on the comparison of the different camera configurations; each camera configuration was then compared across the two environments.
In all assessed resolutions, and in both environments, the reconstruction error (deviation from the true distance between the two testing markers) was less than 3 mm, and the error relative to the working-volume diagonal was in the range of 1:2000 (3×1.3×1.5 m³) to 1:7000 (4.5×2.2×1.5 m³), in agreement with the literature. Statistically, the 3D accuracy obtained in the in-air environment was poorer (p < 10⁻⁵) than that in the underwater environment, across all the tested camera configurations. Regarding the repeatability of the camera parameters, we found very low variability in both environments (1.7% in-air and 2.9% underwater). This result encourages the use of ASC technology for quantitative reconstruction in both in-air and underwater environments. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Turtle, E. P.; McEwen, A. S.; Collins, G. C.; Fletcher, L. N.; Hansen, C. J.; Hayes, A.; Hurford, T., Jr.; Kirk, R. L.; Barr, A.; Nimmo, F.; Patterson, G.; Quick, L. C.; Soderblom, J. M.; Thomas, N.
2015-12-01
The Europa Imaging System will transform our understanding of Europa through global decameter-scale coverage, three-dimensional maps, and unprecedented meter-scale imaging. EIS combines narrow-angle and wide-angle cameras (NAC and WAC) designed to address high-priority Europa science and reconnaissance goals. It will: (A) Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar; (B) Constrain formation processes of surface features and the potential for current activity by characterizing endogenic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure, and by searching for evidence of recent activity, including potential plumes; and (C) Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. The NAC provides very high-resolution, stereo reconnaissance, generating 2-km-wide swaths at 0.5-m pixel scale from 50-km altitude, and uses a gimbal to enable independent targeting. NAC observations also include: near-global (>95%) mapping of Europa at ≤50-m pixel scale (to date, only ~14% of Europa has been imaged at ≤500 m/pixel, with best pixel scale 6 m); regional and high-resolution stereo imaging at <1-m/pixel; and high-phase-angle observations for plume searches. The WAC is designed to acquire pushbroom stereo swaths along flyby ground-tracks, generating digital topographic models with 32-m spatial scale and 4-m vertical precision from 50-km altitude. These data support characterization of cross-track clutter for radar sounding. The WAC also performs pushbroom color imaging with 6 broadband filters (350-1050 nm) to map surface units and correlations with geologic features and topography. 
EIS will provide comprehensive data sets essential to fulfilling the goal of exploring Europa to investigate its habitability and perform collaborative science with other investigations, including cartographic and geologic maps, regional and high-resolution digital topography, GIS products, color and photometric data products, a geodetic control network tied to radar altimetry, and a database of plume-search observations.
The spacecraft control laboratory experiment optical attitude measurement system
NASA Technical Reports Server (NTRS)
Welch, Sharon S.; Montgomery, Raymond C.; Barsky, Michael F.
1991-01-01
A stereo camera tracking system was developed to provide a near real-time measure of the position and attitude of the Spacecraft COntrol Laboratory Experiment (SCOLE). The SCOLE is a mockup of a shuttle-like vehicle with an attached flexible mast and (simulated) antenna, and was designed to provide a laboratory environment for the verification and testing of control laws for large flexible spacecraft. Actuators and sensors located on the shuttle and antenna sense the states of the spacecraft and allow the position and attitude to be controlled. The stereo camera tracking system consists of two position-sensitive-detector cameras which sense the locations of small infrared LEDs attached to the surface of the shuttle. Information on shuttle position and attitude is provided in six degrees of freedom. The design of this optical system, its calibration, and the tracking algorithm are described. The performance of the system is evaluated for yaw-only motion.
Bias Reduction and Filter Convergence for Long Range Stereo
NASA Technical Reports Server (NTRS)
Sibley, Gabe; Matthies, Larry; Sukhatme, Gaurav
2005-01-01
We are concerned here with improving long range stereo by filtering image sequences. Traditionally, measurement errors from stereo camera systems have been approximated as 3-D Gaussians, where the mean is derived by triangulation and the covariance by linearized error propagation. However, there are two problems that arise when filtering such 3-D measurements. First, stereo triangulation suffers from a range dependent statistical bias; when filtering this leads to over-estimating the true range. Second, filtering 3-D measurements derived via linearized error propagation leads to apparent filter divergence; the estimator is biased to under-estimate range. To address the first issue, we examine the statistical behavior of stereo triangulation and show how to remove the bias by series expansion. The solution to the second problem is to filter with image coordinates as measurements instead of triangulated 3-D coordinates.
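The range-dependent bias described above can be demonstrated with a small Monte Carlo sketch (the camera parameters below are illustrative assumptions, not the paper's setup): stereo range is z = f·b/d, and because 1/d is convex, zero-mean noise on the disparity d inflates the mean triangulated range (Jensen's inequality), with the effect growing at long range where d is small.

```python
import numpy as np

# Illustrative sketch, not the paper's algorithm. Range z = f*b/d for
# focal length f (pixels), baseline b (m), disparity d (pixels).
# Zero-mean Gaussian noise on d biases the mean range estimate upward.
rng = np.random.default_rng(0)
f, b = 1000.0, 0.3            # hypothetical camera: f in px, b in m
d_true = 5.0                  # small disparity -> long range
z_true = f * b / d_true       # 60 m

noise = rng.normal(0.0, 0.5, size=200_000)   # 0.5 px disparity noise
z_est = f * b / (d_true + noise)             # naive per-frame triangulation
bias = z_est.mean() - z_true                 # positive: range over-estimated
print(f"true range {z_true:.1f} m, mean estimate {z_est.mean():.2f} m, bias {bias:+.2f} m")
```

To second order the bias is roughly z·σ²/d², so dividing the estimate by (1 + σ²/d²) removes most of it; this is the flavor of series-expansion correction the abstract alludes to.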
Layers of 'Cabo Frio' in 'Victoria Crater' (Stereo)
NASA Technical Reports Server (NTRS)
2006-01-01
This view of 'Victoria crater' is looking southeast from 'Duck Bay' towards the dramatic promontory called 'Cabo Frio.' The small crater in the right foreground, informally known as 'Sputnik,' is about 20 meters (about 65 feet) away from the rover. The tip of the spectacular, layered Cabo Frio promontory itself is about 200 meters (about 650 feet) away from the rover, and its exposed rock layers are about 15 meters (about 50 feet) tall. This is a red-blue stereo anaglyph generated from images taken by the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity during the rover's 952nd sol, or Martian day (Sept. 28, 2006), using the camera's 430-nanometer filters.
Graphic overlays in high-precision teleoperation: Current and future work at JPL
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1989-01-01
In space teleoperation, additional problems arise, including signal-transmission time delays, which can greatly reduce operator performance. Recent advances in graphics open new possibilities for addressing these and other problems. Currently, a multi-camera system with normal 3-D TV and video-graphics capabilities is being developed. Trained and untrained operators will be tested for high-precision performance using two force-reflecting hand controllers and a voice recognition system to control two robot arms and up to 5 movable stereo or non-stereo TV cameras. A number of new techniques for integrating TV and video-graphics displays to improve operator training and performance in teleoperation and supervised automation are evaluated.
Layers of 'Cape Verde' in 'Victoria Crater' (Stereo)
NASA Technical Reports Server (NTRS)
2006-01-01
This view of 'Victoria crater' is looking north from 'Duck Bay' towards the dramatic promontory called 'Cape Verde.' The dramatic cliff of layered rocks is about 50 meters (about 165 feet) away from the rover and is about 6 meters (about 20 feet) tall. The taller promontory beyond that is about 100 meters (about 325 feet) away, and the vista beyond extends for more than 400 meters (about 1300 feet) into the distance. This is a red-blue stereo anaglyph generated from images taken by the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity during the rover's 952nd sol, or Martian day (Sept. 28, 2006), using the camera's 430-nanometer filters.
Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter
NASA Astrophysics Data System (ADS)
Tulyakov, Stepan; Ivanov, Anton; Thomas, Nicolas; Roloff, Victoria; Pommerol, Antoine; Cremonese, Gabriele; Weigel, Thomas; Fleuret, Francois
2018-01-01
There are many geometric calibration methods for "standard" cameras. These methods, however, cannot be used to calibrate telescopes with large focal lengths and complex off-axis optics, and specialized calibration methods for telescopes are scarce in the literature. We describe the calibration method that we developed for the Colour and Stereo Surface Imaging System (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO). Although our method is described in the context of CaSSIS, with camera-specific experiments, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available online.
Photogrammetric Analysis of Rotor Clouds Observed during T-REX
NASA Astrophysics Data System (ADS)
Romatschke, U.; Grubišić, V.
2017-12-01
Stereo photogrammetric analysis is a rarely utilized but highly valuable tool for studying smaller, highly ephemeral clouds. In this study, we make use of data that was collected during the Terrain-induced Rotor Experiment (T-REX), which took place in Owens Valley, eastern California, in the spring of 2006. The data set consists of matched digital stereo photographs obtained at high temporal (on the order of seconds) and spatial resolution (limited by the pixel size of the cameras). Using computer vision techniques we have been able to develop algorithms for camera calibration, automatic feature matching, and ultimately reconstruction of 3D cloud scenes. Applying these techniques to images from different T-REX IOPs we capture the motion of clouds in several distinct mountain wave scenarios ranging from short lived lee wave clouds on an otherwise clear sky day to rotor clouds formed in an extreme turbulence environment with strong winds and high cloud coverage. Tracking the clouds in 3D space and time allows us to quantify phenomena such as vertical and horizontal movement of clouds, turbulent motion at the upstream edge of rotor clouds, the structure of the lifting condensation level, extreme wind shear, and the life cycle of clouds in lee waves. When placed into context with the existing literature that originated from the T-REX field campaign, our results complement and expand our understanding of the complex dynamics observed in a variety of different lee wave settings.
Stereo camera based virtual cane system with identifiable distance tactile feedback for the blind.
Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun
2014-06-13
In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and a 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. For the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is perfectly identifiable to the blind.
Stereo View of Martian Rock Target 'Funzie'
2018-02-08
The surface of the Martian rock target in this stereo image includes small hollows with a "swallowtail" shape characteristic of some gypsum crystals, most evident in the lower left quadrant. These hollows may have resulted from the original crystallizing mineral subsequently dissolving away. The view appears three-dimensional when seen through blue-red glasses with the red lens on the left. The scene spans about 2.5 inches (6.5 centimeters). This rock target, called "Funzie," is near the southern, uphill edge of "Vera Rubin Ridge" on lower Mount Sharp. The stereo view combines two images taken from slightly different angles by the Mars Hand Lens Imager (MAHLI) camera on NASA's Curiosity Mars rover, with the camera about 4 inches (10 centimeters) above the target. Fig. 1 and Fig. 2 are the separate "right-eye" and "left-eye" images, taken on Jan. 11, 2018, during the 1,932nd Martian day, or sol, of the rover's work on Mars. Right-eye and left-eye images are available at https://photojournal.jpl.nasa.gov/catalog/PIA22212
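As a minimal sketch (not mission software), a red-blue anaglyph like those described in these records can be composed by taking the red channel from the left-eye image and the green and blue channels from the right-eye image:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red-left anaglyph: red channel from the left-eye image,
    green and blue channels from the right-eye image.

    Both inputs are H x W x 3 uint8 arrays of the same shape."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]    # red   <- left eye
    anaglyph[..., 1] = right_rgb[..., 1]   # green <- right eye
    anaglyph[..., 2] = right_rgb[..., 2]   # blue  <- right eye
    return anaglyph
```

Viewed through red-blue glasses with the red lens on the left, each eye then sees only its own image, matching the viewing instructions in these records.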
Low-cost telepresence for collaborative virtual environments.
Rhee, Seon-Min; Ziegler, Remo; Park, Jiyoung; Naef, Martin; Gross, Markus; Kim, Myoung-Hee
2007-01-01
We present a novel low-cost method for visual communication and telepresence in a CAVE-like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. The system was designed to extend existing projection systems with acquisition capabilities requiring minimal hardware modifications and cost. The system uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background. The system consists of two color cameras and two additional b/w cameras used for segmentation in the near-IR spectrum. There is no need for special optics as the mask and color image are merged using image-warping based on a depth estimation. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.
Compact Autonomous Hemispheric Vision System
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.
2012-01-01
Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
A fuzzy structural matching scheme for space robotics vision
NASA Technical Reports Server (NTRS)
Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka
1994-01-01
In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of regions of images and effectively reduces the computational burden in the following low level matching process. Three dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.
Magma rheology from 3D geometry of martian lava flows
NASA Astrophysics Data System (ADS)
Allemand, P.; Deschamps, A.; Lesaout, M.; Delacourt, C.; Quantin, C.; Clenet, H.
2012-04-01
Volcanism is an important geologic agent that has been recently active at the surface of Mars. The composition of individual lava flows is difficult to infer from spectroscopic data because of the absence of crystallized minerals and the possible cover of the flows by dust. The 3D geometry of lava flows provides an interesting alternative for inferring the chemical composition of lavas and their effusion rates: chemical composition exerts a strong control on the viscosity and yield strength of the magma, and the overall geometry of a lava flow reflects its emplacement rate. Until recently, these studies were carried out from 2D data; the third dimension, a key parameter, was deduced or assumed from local shadow measurements on MGS THEMIS IR images with an uncertainty of more than 500%. Recent CTX data (MRO mission) allow Digital Elevation Models to be computed at a resolution of 1 or 2 pixels (5 to 10 m) with the help of ISIS and the Ames Stereo Pipeline. The CTX images are first converted into a format readable by ISIS. The external geometric parameters of the CTX camera are computed and added to the image header with ISIS. During a correlation phase, the homologous pixels are searched for on the pair of stereo images. Finally, the DEM is computed from the positions of the homologous pixels and the geometric parameters of the CTX camera. Twenty DEMs have been computed from stereo images showing lava flows of various ages in the regions of Cerberus, Elysium, Daedalia and Amazonis Planitia. The 3D parameters of the lava flows have been measured on the DEMs and tested against shadow measurements. These 3D parameters have been inverted to estimate the viscosity and the yield strength of the flows; the effusion rate has also been estimated. These parameters have been compared to those of similar lava flows of the East Pacific Rise.
Dig Hazard Assessment Using a Stereo Pair of Cameras
NASA Technical Reports Server (NTRS)
Rankin, Arturo L.; Trebi-Ollennu, Ashitey
2012-01-01
This software evaluates the terrain within reach of a lander's robotic arm for dig hazards using a stereo pair of cameras that are part of the lander's sensor system. A relative level of risk is calculated for a set of dig sectors. There are two versions of this software: one is designed to run onboard a lander as part of the flight software, and the other runs on a PC under Linux as a ground tool that produces the same results generated on the lander, given stereo images acquired by the lander and downlinked to Earth. Onboard dig hazard assessment is accomplished by executing a workspace panorama command sequence. This sequence acquires a set of stereo pairs of images of the terrain the arm can reach, generates a set of candidate dig sectors, and assesses the dig hazard of each candidate dig sector. The 3D perimeter points of candidate dig sectors are generated using configurable parameters. A 3D reconstruction of the terrain in front of the lander is generated using a set of stereo images acquired from the mast cameras. The 3D reconstruction is used to evaluate the dig goodness of each candidate dig sector based on eight metrics: (1) the maximum change in elevation in each sector; (2) the elevation standard deviation in each sector; (3) the forward tilt of each sector with respect to the payload frame; (4) the side tilt of each sector with respect to the payload frame; (5) the maximum size of missing-data regions in each sector; (6) the percentage of a sector that has missing data; (7) the roughness of each sector; and (8) the monochrome intensity standard deviation of each sector. Each of the eight metrics forms a goodness image layer where the goodness value of each sector ranges from 0 to 1. Goodness values of 0 and 1 correspond to high and low risk, respectively. For each dig sector, the eight goodness values are merged by selecting the lowest one.
Including the merged goodness image layer, there are nine goodness image layers for each stereo pair of mast images.
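The merge step described above, in which each sector keeps the lowest of its eight goodness values so that a sector is only as safe as its worst metric, can be sketched as follows (the metric scores below are hypothetical, not flight data):

```python
import numpy as np

# Sketch of the min-merge of eight per-sector goodness layers
# (0 = high risk, 1 = low risk).
def merge_goodness(layers):
    layers = np.asarray(layers, dtype=float)      # shape (8, n_sectors)
    assert layers.ndim == 2 and layers.shape[0] == 8
    return layers.min(axis=0)

# example: 3 candidate sectors scored by the 8 metrics listed above
scores = np.array([
    [0.9, 0.2, 1.0],   # (1) max elevation change
    [0.8, 0.9, 1.0],   # (2) elevation std dev
    [1.0, 0.7, 0.4],   # (3) forward tilt
    [0.9, 0.8, 0.9],   # (4) side tilt
    [1.0, 1.0, 0.6],   # (5) max missing-data region
    [0.7, 0.9, 1.0],   # (6) percent missing data
    [0.8, 0.6, 0.9],   # (7) roughness
    [1.0, 0.9, 0.8],   # (8) intensity std dev
])
print(merge_goodness(scores))   # prints [0.7 0.2 0.4]
```

Taking the minimum rather than, say, the mean is a conservative design choice: one bad metric is enough to mark a sector as risky.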
Video enhancement workbench: an operational real-time video image processing system
NASA Astrophysics Data System (ADS)
Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.
1993-01-01
Video image sequences can be exploited in real-time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of directly adjacent objects. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
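The frame-averaging claim can be checked with a short sketch (the synthetic scene and noise level are assumptions): for zero-mean noise, averaging N frames reduces the noise standard deviation by roughly a factor of √N, lifting faint objects above the noise floor.

```python
import numpy as np

# Synthetic demonstration of frame averaging against zero-mean noise.
rng = np.random.default_rng(2)
scene = np.zeros((64, 64))
scene[30:34, 30:34] = 10.0                 # faint object, well below the noise
frames = scene + rng.normal(0.0, 25.0, size=(16, 64, 64))   # 16 noisy frames

single_noise = (frames[0] - scene).std()            # ~25
avg_noise = (frames.mean(axis=0) - scene).std()     # ~25/sqrt(16) ~ 6
print(f"noise std: single frame {single_noise:.1f}, 16-frame average {avg_noise:.1f}")
```

This is why averaging helps only when the noise is (near) zero-mean and the scene is static over the averaging window; motion between frames smears the signal instead.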
Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras
NASA Technical Reports Server (NTRS)
Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary
2011-01-01
TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speeds. The limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. TIR perception capabilities JPL has explored include: (1) single- and dual-band TIR terrain classification; (2) obstacle detection (pedestrians, vehicles, tree trunks, ditches, and water); and (3) perception through obscurants.
Center for Coastline Security Technology, Year 3
2008-05-01
Topics include polarization control for 3D imaging with the Sony SRX-R105 digital cinema projectors and the HDMAX camera and Sony SRX-R105 projector configuration for 3D display: a system that combines a pair of FAU's HDMAX video cameras with a pair of Sony SRX-R105 digital cinema projectors for stereo imaging and projection.
Human machine interface by using stereo-based depth extraction
NASA Astrophysics Data System (ADS)
Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan
2014-03-01
The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibility of a convincing 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight (ToF) camera, can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.
Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery
NASA Astrophysics Data System (ADS)
Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng
2012-10-01
In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of human face for registration with CT images. Reliability and accuracy of the application is enhanced by the use of fiduciary markers fixed to the skull. The MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide reference when the AR marker is invisible to the camera. Relationship between the markers on the face and the augmented reality marker is obtained by a registration procedure by the stereo vision system and is updated on-line. A commercially available Android based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.
A stereoscopic lens for digital cinema cameras
NASA Astrophysics Data System (ADS)
Lipton, Lenny; Rupkalvis, John
2015-03-01
Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.
Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation
2004-12-01
area, and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image-Based Rendering. … Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal) … can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the …
Characterization of ASTER GDEM Elevation Data over Vegetated Area Compared with Lidar Data
NASA Technical Reports Server (NTRS)
Ni, Wenjian; Sun, Guoqing; Ranson, Kenneth J.
2013-01-01
Current research based on aerial or spaceborne stereo images with very high resolution (less than 1 meter) has demonstrated that it is possible to derive vegetation height from stereo images. The second version of the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is a state-of-the-art global elevation dataset developed from stereo images. However, the resolution of ASTER stereo images (15 meters) is much coarser than that of aerial stereo images, and the ASTER GDEM is a product compiled from stereo images acquired over 10 years; forest disturbance as well as forest growth is inevitable over a 10-year time span. In this study, the features of ASTER GDEM over vegetated areas under both flat and mountainous conditions were investigated by comparison with lidar data. The factors considered as possibly affecting the extraction of vegetation canopy height include (1) co-registration of DEMs; (2) spatial resolution of digital elevation models (DEMs); (3) spatial vegetation structure; and (4) terrain slope. The results show that accurate co-registration between ASTER GDEM and the National Elevation Dataset (NED) is necessary over mountainous areas. The correlation between ASTER GDEM minus NED and vegetation canopy height improves from 0.328 to 0.43 when resolution is degraded from 1 arc-second to 5 arc-seconds, and further improves to 0.6 if only homogeneous vegetated areas are considered.
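The resolution-degradation effect reported above can be sketched with synthetic data (the smooth "canopy" field and the noise level are assumptions, not the study's data): block-averaging suppresses per-pixel noise and misregistration in the DEM difference while preserving the smoother vegetation signal, so the correlation with canopy height rises at coarser resolution.

```python
import numpy as np

# Synthetic sketch of degrading resolution before correlating a noisy
# DEM difference with canopy height.
def block_mean(a, k):
    """Average a 2D array over non-overlapping k x k blocks."""
    h, w = a.shape
    return a[:h - h % k, :w - w % k].reshape(h // k, k, -1, k).mean(axis=(1, 3))

rng = np.random.default_rng(1)
x = np.linspace(0.0, 3 * np.pi, 100)
canopy = 15.0 + 10.0 * np.outer(np.sin(x), np.cos(x))        # smooth height field, m
dem_diff = canopy + rng.normal(0.0, 20.0, canopy.shape)      # noisy GDEM-minus-NED proxy

r_fine = np.corrcoef(dem_diff.ravel(), canopy.ravel())[0, 1]
r_coarse = np.corrcoef(block_mean(dem_diff, 5).ravel(),
                       block_mean(canopy, 5).ravel())[0, 1]
print(f"correlation: full resolution {r_fine:.2f}, 5x degraded {r_coarse:.2f}")
```

Averaging a k×k block cuts the independent per-pixel noise standard deviation by about k while leaving the low-frequency signal nearly intact, which is the mechanism behind the 0.328-to-0.43 improvement the abstract reports.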
A Three-Line Stereo Camera Concept for Planetary Exploration
NASA Technical Reports Server (NTRS)
Sandau, Rainer; Hilbert, Stefan; Venus, Holger; Walter, Ingo; Fang, Wai-Chi; Alkalai, Leon
1997-01-01
This paper presents a low-weight stereo camera concept for planetary exploration. The camera uses three CCD lines within the image plane of one single objective. Some of the main features of the camera include: focal length 90 mm, FOV 18.5°, IFOV 78 μrad, convergence angles ±10°, radiometric dynamics 14 bit, weight 2 kg, and power consumption 12.5 W. From an orbit altitude of 250 km, the ground pixel size is 20 m x 20 m and the swath width is 82 km. The CCD line data is buffered in the camera's internal mass memory of 1 Gbit. After radiometric correction and application-dependent preprocessing, the data is compressed and ready for downlink. Through aggressive application of advanced microelectronics technologies and innovative optics, the low mass and power budgets of 2 kg and 12.5 W are achieved while still maintaining high performance. The design of the proposed light-weight camera is also general-purpose enough to be applicable to other planetary missions, such as the exploration of Mars, Mercury, and the Moon. Moreover, it is an example of excellent international collaboration on advanced technology concepts developed at DLR, Germany, and NASA's Jet Propulsion Laboratory, USA.
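The quoted ground pixel size and swath width follow directly from the listed optics and the 250-km altitude; a quick check using the standard small-angle relations (not the authors' code):

```python
import math

# ground pixel size = altitude * IFOV; swath = 2 * altitude * tan(FOV/2)
h = 250e3          # orbit altitude, m
ifov = 78e-6       # instantaneous field of view, rad
fov_deg = 18.5     # total field of view, deg

gsd = h * ifov                                        # -> 19.5 m, quoted as ~20 m
swath = 2 * h * math.tan(math.radians(fov_deg / 2))   # -> ~81.4 km, quoted as 82 km
print(f"ground pixel {gsd:.1f} m, swath {swath / 1e3:.1f} km")
```

Both values round to the 20 m x 20 m pixel and 82 km swath stated in the abstract.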
Flank vents and graben as indicators of Late Amazonian volcanotectonic activity on Olympus Mons
NASA Astrophysics Data System (ADS)
Peters, S. I.; Christensen, P. R.
2017-03-01
Previous studies have focused on large-scale features on Olympus Mons, such as its flank terraces, the summit caldera complex, and the basal escarpment and aureole deposits. Here we identify and characterize previously unrecognized and unmapped small-scale features to help further understand the volcanotectonic evolution of this enormous volcano. Using Context Camera, High Resolution Imaging Science Experiment, Thermal Emission Imaging System, High Resolution Stereo Camera Digital Terrain Model, and Mars Orbiter Laser Altimeter data, we identified and characterized the morphology and distribution of 60 flank vents and 84 grabens on Olympus Mons. We find that effusive eruptions have dominated volcanic activity on Olympus Mons in the Late Amazonian. Explosive eruptions were rare, implying volatile-poor magmas and/or a lack of magma-water interactions during the Late Amazonian. The distribution of flank vents suggests dike propagation over hundreds of kilometers and shallow magma storage. Small grabens, not previously observed in lower-resolution data, occur primarily on the lower flanks of Olympus Mons and indicate late-stage extensional tectonism. Based on superposition relationships, we conclude that Olympus Mons underwent two stages of development during the Late Amazonian: (1) primarily effusive resurfacing and formation of flank vents, followed by (2) waning effusive volcanism and graben formation and/or reactivation. This developmental sequence resembles that proposed for Ascraeus Mons and other large Martian shields, suggesting a similar geologic evolution for these volcanoes.
Accurate documentation in cultural heritage by merging TLS and high-resolution photogrammetric data
NASA Astrophysics Data System (ADS)
Grussenmeyer, Pierre; Alby, Emmanuel; Assali, Pierre; Poitevin, Valentin; Hullo, Jean-François; Smigiel, Eddie
2011-07-01
Several recording techniques are used together in cultural heritage documentation projects. The main purpose of documentation and conservation work is usually to generate geometric and photorealistic 3D models for both accurate reconstruction and visualization. The recording approach discussed in this paper is based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and criteria such as geometry, texture, accuracy, resolution, and recording and processing time are often compared. TLS techniques (time-of-flight or phase-shift systems) are often used for recording large and complex objects or sites. Point cloud generation from images by dense stereo or multi-image matching can be used as an alternative or a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low-cost one, as the acquisition system is limited to a digital camera and a few accessories. Indeed, the stereo matching process offers a cheap, flexible and accurate solution for obtaining 3D point clouds and textured models. Calibration of the camera allows the processing of distortion-free images, accurate orientation of the images, and matching at the subpixel level. The main advantage of this photogrammetric methodology is obtaining, at the same time, a point cloud (whose resolution depends on the size of the pixel on the object) and therefore an accurate meshed object with its texture. After the matching and processing steps, the resulting data can be used in much the same way as a TLS point cloud, but with much richer raster information for textures. The paper addresses the automation of the recording and processing steps, the assessment of the results, and the deliverables (e.g. PDF-3D files). Visualization aspects of the final 3D models are presented.
Two case studies with merged photogrammetric and TLS data are finally presented: the Gallo-Roman theatre of Mandeure (France), and the medieval fortress of Châtel-sur-Moselle (France), where a network of underground galleries and vaults has been recorded.
2010-01-01
open garage leading to the building interior. The UAV is positioned north of a potential ingress to the building. As the mission begins, the UAV...camera, the difficulty in detecting and navigating around obstacles using this non- stereo camera necessitated a precomputed map of all obstacles and
Rapid matching of stereo vision based on fringe projection profilometry
NASA Astrophysics Data System (ADS)
Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei
2016-09-01
As the core of stereo vision, stereo matching still presents many unsolved problems. For smooth surfaces on which feature points are not easy to extract, this paper adds a projector to the stereo vision measurement system and, based on fringe projection techniques, exploits the fact that corresponding points extracted from the left and right camera images share the same phase to achieve rapid stereo matching. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method not only broadens the application fields of optical 3D measurement technology and enriches knowledge in the field, but also offers a path toward commercialized measurement systems for practical projects, giving it significant scientific and economic value.
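The phase-equality matching criterion can be illustrated with a minimal sketch. This assumes rectified images and already-unwrapped phase maps (toy data below; the paper's actual phase-extraction procedure is not reproduced):

```python
import numpy as np

def match_by_phase(phase_left, phase_right, row, col, tol=0.01):
    """For a pixel in the left image, find the column in the same (rectified)
    row of the right image whose unwrapped phase is closest, i.e. the
    correspondence implied by fringe projection."""
    target = phase_left[row, col]
    diff = np.abs(phase_right[row] - target)
    j = int(np.argmin(diff))
    return j if diff[j] < tol else None

# Toy unwrapped phase maps: phase increases linearly across columns, with a
# known disparity of 8 pixels between the two views.
cols = np.arange(200) * 0.05
phase_l = np.tile(cols, (100, 1))
phase_r = np.tile(cols + 8 * 0.05, (100, 1))  # right view shifted

j = match_by_phase(phase_l, phase_r, row=50, col=100)
print(j)  # matching column, 8 pixels to the left: 92
```

Because the search reduces to a one-dimensional phase lookup per row, matching is fast even on featureless surfaces where intensity-based correlation fails.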
The study of stereo vision technique for the autonomous vehicle
NASA Astrophysics Data System (ADS)
Li, Pei; Wang, Xi; Wang, Jiang-feng
2015-08-01
Stereo vision technology using two or more cameras can recover 3D information from the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle judge the pavement conditions within the field of view and measure the obstacles on the road. In this paper, stereo vision technology for obstacle measurement by autonomous vehicles is studied and the key techniques are analyzed and discussed. The system hardware is built and the software debugged, and the measurement performance is demonstrated with measured data. Experiments show that the 3D structure within the field of view can be reconstructed effectively by stereo vision, providing the basis for pavement condition judgment. Compared with the navigation radar used in unmanned vehicle measurement systems, the stereo vision system has advantages such as low cost and measurement range, and thus has good application prospects.
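The distance measurement underlying such a system is rectified-stereo triangulation. A minimal sketch, with hypothetical calibration values (the paper's actual parameters are not given):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Classic rectified-stereo triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# Hypothetical calibration: 700-pixel focal length, 0.3 m baseline
z = depth_from_disparity(700, 0.3, disparity_px=21)
print(f"obstacle distance ~ {z:.1f} m")  # 10.0 m
```

In practice the disparity comes from a dense matching step over rectified image pairs; the formula above then converts each matched pixel into a 3D point for pavement and obstacle assessment.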
MRO's High Resolution Imaging Science Experiment (HiRISE): Polar Science Expectations
NASA Technical Reports Server (NTRS)
McEwen, A.; Herkenhoff, K.; Hansen, C.; Bridges, N.; Delamere, W. A.; Eliason, E.; Grant, J.; Gulick, V.; Keszthelyi, L.; Kirk, R.
2003-01-01
The Mars Reconnaissance Orbiter (MRO) is expected to launch in August 2005, arrive at Mars in March 2006, and begin the primary science phase in November 2006. MRO will carry a suite of remote-sensing instruments and is designed to routinely point off-nadir to precisely target locations on Mars for high-resolution observations. The mission will have a much higher data return than any previous planetary mission, with 34 Tbits of returned data expected in the first Mars year in the mapping orbit (255 x 320 km). The HiRISE camera features a 0.5 m telescope, 12 m focal length, and 14 CCDs. We expect to acquire approximately 10,000 observations in the primary science phase (approximately 1 Mars year), including approximately 2,000 images for 1,000 stereo targets. Each observation will be accompanied by an approximately 6 m/pixel image over a 30 x 45 km region acquired by MRO's context imager. Many HiRISE images will be full resolution in the center portion of the swath width and binned (typically 4x4) on the sides. This provides two levels of context, stepping out from 0.3 m/pixel to 1.2 m/pixel to 6 m/pixel (at 300 km altitude). We expect to cover approximately 1% of Mars at better than 1.2 m/pixel, approximately 0.1% at 0.3 m/pixel, approximately 0.1% in 3 colors, and approximately 0.05% in stereo. Our major challenge is to find the key contacts, exposures and type morphologies to observe.
Object Detection using the Kinect
2012-03-01
Kinect camera and point cloud data from the Kinect’s structured light stereo system (figure 1). We obtain reasonable results using a single prototype...same manner we present in this report. For example, at Willow Garage , Steder uses a 3-D feature he developed to classify objects directly from point...detecting backpacks using the data available from the Kinect sensor. 4 3.1 Point Cloud Filtering Dense point clouds derived from stereo are notoriously
Multi-camera synchronization core implemented on USB3 based FPGA platform
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Centered on Awaiba's NanEye CMOS image sensor family and a FPGA platform with USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, stereo vision or 3D reconstruction with multiple cameras, as well as applications requiring pulsed illumination, require multiple cameras to be synchronized. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame against a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of smaller than 3 mm diameter 3D stereo vision equipment in the medical endoscopic context, such as endoscopic surgical robotics or minimally invasive surgery.
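The abstract does not give the regulation law itself; a minimal proportional-control sketch of the idea, with entirely hypothetical gain and supply-voltage limits, might look like:

```python
def adjust_supply(v_now, line_period_us, target_period_us,
                  gain=0.002, v_min=1.6, v_max=2.0):
    """One step of a (hypothetical) regulation loop: change the sensor supply
    voltage in proportion to the line-period error. Assumes the sensor's
    self-timed clock speeds up with voltage, so a period that is too long
    calls for a higher supply."""
    error = line_period_us - target_period_us
    v_new = v_now + gain * error
    return min(max(v_new, v_min), v_max)  # stay inside the safe supply range

v = 1.8
for measured in (10.6, 10.3, 10.1):      # measured line period converging on 10 us
    v = adjust_supply(v, measured, target_period_us=10.0)
    print(f"supply -> {v:.4f} V")
```

In the paper's FPGA implementation this loop runs per camera in hardware, with the Slave cameras taking their target period from the Master instead of a fixed constant.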
Image synchronization for 3D application using the NanEye sensor
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Based on Awaiba's NanEye CMOS image sensor family and a FPGA platform with USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal form factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, stereo vision or 3D reconstruction with multiple cameras, as well as applications requiring pulsed illumination, require multiple cameras to be synchronized. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame against a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of smaller than 3 mm diameter 3D stereo vision equipment in the medical endoscopic context, such as endoscopic surgical robotics or minimally invasive surgery.
NASA Astrophysics Data System (ADS)
Marinas, Javier; Salgado, Luis; Arróspide, Jon; Camplani, Massimo
2012-01-01
In this paper we propose an innovative method for the automatic detection and tracking of road traffic signs using an onboard stereo camera. It involves a combination of monocular and stereo analysis strategies to increase the reliability of the detections such that it can boost the performance of any traffic sign recognition scheme. Firstly, an adaptive color and appearance based detection is applied at single camera level to generate a set of traffic sign hypotheses. In turn, stereo information allows for sparse 3D reconstruction of potential traffic signs through a SURF-based matching strategy. Namely, the plane that best fits the cloud of 3D points traced back from feature matches is estimated using a RANSAC based approach to improve robustness to outliers. Temporal consistency of the 3D information is ensured through a Kalman-based tracking stage. This also allows for the generation of a predicted 3D traffic sign model, which is in turn used to enhance the previously mentioned color-based detector through a feedback loop, thus improving detection accuracy. The proposed solution has been tested with real sequences under several illumination conditions and in both urban areas and highways, achieving very high detection rates in challenging environments, including rapid motion and significant perspective distortion.
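The RANSAC plane estimation described above can be sketched as follows. This is a generic RANSAC plane fit on toy data; the paper's SURF matching, thresholds and Kalman tracking are not reproduced:

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.02, rng=None):
    """Fit a plane to a 3D point cloud robustly: repeatedly fit a plane to 3
    random points and keep the hypothesis with the most inliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Toy cloud: 90 points near the plane z = 0.5 (the "sign"), plus 10 gross outliers
rng = np.random.default_rng(1)
plane_pts = np.column_stack([rng.uniform(-1, 1, (90, 2)),
                             0.5 + rng.normal(0, 0.005, 90)])
outliers = rng.uniform(-1, 1, (10, 3))
cloud = np.vstack([plane_pts, outliers])

inliers = ransac_plane(cloud)
print(inliers.sum())  # most of the 90 planar points are recovered
```

The inlier mask identifies the 3D feature matches that belong to the sign's plane, while the mismatched (outlier) points are discarded before pose estimation.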
NASA Astrophysics Data System (ADS)
Preusker, Frank; Stark, Alexander; Oberst, Jürgen; Matz, Klaus-Dieter; Gwinner, Klaus; Roatsch, Thomas; Watters, Thomas R.
2017-08-01
We selected approximately 10,500 narrow-angle camera (NAC) and wide-angle camera (WAC) images of Mercury acquired from orbit by MESSENGER's Mercury Dual Imaging System (MDIS) with an average resolution of 150 m/pixel to compute a digital terrain model (DTM) for the H6 (Kuiper) quadrangle, which extends from 22.5°S to 22.5°N and from 288.0°E to 360.0°E. From the images, we identified about 21,100 stereo image combinations consisting of at least three images each. We applied sparse multi-image matching to derive approximately 250,000 tie-points representing 50,000 ground points. We used the tie-points to carry out a photogrammetric block adjustment, which improves the image pointing and the accuracy of the ground point positions in three dimensions from about 850 m to approximately 55 m. We then applied high-density (pixel-by-pixel) multi-image matching to derive about 45 billion tie-points. Benefitting from improved image pointing data achieved through photogrammetric block adjustment, we computed about 6.3 billion surface points. By interpolation, we generated a DTM with a lateral spacing of 221.7 m/pixel (192 pixels per degree) and a vertical accuracy of about 30 m. The comparison of the DTM with Mercury Laser Altimeter (MLA) profiles obtained over four years of MESSENGER orbital operations reveals that the DTM is geometrically very rigid. It may be used as a reference to identify MLA outliers (e.g., when MLA operated at its ranging limit) or to map offsets of laser altimeter tracks, presumably caused by residual spacecraft orbit and attitude errors. After the relevant outlier removals and corrections, MLA profiles show excellent agreement with topographic profiles from H6, with a root mean square height difference of only 88 m.
Full-parallax 3D display from stereo-hybrid 3D camera system
NASA Astrophysics Data System (ADS)
Hong, Seokmin; Ansari, Amir; Saavedra, Genaro; Martinez-Corral, Manuel
2018-04-01
In this paper, we propose an innovative approach to producing microimages ready to display on an integral-imaging monitor. Our main contribution is the use of a stereo-hybrid 3D camera system to pick up a pair of 3D data sets and compose a denser point cloud. An intrinsic difficulty is that the hybrid sensors are dissimilar and therefore must be equalized. The processed data then allow an integral image to be generated by computationally projecting the information through a virtual pinhole array. We illustrate this procedure with imaging experiments that provide microimages of enhanced quality. After projection of these microimages onto the integral-imaging monitor, 3D images are produced with large parallax and a wide viewing angle.
Supporting lander and rover operation: a novel super-resolution restoration technique
NASA Astrophysics Data System (ADS)
Tao, Yu; Muller, Jan-Peter
2015-04-01
Higher-resolution imaging data are always desirable for critical rover engineering operations such as landing site selection, path planning, and optical localisation. For current Mars missions, 25 cm HiRISE images have been widely used by the MER and MSL engineering teams for rover path planning and location registration/adjustment. However, 25 cm resolution is not sufficient to view individual rocks (≤2 m in size) or to visualise the types of sedimentary features that rover onboard cameras might observe. Moreover, due to various physical constraints of the imaging instruments themselves (e.g. telescope size and mass), spatial resolution must be traded off against bandwidth, which means that future imaging systems are likely to remain limited to resolving features larger than 25 cm. We have developed a novel super-resolution algorithm/pipeline that restores a higher-resolution image from the non-redundant sub-pixel information contained in multiple lower-resolution raw images [Tao & Muller 2015]. We demonstrate, with experiments using 5-10 overlapped 25 cm HiRISE images for MER-A, MER-B and MSL, the restoration of 5-10 cm super-resolution images that can be directly compared with rover imagery at a range of 5 metres from the rover cameras, but which in our case can be used to visualise features many kilometres away from the actual rover traverse. We also demonstrate how these super-resolution images, together with image understanding software, can be used to quantify rock size-frequency distributions and to measure sedimentary rock layers at several critical sites, with comparison against rover orthorectified image mosaics demonstrating the value of super-resolution restoration for supporting future lander and rover operations.
We present the potential of super-resolution for virtual exploration of the ~400 HiRISE areas which have been viewed 5 or more times, and the potential application of this technique to all ESA ExoMars Trace Gas Orbiter CaSSiS stereo, multi-angle and colour camera images from 2017 onwards. Acknowledgements: The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 312377 PRoViDE.
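The abstract does not detail the restoration pipeline itself; the classical shift-and-add idea behind multi-image super-resolution can be sketched as below. This toy version assumes the sub-pixel shifts are integer offsets on the fine grid and known exactly, which real pipelines must estimate:

```python
import numpy as np

def shift_and_add(low_res_frames, shifts, factor):
    """Naive super-resolution: place each low-res frame onto a finer grid at
    its known sub-pixel offset and average overlapping samples. A textbook
    sketch, not the pipeline used in the paper."""
    h, w = low_res_frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(low_res_frames, shifts):
        ys = np.arange(h) * factor + dy
        xs = np.arange(w) * factor + dx
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

# Four 2x-shifted copies of a toy scene fill the 2x-finer grid completely
rng = np.random.default_rng(2)
scene = rng.uniform(0, 1, (8, 8))
frames = [scene] * 4
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]  # offsets on the fine grid
hi = shift_and_add(frames, shifts, factor=2)
print(hi.shape)  # (16, 16): every fine-grid pixel is sampled
```

The non-redundant information exploited by the real pipeline comes precisely from such sub-pixel offsets between overlapping HiRISE acquisitions.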
MEMORIS, a Wide-Angle Camera for BepiColombo
NASA Astrophysics Data System (ADS)
Cremonese, G.; Memoris Team
In response to ESA's Announcement of Opportunity for the BepiColombo payload, we are working on a wide-angle camera concept named MEMORIS (MErcury MOderate Resolution Imaging System). MEMORIS will perform stereoscopic imaging of the whole Mercury surface using two different channels at +/-20 degrees from the nadir point. It will achieve a spatial resolution of 50 m per pixel at 400 km from the surface (peri-Herm), corresponding to a vertical resolution of about 75 m from the stereo performance. The scientific objectives to be addressed by MEMORIS may be identified as follows: estimation of surface age based on crater counting; crater morphology and degradation; stratigraphic sequence of geological units; identification of volcanic features and related deposits; origin of plain units from morphological observations; distribution and type of tectonic structures; determination of relative ages among the structures based on cross-cutting relationships; 3D tectonics; global mineralogical mapping of the main geological units; and identification of weathering products. The last two items will come from the multispectral capabilities of the camera, utilizing 8 to 12 (TBD) broad-band filters. MEMORIS will be equipped with a further channel devoted to observations of the tenuous exosphere. It will look at the limb on a given arc of the BepiColombo orbit, thereby observing the exosphere above a surface latitude range of 25-75 degrees in the northern hemisphere. The exosphere images will be obtained above the surface just observed by the other two channels, in an attempt to find possible relationships, as ground-based observations suggest. The exospheric channel will have four narrow-band filters centered on the sodium and potassium emissions and the adjacent continua.
The NASA 2003 Mars Exploration Rover Panoramic Camera (Pancam) Investigation
NASA Astrophysics Data System (ADS)
Bell, J. F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.; Schwochert, M.; Morris, R. V.; Athena Team
2002-12-01
The Panoramic Camera System (Pancam) is part of the Athena science payload to be launched to Mars in 2003 on NASA's twin Mars Exploration Rover missions. The Pancam imaging system on each rover consists of two major components: a pair of digital CCD cameras, and the Pancam Mast Assembly (PMA), which provides the azimuth and elevation actuation for the cameras as well as a 1.5-meter-high vantage point from which to image. Pancam is a multispectral, stereoscopic, panoramic imaging system, with a field of regard provided by the PMA that extends across 360° of azimuth and from zenith to nadir, providing a complete view of the scene around the rover. Pancam utilizes two 1024x2048 Mitel frame transfer CCD detector arrays, each having a 1024x1024 active imaging area and 32 optional additional reference pixels per row for offset monitoring. Each array is combined with optics and a small filter wheel to become one "eye" of a multispectral, stereoscopic imaging system. The optics for both cameras consist of identical 3-element symmetrical lenses with an effective focal length of 42 mm and a focal ratio of f/20, yielding an IFOV of 0.28 mrad/pixel or a rectangular FOV of 16° x 16° per eye. The two eyes are separated by 30 cm horizontally and have a 1° toe-in to provide adequate parallax for stereo imaging. The cameras are boresighted with adjacent wide-field stereo Navigation Cameras, as well as with the Mini-TES instrument. The Pancam optical design is optimized for best focus at 3 meters range, and allows Pancam to maintain acceptable focus from infinity to within 1.5 meters of the rover, with a graceful degradation (defocus) at closer ranges. Each eye also contains a small 8-position filter wheel to allow multispectral sky imaging, direct Sun imaging, and surface mineralogic studies in the 400-1100 nm wavelength region. Pancam has been designed and calibrated to operate within specifications from -55°C to +5°C.
An onboard calibration target and fiducial marks provide the ability to validate the radiometric and geometric calibration on Mars. Pancam relies heavily on use of the JPL ICER wavelet compression algorithm to maximize data return within stringent mission downlink limits. The scientific goals of the Pancam investigation are to: (a) obtain monoscopic and stereoscopic image mosaics to assess the morphology, topography, and geologic context of each MER landing site; (b) obtain multispectral visible to short-wave near-IR images of selected regions to determine surface color and mineralogic properties; (c) obtain multispectral images over a range of viewing geometries to constrain surface photometric and physical properties; and (d) obtain images of the Martian sky, including direct images of the Sun, to determine dust and aerosol opacity and physical properties. In addition, Pancam also serves a variety of operational functions on the MER mission, including (e) serving as the primary Sun-finding camera for rover navigation; (f) resolving objects on the scale of the rover wheels to distances of ~100 m to help guide navigation decisions; (g) providing stereo coverage adequate for the generation of digital terrain models to help guide and refine rover traverse decisions; (h) providing high resolution images and other context information to guide the selection of the most interesting in situ sampling targets; and (i) supporting acquisition and release of exciting E/PO products.
Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image
NASA Astrophysics Data System (ADS)
Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren
2012-01-01
The pose (position and attitude) and velocity of an in-flight projectile have a major influence on its performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses only one linear-array image, collected by a stereo vision system combining a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired using a model-based optimization algorithm. The algorithm achieves optimal estimation of the parameters by matching the stereo projection of the projectile against that of a same-size 3D model. The speed and angle of attack (AOA) can also be determined subsequently. Experiments were conducted to test the proposed method.
Design issues for stereo vision systems used on tele-operated robotic platforms
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, Jim; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-02-01
The use of tele-operated Unmanned Ground Vehicles (UGVs) for military purposes has grown significantly in recent years, with operations in both Iraq and Afghanistan. In both theaters the safety of the Soldier or technician performing the mission is improved by the large standoff distances afforded by the UGV. However, the full performance capability of the robotic system is not utilized, because the insufficient depth perception provided by the standard two-dimensional video system causes the operator to slow the mission to ensure the safety of the UGV, given the uncertainty of the scene as perceived in 2D. To address this, Polaris Sensor Technologies has developed, in a series of efforts funded by the Leonard Wood Institute at Ft. Leonard Wood, MO, a prototype Stereo Vision Upgrade (SVU) Kit for the Foster-Miller TALON IV robot which provides the operator with improved depth perception and situational awareness, allowing shorter mission times and higher success rates. Because multiple 2D cameras are replaced by stereo camera systems in the SVU Kit, and because the needs of the camera systems vary with each phase of a mission, a number of tradeoffs and design choices must be made in developing such a system for robotic tele-operation. Additionally, human factors design criteria drive optical parameters of the camera systems, which must be matched to the display system being used. The problem space for such an upgrade kit will be defined, and the choices made in the development of this particular SVU Kit will be discussed.
Apollo 8 Mission Image, Farside of Moon
1968-12-21
Apollo 8, Farside of Moon. Image taken on Revolution 4. Camera Tilt Mode: Vertical Stereo. Sun Angle: 13. Original Film Magazine was labeled D. Camera Data: 70 mm Hasselblad. Lens: 80 mm; F-Stop: f/2.8; Shutter Speed: 1/250 second. Film Type: Kodak SO-3400 Black and White, ASA 40. Flight Date: December 21-27, 1968.
FieldSAFE: Dataset for Obstacle Detection in Agriculture.
Kragh, Mikkel Fly; Christiansen, Peter; Laursen, Morten Stigaard; Larsen, Morten; Steen, Kim Arild; Green, Ole; Karstoft, Henrik; Jørgensen, Rasmus Nyholm
2017-11-09
In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.
Evaluation of off-road terrain with static stereo and monoscopic displays
NASA Technical Reports Server (NTRS)
Yorchak, John P.; Hartley, Craig S.
1990-01-01
The National Aeronautics and Space Administration is currently funding research into the design of a Mars rover vehicle. This unmanned rover will be used to explore a number of scientific and geologic sites on the Martian surface. Since the rover cannot be driven from Earth in real time, due to lengthy communication time delays, a locomotion strategy that optimizes vehicle range and minimizes potential risk must be developed. In order to assess the degree of on-board artificial intelligence (AI) required for a rover to carry out its mission, researchers conducted an experiment to define a no-AI baseline. In the experiment, 24 subjects, divided into stereo and monoscopic groups, were shown video snapshots of four terrain scenes. The subjects' task was to choose a suitable path for the vehicle through each of the four scenes. Paths were scored based on distance travelled and hazard avoidance. Study results are presented with respect to: (1) risk versus range; (2) stereo versus monocular video; (3) vehicle camera height; and (4) camera field of view.
First stereo video dataset with ground truth for remote car pose estimation using satellite markers
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Pierini, Marco
2018-04-01
Leading causes of PTW (Powered Two-Wheeler) crashes and near misses in urban areas are failed or delayed prediction of the changing trajectories of other vehicles. Regrettably, misperception by both car drivers and motorcycle riders results in fatal or serious consequences for riders. Intelligent vehicles could provide early warning about possible collisions, helping to avoid the crash. There is evidence that stereo cameras can be used to estimate the heading angle of other vehicles, which is key to anticipating their imminent location, but there is limited heading ground-truth data available in the public domain. Consequently, we employed a marker-based technique to create ground truth for car pose and built a dataset* for computer vision benchmarking purposes. This dataset of a moving vehicle, collected from a statically mounted stereo camera, is a simplification of a complex and dynamic reality, and serves as a test bed for car pose estimation algorithms. The dataset contains the accurate pose of the moving obstacle and realistic imagery including texture-less and non-Lambertian surfaces (e.g. reflectance and transparency).
Opportunity's Surroundings on Sol 1818 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
Left-eye and right-eye views of a color stereo pair for PIA11846 [figures removed for brevity, see original site]. NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,818th Martian day, or sol, of Opportunity's surface mission (March 5, 2009). South is at the center; north is at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 80.3 meters (263 feet) southward earlier on that sol. Tracks from the drive recede northward in this view. The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.
NASA Astrophysics Data System (ADS)
Wang, Huan-huan; Wang, Jian; Liu, Feng; Cao, Hai-juan; Wang, Xiang-jun
2014-12-01
A test environment was established to obtain experimental data for verifying the positioning model derived previously from the pinhole imaging model and the theory of binocular stereo vision measurement. The model requires that the optical axes of the two cameras meet at one point, which is defined as the origin of the world coordinate system, thus simplifying and optimizing the positioning model. The experimental data are processed, and tables and charts are given comparing the positions of objects computed with the positioning model against DGPS reference measurements with an accuracy of 10 centimeters. Sources of error in the visual measurement model are analyzed, and the effects of errors in the camera and system parameters on the accuracy of the positioning model are examined, based on error transfer and synthesis rules. It is concluded that the measurement accuracy of surface surveillance based on binocular stereo vision measurement is better than that of surface movement radars, ADS-B (Automatic Dependent Surveillance-Broadcast) and MLAT (Multilateration).
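The intersecting-axes geometry underlying such a positioning model can be illustrated with a minimal 2D triangulation sketch (a generic illustration, not the paper's implementation; the camera positions and bearing angles below are placeholders):

```python
import numpy as np

def triangulate_bearings(c1, theta1, c2, theta2):
    """Locate a target from two camera centers and the bearing angle
    (measured from the +x axis) at which each camera sees it, by
    intersecting the rays c1 + t1*d1 and c2 + t2*d2."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve c1 + t1*d1 = c2 + t2*d2 for the ray parameters t1, t2
    t = np.linalg.solve(np.column_stack([d1, -d2]),
                        np.asarray(c2, float) - np.asarray(c1, float))
    return np.asarray(c1, float) + t[0] * d1

# Cameras 2 m apart, optical axes converging; target 2 m ahead of the midpoint
p = triangulate_bearings((-1.0, 0.0), np.arctan2(2.0, 1.0),
                         (1.0, 0.0), np.arctan2(2.0, -1.0))
```

Placing the convergence point at the world origin, as the model requires, reduces the calibration to the two camera positions and their bearing offsets.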
Study on portable optical 3D coordinate measuring system
NASA Astrophysics Data System (ADS)
Ren, Tongqun; Zhu, Jigui; Guo, Yinbiao
2009-05-01
A portable optical 3D coordinate measuring system based on digital Close Range Photogrammetry (CRP) technology and binocular stereo vision theory is researched. Three high-stability infrared LEDs are set on a hand-held target to provide measurable features and establish the target coordinate system. Field directional calibration based on ray intersection is performed, using a reference ruler, for the convergent binocular measurement system composed of two cameras. The hand-held target, controlled over a Bluetooth wireless link, is moved freely to implement contact measurement. The position of the ceramic contact ball is pre-calibrated accurately. The coordinates of the target feature points are obtained with the binocular stereo vision model from the stereo image pairs taken by the cameras. Combining radius compensation for the contact ball with residual error correction, the object point can be resolved by a transfer of axes using the target coordinate system as an intermediary. This system is suitable for on-site large-scale measurement because of its excellent portability, high precision, wide measuring volume, great adaptability and high degree of automation. Tests show that the measuring precision is close to +/-0.1 mm/m.
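The contact-ball radius compensation step can be sketched as follows (a hedged, generic illustration, not the paper's algorithm; it assumes the local surface normal is known from the measured geometry):

```python
import numpy as np

def contact_point(ball_center, surface_normal, ball_radius):
    """Offset the calibrated probe-ball center along the unit surface
    normal to recover the actual contact point on the workpiece."""
    n = np.asarray(surface_normal, float)
    n = n / np.linalg.norm(n)
    return np.asarray(ball_center, float) - ball_radius * n

# A 0.5 mm radius ball resting on a horizontal surface: the measured
# center sits one radius above the true contact point
p = contact_point((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), 0.5)
```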
Fiber optic cable-based high-resolution, long-distance VGA extenders
NASA Astrophysics Data System (ADS)
Rhee, Jin-Geun; Lee, Iksoo; Kim, Heejoon; Kim, Sungjoon; Koh, Yeon-Wan; Kim, Hoik; Lim, Jiseok; Kim, Chur; Kim, Jungwon
2013-02-01
Remote transfer of high-resolution video is finding more applications in detached-display settings for large facilities such as theaters, sports complexes, airports, and security facilities. Active optical cables (AOCs) provide a promising approach for extending both the transmittable resolution and the distance beyond what standard copper-based cables can reach. In addition to standard digital formats such as HDMI, high-resolution, long-distance transfer of VGA-format signals is important for applications where high-resolution analog video ports must also be supported, such as military/defense applications and high-resolution video camera links. In this presentation we describe the development of a compressionless, high-resolution (up to WUXGA, 1920x1200), long-distance (up to 2 km) VGA extender based on a serialization technique. We employed asynchronous serial transmission and clock regeneration techniques, which enable lower-cost implementation of VGA extenders by removing the need for clock transmission and large memory at the receiver. Two 3.125-Gbps transceivers are used in parallel to meet the required maximum video data rate of 6.25 Gbps. As the data are transmitted asynchronously, a 24-bit pixel clock time stamp is employed to regenerate the video pixel clock accurately at the receiver side. In parallel with the video information, stereo audio and RS-232 control signals are transmitted as well.
Global Mosaics of Pluto and Charon
2017-07-14
Global mosaics of Pluto and Charon projected at 300 meters (985 feet) per pixel that have been assembled from most of the highest resolution images obtained by the Long-Range Reconnaissance Imager (LORRI) and the Multispectral Visible Imaging Camera (MVIC) onboard New Horizons. Transparent, colorized stereo topography data generated for the encounter hemispheres of Pluto and Charon have been overlain on the mosaics. Terrain south of about 30°S on Pluto and Charon was in darkness leading up to and during the flyby, so is shown in black. "S" and "T" respectively indicate Sputnik Planitia and Tartarus Dorsa on Pluto, and "C" indicates Caleuche Chasma on Charon. All feature names on Pluto and Charon are informal. https://photojournal.jpl.nasa.gov/catalog/PIA21862
Planetary science: are there active glaciers on Mars?
Gillespie, Alan R; Montgomery, David R; Mushkin, Amit
2005-12-08
Head et al. interpret spectacular images from the Mars Express high-resolution stereo camera as evidence of geologically recent rock glaciers in Tharsis and of a piedmont ('hourglass') glacier at the base of a 3-km-high massif east of Hellas. They attribute growth of the low-latitude glaciers to snowfall during periods of increased spin-axis obliquity. The age of the hourglass glacier, considered to be inactive and slowly shrinking beneath a debris cover in the absence of modern snowfall, is estimated to be more than 40 Myr. Although we agree that the maximum glacier extent was climatically controlled, we find evidence in the images to support local augmentation of accumulation from snowfall through a mechanism that does not require climate change on Mars.
Jordt, Anne; Zelenka, Claudius; von Deimling, Jens Schneider; Koch, Reinhard; Köser, Kevin
2015-12-05
Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Here, single-camera based methods for bubble stream observation have become an important tool, as they help estimating flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble to the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce extra uncertainties. In this article, we introduce our wide baseline stereo-camera deep-sea sensor bubble box that overcomes these limitations, as it observes bubbles from two orthogonal directions using calibrated cameras. Besides the setup and the hardware of the system, we discuss appropriate calibration and the different automated processing steps deblurring, detection, tracking, and 3D fitting that are crucial to arrive at a 3D ellipsoidal shape and rise speed of each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large scale acoustic surveys. We demonstrate and evaluate the wide baseline stereo measurement model using a controlled test setup with ground truth information.
NASA Astrophysics Data System (ADS)
Wang, Binbin; Socolofsky, Scott A.
2015-10-01
Development, testing, and application of a deep-sea, high-speed, stereoscopic imaging system are presented. The new system is designed for field-ready deployment, focusing on measurement of the characteristics of natural seep bubbles and droplets with high-speed and high-resolution image capture. The stereo view configuration allows precise evaluation of the physical scale of the moving particles in image pairs. Two laboratory validation experiments (a continuous bubble chain and an airstone bubble plume) were carried out to test the calibration procedure, performance of image processing and bubble matching algorithms, three-dimensional viewing, and estimation of bubble size distribution and volumetric flow rate. The results showed that the stereo view was able to improve the individual bubble size measurement over the single-camera view by up to 90% in the two validation cases, with the single-camera being biased toward overestimation of the flow rate. We also present the first application of this imaging system in a study of natural gas seeps in the Gulf of Mexico. The high-speed images reveal the rigidity of the transparent bubble interface, indicating the presence of clathrate hydrate skins on the natural gas bubbles near the source (lowest measurement 1.3 m above the vent). We estimated the dominant bubble size at the seep site Sleeping Dragon in Mississippi Canyon block 118 to be in the range of 2-4 mm and the volumetric flow rate to be 0.2-0.3 L/min during our measurements from 17 to 21 July 2014.
3D terrain reconstruction using Chang’E-3 PCAM images
NASA Astrophysics Data System (ADS)
Chen, Wangli; Zeng, Xingguo; Zhang, Hongbo
2017-10-01
In order to improve understanding of the topography of the Chang'E-3 landing site, 3D terrain models were reconstructed using PCAM images. PCAM (the panoramic camera) is a stereo camera system with a 27 cm baseline on board the Yutu rover. It obtained panoramic images at four detection sites and can achieve a resolution of 1.48 mm/pixel at 10 m, so the PCAM images reveal fine details of the detection region. In the method, SIFT is employed for feature description and feature matching. In addition to the collinearity equations, the measured baseline of the stereo system is used in bundle adjustment to solve the orientation parameters of all images. Pair-wise depth map computation is then applied for dense surface reconstruction. Finally, a DTM of the detection region is generated, covering an area with a radius of about 20 m centered at the location of the camera. By design, each wheel of the Yutu rover leaves three tracks on the lunar surface, with a width of 15 cm between the first and third track, and these tracks are clear and distinguishable in the images. We therefore chose the second detection site, where the wheel tracks are most recognizable, to evaluate the accuracy of the DTM. We measured the width of the wheel tracks every 1.5 m out from the center of the detection region, obtaining 13 measurements; areas where the wheel tracks are ambiguous were avoided. The result shows that the mean wheel track width is 0.155 m with a standard deviation of 0.007 m. In general, the closer to the center, the more accurate the measurement of wheel width. This is because image deformation grows with distance from the camera, which degrades DTM quality in far areas. In our work, images of the four detection sites were adjusted independently, meaning there are no tie points between different sites, so deviations may exist between the locations of the same object measured from DTMs of adjacent detection sites.
2017-07-14
On July 14, 2015, NASA's New Horizons spacecraft made its historic flight through the Pluto system. This detailed, high-quality global mosaic of Pluto's largest moon, Charon, was assembled from nearly all of the highest-resolution images obtained by the Long-Range Reconnaissance Imager (LORRI) and the Multispectral Visible Imaging Camera (MVIC) on New Horizons. The mosaic is the most detailed and comprehensive global view yet of Charon's surface using New Horizons data. It includes topography data of the hemisphere visible to New Horizons during the spacecraft's closest approach. The topography is derived from digital stereo-image mapping tools that measure the parallax -- or the difference in the apparent relative positions -- of features on the surface obtained at different viewing angles during the encounter. Scientists use these parallax displacements of high and low terrain to estimate landform heights. The global mosaic has been overlain with transparent, colorized topography data wherever stereo data is available. Terrain south of about 30°S was in darkness leading up to and during the flyby, so is shown in black. All feature names on Pluto and Charon are informal. Standing out on Charon is Caleuche Chasma ("C") in the far north, an enormous trough at least 350 kilometers (nearly 220 miles) long and reaching 14 kilometers (8.5 miles) deep -- more than seven times as deep as the Grand Canyon. https://photojournal.jpl.nasa.gov/catalog/PIA21860
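The parallax-to-height step described above can be sketched with the standard linearized stereo relation (a generic illustration; the focal length, baseline and range values below are placeholders, not New Horizons parameters):

```python
def height_from_parallax(dp_px, range_km, baseline_km, focal_px):
    """Linearized stereo relation: disparity p = f*B/Z, so a small
    parallax displacement dp corresponds to a height difference
    dZ ~= Z**2 * dp / (f * B)."""
    return range_km ** 2 * dp_px / (focal_px * baseline_km)

# A 2-pixel parallax displacement seen from 10 km range with a 1 km
# effective baseline and a 1000-pixel focal length
dz = height_from_parallax(2.0, 10.0, 1.0, 1000.0)
```

The quadratic dependence on range is why height precision falls off quickly for distant terrain, and why the encounter hemisphere (imaged at closest approach) carries the usable stereo topography.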
Passive perception system for day/night autonomous off-road navigation
NASA Astrophysics Data System (ADS)
Rankin, Arturo L.; Bergh, Charles F.; Goldberg, Steven B.; Bellutta, Paolo; Huertas, Andres; Matthies, Larry H.
2005-05-01
Passive perception of terrain features is a vital requirement for military-related unmanned autonomous vehicle operations, especially under electromagnetic signature management conditions. As a member of Team Raptor, the Jet Propulsion Laboratory developed a self-contained passive perception system under the DARPA-funded PerceptOR program. An environmentally protected forward-looking sensor head was designed and fabricated in-house to straddle an off-the-shelf pan-tilt unit. The sensor head contained three color cameras for multi-baseline daytime stereo ranging, a pair of cooled mid-wave infrared cameras for nighttime stereo ranging, and supporting electronics to synchronize captured imagery. Narrow-baseline stereo provided improved range data density in cluttered terrain, while wide-baseline stereo provided more accurate ranging for operation at higher speeds in relatively open areas. The passive perception system processed stereo images and output terrain maps containing elevation, terrain type, and detected hazards over a local area network. A novel software architecture was designed and implemented to distribute the data processing on a 533 MHz quad 7410 PowerPC single board computer under the VxWorks real-time operating system. This architecture, which is general enough to operate on N processors, has subsequently been tested on Pentium-based processors under Windows and Linux, and a SPARC-based processor under Unix. The passive perception system was operated during FY04 PerceptOR program evaluations at Fort A. P. Hill, Virginia, and Yuma Proving Ground, Arizona. This paper discusses the Team Raptor passive perception system hardware and software design, implementation, and performance, and describes a road map to faster and improved passive perception.
Assessment of HRSC Digital Terrain Models Produced for the South Polar Residual Cap
NASA Astrophysics Data System (ADS)
Putri, Alfiah Rizky Diana; Sidiropoulos, Panagiotis; Muller, Jan-Peter
2017-04-01
The current Digital Terrain Models available for Mars are the NASA MOLA (Mars Orbiter Laser Altimeter) Digital Terrain Models, with an average resolution of 112 m/pixel (512 pixels/degree) for the polar region. The ESA/DLR High Resolution Stereo Camera (HRSC) is currently orbiting Mars and mapping its surface, 98% at a resolution of ≤100 m/pixel and 100% at lower resolution [1]. It is possible to produce Digital Terrain Models from HRSC images using various methods. In this study, we use the method developed by Kim and Muller [2], which combines the VICAR open source program with photogrammetry software from DLR (Deutsches Zentrum für Luft- und Raumfahrt) and image matching based on the GOTCHA (Gruen-Otto-Chau) algorithm [3]. Digital Terrain Models have been processed over the South Pole, with emphasis on areas around the South Polar Residual Cap, from High Resolution Stereo Camera images [4]. DTMs have been produced for 31 orbits out of the 149 polar orbits available. This study analyses the quality of the DTMs, including an assessment of elevation accuracy against the MOLA MEGDR (Mission Experiment Gridded Data Records), which comprises roughly 42 million MOLA PEDR (Precision Experiment Data Records) points between latitudes 78°S and 90°S. The issues encountered in the production of the Digital Terrain Models are described, and the statistical results and assessment method are presented. The resultant DTMs will be accessible via http://i-Mars.eu/web-GIS. References: [1] Neukum, G. et al., 2004. Mars Express: The Scientific Payload, pp. 17-35. [2] Kim, J.-R. and J.-P. Muller, 2009. PSS vol. 57, pp. 2095-2112. [3] Shin, D. and J.-P. Muller, 2012. Pattern Recognition, 45(10), 3795-3809. [4] Putri, A. R. D., et al., Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLI-B4, 463-469. Acknowledgements: The research leading to these results has received partial funding from the STFC "MSSL Consolidated Grant" ST/K000977/1 and partial support from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement no. 607379. The first author would like to acknowledge support for her studies from the Indonesia Endowment Fund for Education (LPDP), Ministry of Finance, Republic of Indonesia. The authors would also like to thank Alexander Dumke (Freie Universitaet Berlin) for providing the EXTORI exterior orientation elements, which were critical in producing accurate geolocations.
A Small Lunar Rover for Reconnaissance in the Framework of ExoGeoLab Project, System Level Design
NASA Astrophysics Data System (ADS)
Noroozi, A.; Ha, L.; van Dalen, P.; Maas, A.; de Raedt, S.; Poulakis, P.; Foing, B. H.
2009-04-01
Scientific research is based on accurate measurement and so depends on the availability of accurate instruments. In planetary science and exploration it is often difficult, or in some cases impossible, to gather accurate and direct information from a specified target. It is important to gather as much information as possible in order to analyze it and extract scientific data. One possibility is to send equipment to the target and perform the measurements locally; the measurement data are then sent to a base station for further analysis. Before sending measurement instruments to a measurement point, a good estimate of the environmental conditions there is needed. This information can be collected by sending a pilot rover to the area of interest to gather visual information. The aim of this work is to develop a tele-operated small rover, of Google Lunar X-Prize (GLXP) class, capable of surviving the Moon environment and performing reconnaissance to provide visual information to the base station of the ExoGeoLab project of ESA/ESTEC. Using state-of-the-art developments in electronics, software and communication technologies allows us to increase accuracy while reducing size and power consumption. The target mass of the rover is less than 5 kg and its target dimensions are 300 x 60 x 80 mm. The small size of the rover gives access to places which are normally out of reach. The required operating power and launch cost are considerably reduced compared with large rovers, which makes the mission more cost effective. The mission of the rover is to capture high-resolution images and transmit them to the base station. The data link between rover and base station is wireless, and the rover must supply its own energy. The base station can be either a habitat or a relay station. Navigation of the rover is controlled by an operator in a habitat, who has a view from the stereo camera on the rover. This stereo camera provides image information to the base and opens the possibility of future autonomous navigation using three-dimensional image recognition software. As the navigation view should have minimum delay, the resolution of the stereo camera is not very high. The rover design is divided into four work packages: remote imaging, remote manual navigation, locomotion and structure, and the power system. The remote imaging work package is responsible for capturing high-resolution images, transmitting image data to the base station via the wireless link, and storing the data for further processing. Remote manual navigation handles the tele-operation: it collects stereo images and navigation sensor readouts, transmits stereo images and navigation data to the base station via the wireless link, displays the images and sensor status in real time on the operator's monitor, receives commands from the operator's joystick, transfers navigation commands to the rover via the wireless link, and operates the actuators accordingly. Locomotion and structure covers the design of the body structure and locomotion system based on the specifications of the Moon environment. The target specifications of the rover locomotion system are a maximum speed of 200 m/h, a maximum acceleration of 0.554 m/s2, and a maximum slope angle of 20°. The power system for the rover includes the solar panel, batteries and power electronics mounted on the rover. The energy storage in the rover should support a minimum of 500 m of movement on the Moon, and should also provide energy for the other sub-systems to communicate, navigate and transmit data. Considering the harsh environmental conditions on the Moon, such as dust, the temperature range and radiation, it is vital for the mission that these issues are considered in the design in order to correctly dimension reliability and, if necessary, redundancy. Corrosion-resistant material should be used to ensure the survival of the mechanical structure, moving parts and other sensitive parts such as electronics. Large temperature variations should be considered in the design of the structure and electronics, and finally the electronics should be radiation protected.
Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline
NASA Technical Reports Server (NTRS)
Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor
2010-01-01
A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit to improve both the accuracy and robustness of the estimate. A stepwise regression method is applied to estimate the relaxed weight of each observation.
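The idea of weighting each observation by its goodness of fit can be sketched as a per-pixel iteratively reweighted merge (a hedged illustration of the general technique only, not the ASP's or the paper's stepwise-regression code):

```python
import numpy as np

def robust_merge(obs, n_iter=5, k=1.5):
    """Per-pixel robust combination of stacked DEM observations.
    obs: array of shape (n_obs, H, W); NaN marks missing data.
    Observations far from the consensus (noise, shadows) get low weight."""
    est = np.nanmedian(obs, axis=0)                 # robust initial estimate
    for _ in range(n_iter):
        r = obs - est                               # per-observation residual
        s = 1.4826 * np.nanmedian(np.abs(r), axis=0) + 1e-9  # MAD scale
        w = 1.0 / (1.0 + (r / (k * s)) ** 2)        # down-weight poor fits
        w = np.where(np.isnan(obs), 0.0, w)
        est = np.sum(w * np.nan_to_num(obs), axis=0) / np.sum(w, axis=0)
    return est

# Three elevation estimates for one pixel; the 5.0 m reading is an outlier
dem = robust_merge(np.array([[[1.0]], [[1.1]], [[5.0]]]))
```

The outlier's weight collapses after the first iteration, so the merged value settles near the two consistent observations rather than their three-way mean.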
NASA Astrophysics Data System (ADS)
Breytenbach, A.
2016-10-01
Conducted in the City of Tshwane, South Africa, this study set out to test the accuracy of DSMs derived locally from different remotely sensed data. VHR digital mapping camera stereo-pairs, tri-stereo imagery collected by a Pléiades satellite, and data from the TanDEM-X InSAR satellite configuration were fundamental in the construction of seamless DSM products at different postings, namely 2 m, 4 m and 12 m. The three DSMs were sampled against independent control points originating from validated airborne LiDAR data. The reference surfaces were derived from the same dense point cloud at grid resolutions corresponding to those of the samples. The absolute and relative positional accuracies were computed using well-known DEM error metrics and accuracy statistics. Overall vertical accuracies were also assessed and compared across seven slope classes and nine primary land cover classes. Although all three DSMs displayed significantly more vertical errors over waterbodies, dense natural and/or alien woody vegetation and, to a lesser degree, urban residential areas with significant canopy cover, all three surpassed their expected overall positional accuracies.
NASA Technical Reports Server (NTRS)
Mollberg, Bernard H.; Schardt, Bruton B.
1988-01-01
The Orbiter Camera Payload System (OCPS) is an integrated photographic system which is carried into earth orbit as a payload in the Space Transportation System (STS) Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC), a precision wide-angle cartographic instrument that is capable of producing high resolution stereo photography of great geometric fidelity in multiple base-to-height (B/H) ratios. A secondary, supporting system to the LFC is the Attitude Reference System (ARS), which is a dual lens Stellar Camera Array (SCA) and camera support structure. The SCA is a 70-mm film system which is rigidly mounted to the LFC lens support structure and which, through the simultaneous acquisition of two star fields with each earth-viewing LFC frame, makes it possible to determine precisely the pointing of the LFC optical axis with reference to the earth nadir point. Other components complete the current OCPS configuration as a high precision cartographic data acquisition system. The primary design objective for the OCPS was to maximize system performance characteristics while maintaining a high level of reliability compatible with Shuttle launch conditions and the on-orbit environment. The full-up OCPS configuration was launched on a highly successful maiden voyage aboard the STS Orbiter vehicle Challenger on October 5, 1984, as a major payload aboard mission STS 41-G. This report documents the system design, the ground testing, the flight configuration, and an analysis of the results obtained during the Challenger mission STS 41-G.
3D imaging and wavefront sensing with a plenoptic objective
NASA Astrophysics Data System (ADS)
Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.
2011-06-01
Plenoptic cameras have been developed over recent years as a passive method for 3D scanning. Several super-resolution algorithms have been proposed to mitigate the resolution loss associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations of the aforementioned aspects, together with two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images due to the refractive index changes associated with turbulence. These changes require high-speed processing that justifies the use of GPUs and FPGAs. Artificial sodium Laser Guide Stars (Na-LGS, 90 km high) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new contribution relating the wave optics and computer vision fields, as many authors have advocated.
Investigation of Terrain Analysis and Classification Methods for Ground Vehicles
2012-08-27
…exteroceptive terrain classifier takes exteroceptive sensor data (here, color stereo images of the terrain) as its input and returns terrain class…
…(Mishkin & Laubach, 2006), the rover cannot safely travel beyond the distance it can image with its cameras, which has been as little as 15 meters or…
…field of view roughly 44°×30°, capturing pairs of color images at 640×480 pixels each (Videre Design, 2001). Range data were extracted from the stereo…
Time for a Change; Spirit's View on Sol 1843 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
Left-eye and right-eye views of a color stereo pair for PIA11973 [figures removed for brevity, see original site]. NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, full-circle view of the rover's surroundings during the 1,843rd Martian day, or sol, of Spirit's surface mission (March 10, 2009). South is in the middle; north is at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 36 centimeters downhill earlier on Sol 1854, but had not been able to get free of ruts in soft material that had become an obstacle to getting around the northeastern corner of the low plateau called 'Home Plate.' The Sol 1854 drive, following two others in the preceding four sols that also achieved little progress in the soft ground, prompted the rover team to switch to a plan of getting around Home Plate counterclockwise, instead of clockwise. The drive direction in subsequent sols was westward past the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.
View Ahead After Spirit's Sol 1861 Drive (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[Figures removed for brevity, see original site: left-eye and right-eye views of a color stereo pair for PIA11977.] NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this stereo, 210-degree view of the rover's surroundings during the 1,861st to 1,863rd Martian days, or sols, of Spirit's surface mission (March 28 to 30, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the scene is toward the south-southwest. East is on the left. West-northwest is on the right. The rover had driven 22.7 meters (74 feet) southwestward on Sol 1861 before beginning to take the frames in this view. The drive brought Spirit past the northwestern corner of Home Plate. In this view, the western edge of Home Plate is on the portion of the horizon farthest to the left. A mound in the middle distance near the center of the view is called 'Tsiolkovsky' and is about 40 meters (about 130 feet) from the rover's position. This view is presented as a cylindrical-perspective projection with geometric seam correction.
Piecewise-Planar StereoScan: Sequential Structure and Motion using Plane Primitives.
Raposo, Carolina; Antunes, Michel; P Barreto, Joao
2017-08-09
The article describes a pipeline that receives as input a sequence of stereo images, and outputs the camera motion and a Piecewise-Planar Reconstruction (PPR) of the scene. The pipeline, named Piecewise-Planar StereoScan (PPSS), works as follows: the planes in the scene are detected for each stereo view using semi-dense depth estimation; the relative pose is computed by a new closed-form minimal algorithm that only uses point correspondences whenever plane detections do not fully constrain the motion; the camera motion and the PPR are jointly refined by alternating between discrete optimization and continuous bundle adjustment; and, finally, the detected 3D planes are segmented in images using a new framework that handles low texture and visibility issues. PPSS is extensively validated on indoor and outdoor datasets, and benchmarked against two popular point-based SfM pipelines. The experiments confirm that plane-based visual odometry is resilient to situations of small image overlap, poor texture, specularity, and perceptual aliasing where the fast LIBVISO2 pipeline fails. The comparison against VisualSfM+CMVS/PMVS shows that, for a similar computational complexity, PPSS is more accurate and provides much more compelling and visually pleasing 3D models. These results strongly suggest that plane primitives are an advantageous alternative to point correspondences for applications of SfM and 3D reconstruction in man-made environments.
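As a rough illustration of how plane primitives can constrain relative pose, the sketch below recovers (R, t) in closed form from three or more plane correspondences. The function name and the normal/offset plane parameterization are assumptions for illustration; PPSS's actual minimal algorithm (including the fallback to point correspondences) is more involved.

```python
import numpy as np

def relative_pose_from_planes(planes1, planes2):
    """Closed-form relative pose (R, t) from >= 3 plane correspondences.

    Each plane is (n, d): unit normal n and offset d with n . X = d.
    Under the motion X' = R X + t a plane maps to n' = R n and
    d' = d + n' . t, so R follows from aligning normals (Kabsch/SVD)
    and t from a small linear least-squares system.
    """
    N1 = np.array([n for n, _ in planes1])   # normals in frame 1 (rows)
    N2 = np.array([n for n, _ in planes2])   # normals in frame 2 (rows)
    d1 = np.array([d for _, d in planes1])
    d2 = np.array([d for _, d in planes2])
    # Kabsch: rotation best mapping rows of N1 onto rows of N2
    H = N1.T @ N2
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    # each correspondence gives n2_i . t = d2_i - d1_i
    t, *_ = np.linalg.lstsq(N2, d2 - d1, rcond=None)
    return R, t
```

With fewer than three planes of independent normals the system is underconstrained, which is exactly the situation where PPSS falls back to point correspondences.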
Family Of Calibrated Stereometric Cameras For Direct Intraoral Use
NASA Astrophysics Data System (ADS)
Curry, Sean; Moffitt, Francis; Symes, Douglas; Baumrind, Sheldon
1983-07-01
In order to study empirically the relative efficiencies of different types of orthodontic appliances in repositioning teeth in vivo, we have designed and constructed a pair of fixed-focus, normal case, fully-calibrated stereometric cameras. One is used to obtain stereo photography of single teeth, at a scale of approximately 2:1, and the other is designed for stereo imaging of the entire dentition, study casts, facial structures, and other related objects at a scale of approximately 1:8. Twin lenses simultaneously expose adjacent frames on a single roll of 70 mm film. Physical flatness of the film is ensured by the use of a spring-loaded metal pressure plate. The film is forced against a 3/16" optical glass plate upon which is etched an array of 16 fiducial marks which divide the film format into 9 rectangular regions. Using this approach, it has been possible to produce photographs which are undistorted for qualitative viewing and from which quantitative data can be acquired by direct digitization of conventional photographic enlargements. We are in the process of designing additional members of this family of cameras. All calibration and data acquisition and analysis techniques previously developed will be directly applicable to these new cameras.
Mapping and localization for extraterrestrial robotic explorations
NASA Astrophysics Data System (ADS)
Xu, Fengliang
In the exploration of an extraterrestrial environment such as Mars, orbital data, such as high-resolution imagery from the Mars Orbital Camera-Narrow Angle (MOC-NA), laser ranging data from the Mars Orbital Laser Altimeter (MOLA), and multi-spectral imagery from the Thermal Emission Imaging System (THEMIS), play increasingly important roles. However, these remote sensing techniques can never replace the role of landers and rovers, which can provide a close-up, inside view. Similarly, orbital mapping cannot compete with ground-level close-range mapping in resolution, precision, and speed. This dissertation addresses two tasks related to robotic extraterrestrial exploration: mapping and rover localization. Image registration is also discussed as an important aspect of both. Techniques from computer vision and photogrammetry are applied for automation and precision. Image registration is classified into three sub-categories: intra-stereo, inter-stereo, and cross-site, according to the relationship between stereo images. In intra-stereo registration, which is the most fundamental sub-category, interest point-based registration and verification by parallax continuity in the principal direction are proposed. Two other techniques, inter-scanline search with constrained dynamic programming for far-range matching and Markov Random Field (MRF) based registration for large terrain variation, are explored as possible improvements. Mapping using rover ground images mainly involves the generation of a Digital Terrain Model (DTM) and an ortho-rectified map (orthomap). The first task is to derive the spatial distribution statistics from the first panorama and model the DTM with a dual polynomial model. This model is used for interpolation of the DTM, using Kriging in the close range and a Triangular Irregular Network (TIN) in the far range. To generate a uniformly illuminated orthomap from the DTM, a least-squares-based automatic intensity balancing method is proposed.
Finally, a seamless orthomap is constructed by a split-and-merge technique: the mapped area is subdivided into small regions of image overlap, then each small map piece is processed and all of the pieces are merged together to form a seamless map. Rover localization has three stages, all of which use a least-squares adjustment procedure: (1) an initial localization, accomplished by adjustment over features common to rover images and orbital images; (2) an adjustment of image pointing angles at a single site through inter- and intra-stereo tie points; and (3) an adjustment of the rover traverse through manual cross-site tie points. The first stage is based on adjustment of observation angles of features. The second and third stages are based on bundle adjustment. In the third stage, an incremental adjustment method is proposed. Automation in rover localization includes automatic intra/inter-stereo tie point selection, computer-assisted cross-site tie point selection, and automatic verification of accuracy. (Abstract shortened by UMI.)
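The first localization stage adjusts over observation angles of features by least squares. A minimal sketch of that idea, assuming a 2D rover, known landmark map coordinates, and a plain Gauss-Newton loop (the setup and names are illustrative, not the dissertation's implementation):

```python
import numpy as np

def localize_by_bearings(landmarks, bearings, x0, iters=20):
    """Gauss-Newton least squares for a 2D rover position given azimuth
    (bearing) observations of landmarks with known map coordinates."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        dx = landmarks[:, 0] - x[0]
        dy = landmarks[:, 1] - x[1]
        predicted = np.arctan2(dy, dx)
        # angle-wrapped residuals: observed minus predicted bearing
        r = np.arctan2(np.sin(bearings - predicted),
                       np.cos(bearings - predicted))
        rho2 = dx**2 + dy**2
        # Jacobian of the predicted bearing w.r.t. (x, y)
        J = np.column_stack([dy / rho2, -dx / rho2])
        x += np.linalg.lstsq(J, r, rcond=None)[0]
    return x
```

Three or more well-spread landmarks make the normal equations well conditioned; the dissertation's actual stages operate in 3D and add bundle adjustment on top of this kind of iteration.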
Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments
Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun
2017-01-01
In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult for images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination source close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then used as the input images for stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out a real robot manipulation task. PMID:28629139
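A minimal sketch of the model-inversion step, assuming the common single-scattering image formation model I = J·t + B·(1 − t). The paper derives a more specific model for a near-camera active source; here the backscatter map B and transmission map t are simply assumed to have been estimated elsewhere, so this is illustrative only.

```python
import numpy as np

def remove_backscatter(image, backscatter, transmission, t_min=0.05):
    """Invert the single-scattering model I = J * t + B * (1 - t) to
    recover the descattered radiance J, given estimates of the
    non-uniform backscatter map B and the transmission map t."""
    t = np.clip(transmission, t_min, 1.0)   # guard against division blow-up
    J = (image - backscatter * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)             # radiance assumed normalized
```

The clipped output can then be fed directly to a stereo matcher, which is the role the descattered images play in the paper's pipeline.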
NASA Astrophysics Data System (ADS)
Hertel, R. J.; Hoilman, K. A.
1982-01-01
The effects of model vibration, camera and window nonlinearities, and aerodynamic disturbances in the optical path on the measurement of target position are examined. Window distortion, temperature and pressure changes, laminar and turbulent boundary layers, shock waves, target intensity, and target vibration are also studied. A general computer program was developed to trace optical rays through these disturbances. The use of a charge injection device camera as an alternative to the image dissector camera was examined.
4D Light Field Imaging System Using Programmable Aperture
NASA Technical Reports Server (NTRS)
Bae, Youngsam
2012-01-01
Complete depth information can be extracted by analyzing all angles of light rays emanating from a source. However, this angular information is lost in a typical 2D imaging system. In order to record this information, a standard stereo imaging system uses two cameras to obtain information from two view angles. Sometimes, more cameras are used to obtain information from more angles. However, a 4D light field imaging technique can achieve this multiple-camera effect through a single-lens camera. Two methods are available for this: one using a microlens array, and the other using a moving aperture. The moving-aperture method can obtain more complete stereo information. The existing literature suggests a modified liquid crystal panel [LC (liquid crystal) panel, similar to ones commonly used in the display industry] to achieve a moving aperture. However, LC panels cannot withstand harsh environments and are not qualified for spaceflight. In this regard, different hardware is proposed for the moving aperture. A digital micromirror device (DMD) will replace the liquid crystal. Unlike an LC panel, a DMD can be qualified for harsh environments for 4D light field imaging. This will enable an imager to record near-complete stereo information. The approach to building a proof-of-concept is to use existing, or slightly modified, off-the-shelf components. An SLR (single-lens reflex) lens system, which typically has a large aperture for fast imaging, will be modified. The lens system will be arranged so that the DMD can be integrated. The shape of the aperture will be programmed for single-viewpoint imaging, multiple-viewpoint imaging, and coded aperture imaging. The novelty lies in using a DMD instead of an LC panel to move the apertures for 4D light field imaging. The DMD uses reflecting mirrors, so light transmission loss (which would be expected from an LC panel) will be minimal. Also, the MEMS-based DMD can withstand higher temperature and pressure fluctuation than an LC panel can. 
Robots need near-complete stereo images for their autonomous navigation, manipulation, and depth approximation. The imaging system can provide visual feedback
Polarizing aperture stereoscopic cinema camera
NASA Astrophysics Data System (ADS)
Lipton, Lenny
2012-03-01
The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor (the size of the standard 35mm frame) with the means to select left and right image information. Even with the added stereoscopic capability the appearance of existing camera bodies will be unaltered.
Polarizing aperture stereoscopic cinema camera
NASA Astrophysics Data System (ADS)
Lipton, Lenny
2012-07-01
The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.
Top of Mars Rover Curiosity Remote Sensing Mast
2011-04-06
The remote sensing mast on NASA's Mars rover Curiosity holds two science instruments for studying the rover's surroundings and two stereo navigation cameras for use in driving the rover and planning rover activities.
Autonomous Rock Tracking and Acquisition from a Mars Rover
NASA Technical Reports Server (NTRS)
Maimone, Mark W.; Nesnas, Issa A.; Das, Hari
1999-01-01
Future Mars exploration missions will perform two types of experiments: science instrument placement for close-up measurement, and sample acquisition for return to Earth. In this paper we describe algorithms we developed for these tasks, and demonstrate them in field experiments using a self-contained Mars rover prototype, the Rocky 7 rover. Our algorithms perform visual servoing on an elevation map instead of image features, because the latter are subject to abrupt scale changes during the approach. This allows us to compensate for the poor odometry that results from motion on loose terrain. We demonstrate the successful grasp of a 5 cm long rock over 1 m away using 103-degree field-of-view stereo cameras, and placement of a flexible mast on a rock outcropping over 5 m away using 43-degree field-of-view stereo cameras.
Multiple pedestrian detection using IR LED stereo camera
NASA Astrophysics Data System (ADS)
Ling, Bo; Zeifman, Michael I.; Gibson, David R. P.
2007-09-01
As part of the U.S. Department of Transportation's Intelligent Vehicle Initiative (IVI) program, the Federal Highway Administration (FHWA) is conducting R&D in vehicle safety and driver information systems. There is an increasing number of applications where pedestrian monitoring is of high importance. Vision-based pedestrian detection in outdoor scenes is still an open challenge. People dress in very different colors that sometimes blend with the background, wear hats or carry bags, and stand, walk, and change directions unpredictably. Backgrounds vary, containing buildings, moving or parked cars, bicycles, street signs, signals, etc. Furthermore, existing pedestrian detection systems perform only during daytime, making it impossible to detect pedestrians at night. Under FHWA funding, we are developing a multi-pedestrian detection system using an IR LED stereo camera. This system, without using any templates, detects pedestrians through statistical pattern recognition utilizing 3D features extracted from the disparity map. A new IR LED stereo camera is being developed, which can help detect pedestrians during daytime and nighttime. Using image differencing and denoising, we have also developed new methods to estimate the disparity map of pedestrians in near real time. Our system will have a hardware interface with the traffic controller through wireless communication. Once pedestrians are detected, traffic signals at the street intersections will change phases to alert the drivers of approaching vehicles. Initial test results using images collected at a street intersection show that our system can detect pedestrians in near real time.
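For context, disparity estimation on a rectified stereo pair can be sketched with plain sum-of-absolute-differences block matching. The authors' near-real-time method based on image differencing and denoising is not specified in the abstract, so this generic (and deliberately slow) baseline is only illustrative:

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, win=2):
    """Brute-force SAD block matching on a rectified stereo pair.
    Returns an integer disparity per pixel (0 outside the valid band)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    L, R = left.astype(float), right.astype(float)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = L[y - win:y + win + 1, x - win:x + win + 1]
            # cost of each candidate disparity d: match against the
            # right-image window shifted d pixels to the left
            costs = [np.abs(patch - R[y - win:y + win + 1,
                                      x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

Real-time systems replace the per-pixel loop with incremental cost aggregation; the 3D features used for pedestrian classification would then be extracted from the resulting disparity map.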
Wide-Baseline Stereo-Based Obstacle Mapping for Unmanned Surface Vehicles
Mou, Xiaozheng; Wang, Han
2018-01-01
This paper proposes a wide-baseline stereo-based static obstacle mapping approach for unmanned surface vehicles (USVs). The proposed approach eliminates the complicated calibration work and the bulky rig of our previous binocular stereo system, and raises the ranging ability from 500 to 1000 m with an even larger baseline obtained from the motion of the USV. By integrating a monocular camera with GPS and compass information, the world locations of the detected static obstacles are reconstructed while the USV is traveling, and an obstacle map is then built. To achieve more accurate and robust performance, multiple pairs of frames are leveraged to synthesize the final reconstruction results in a weighting model. Experimental results based on our own dataset demonstrate the high efficiency of our system. To the best of our knowledge, we are the first to address the task of wide-baseline stereo-based obstacle mapping in a maritime environment. PMID:29617293
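The core geometric step in this kind of motion-baseline stereo is triangulating an obstacle from two bearing rays whose origins come from GPS and whose directions come from the camera and compass. A hedged sketch of plain midpoint triangulation (the paper additionally weights reconstructions over multiple frame pairs, which is not reproduced here):

```python
import numpy as np

def triangulate_rays(c1, r1, c2, r2):
    """Midpoint triangulation: the 3D point closest to two bearing rays
    (camera centre c, direction r), e.g. from two GPS-located frames."""
    r1 = np.asarray(r1, float) / np.linalg.norm(r1)
    r2 = np.asarray(r2, float) / np.linalg.norm(r2)
    # choose ray parameters s, t minimizing |c1 + s*r1 - (c2 + t*r2)|
    A = np.column_stack([r1, -r2])
    s, t = np.linalg.lstsq(A, np.asarray(c2, float) - c1, rcond=None)[0]
    return 0.5 * ((c1 + s * r1) + (c2 + t * r2))
```

Because range error grows with the ratio of target distance to baseline, letting the vehicle's own motion supply a baseline of hundreds of metres is what pushes the usable range toward 1000 m.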
Stereo-Video Data Reduction of Wake Vortices and Trailing Aircrafts
NASA Technical Reports Server (NTRS)
Alter-Gartenberg, Rachel
1998-01-01
This report presents stereo image theory and the corresponding image processing software developed to analyze stereo imaging data acquired for the wake-vortex hazard flight experiment conducted at NASA Langley Research Center. In this experiment, a leading Lockheed C-130 was equipped with wing-tip smokers to visualize its wing vortices, while a trailing Boeing 737 flew into the wake vortices of the leading airplane. A Rockwell OV-10A airplane, fitted with video cameras under its wings, flew at 400 to 1000 feet above and parallel to the wakes, and photographed the wake interception process for the purpose of determining the three-dimensional location of the trailing aircraft relative to the wake. The report establishes the image-processing tools developed to analyze the video flight-test data, identifies sources of potential inaccuracies, and assesses the quality of the resultant set of stereo data reduction.
MISR CMVs and Multiangular Views of Tropical Cyclone Inner-Core Dynamics
NASA Technical Reports Server (NTRS)
Wu, Dong L.; Diner, David J.; Garay, Michael J; Jovanovic, Veljko M.; Lee, Jae N.; Moroney, Catherine M.; Mueller, Kevin J.; Nelson, David L.
2010-01-01
Multi-camera stereo imaging of cloud features from the MISR (Multiangle Imaging SpectroRadiometer) instrument on NASA's Terra satellite provides accurate and precise measurements of cloud top heights (CTH) and cloud motion vector (CMV) winds. MISR observes each cloudy scene from nine viewing angles (nadir, ±26°, ±46°, ±60°, ±70°) with approximately 275-m pixel resolution. This paper provides an update on MISR CMV and CTH algorithm improvements, and explores a high-resolution retrieval of tangential winds inside the eyewall of tropical cyclones (TC). The MISR CMV and CTH retrievals from the updated algorithm are significantly improved in terms of spatial coverage and systematic errors. A new product, the 1.1-km cross-track wind, provides high accuracy and precision in measuring convective outflows. Preliminary results obtained from the 1.1-km tangential wind retrieval inside the TC eyewall show that the inner-core rotation is often faster near the eyewall, and this faster rotation appears to be related linearly to cyclone intensity.
Keszthelyi, L.; Jaeger, W.; McEwen, A.; Tornabene, L.; Beyer, R.A.; Dundas, C.; Milazzo, M.
2008-01-01
In the first 6 months of the Mars Reconnaissance Orbiter's Primary Science Phase, the High Resolution Imaging Science Experiment (HiRISE) camera has returned images sampling the diversity of volcanic terrains on Mars. While many of these features were noted in earlier imaging, they are now seen with unprecedented clarity. We find that some volcanic vents produced predominantly effusive products while others generated mostly pyroclastics. Flood lavas were emplaced in both turbulent and gentle eruptions, producing roofed channels and inflation features. However, many areas on Mars are too heavily mantled to allow meter-scale volcanic features to be discerned. In particular, the major volcanic edifices are extensively mantled, though it is possible that some of the mantle is pyroclastic material rather than atmospheric dust. Support imaging by the Context Imager (CTX) and topographic information derived from stereo imaging are both invaluable in interpreting the HiRISE data. Copyright 2008 by the American Geophysical Union.
Implicit multiplane 3D camera calibration matrices for stereo image processing
NASA Astrophysics Data System (ADS)
McKee, James W.; Burgett, Sherrie J.
1997-12-01
By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually between four and seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and are implemented in MATLAB (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin. 
This requires recording and analyzing high-speed (5-microsecond exposure time), synchronous stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line and a curve fitted to the fiber points of the second image (the fiber projection in the second image) is calculated. Each intersection point is mapped back to the 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.
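The paper's matrices map image points to user-defined 3D planes and back. A toy version of the idea, assuming only two parallel calibration planes and plain DLT-fitted homographies (the actual method uses four to seven planes plus distortion compensation, so this is a sketch of the concept rather than the published procedure):

```python
import numpy as np

def fit_homography(src, dst):
    """DLT fit of a 3x3 homography taking 2D src points to 2D dst points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)   # null vector = flattened homography

def image_point_to_ray(pixel, H_near, H_far, z_near, z_far):
    """Map one pixel through homographies onto two parallel calibration
    planes; the two 3D hits define the pixel's projection line, with no
    explicit camera parameters anywhere."""
    p = np.array([pixel[0], pixel[1], 1.0])
    def on_plane(H, z):
        q = H @ p
        return np.array([q[0] / q[2], q[1] / q[2], z])
    a = on_plane(H_near, z_near)
    b = on_plane(H_far, z_far)
    return a, b - a   # point on the ray and its direction
```

Intersecting such a ray with the corresponding curve in the second camera's image is then exactly the 2D-to-3D-and-back machinery the fiber-payout analysis relies on.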
NASA Astrophysics Data System (ADS)
van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario
2017-11-01
Although machine learning holds an enormous promise for autonomous space robots, it is currently not employed because of the inherent uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
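The learning setup can be caricatured as fitting a monocular regressor against trusted stereo depths. A deliberately tiny sketch, using linear least squares in place of whatever learner actually ran on board (the class and feature names are invented for illustration):

```python
import numpy as np

class SSLAverageDepth:
    """Self-supervised average-depth toy: while stereo works, its depth
    output supervises a monocular regressor; after a camera failure the
    regressor alone maps monocular image features to an average depth."""
    def __init__(self):
        self.w = None

    def learn(self, mono_features, stereo_avg_depths):
        # trusted stereo depths act as ground truth for least squares
        X = np.column_stack([mono_features, np.ones(len(mono_features))])
        self.w, *_ = np.linalg.lstsq(X, stereo_avg_depths, rcond=None)

    def predict(self, mono_features):
        X = np.column_stack([mono_features, np.ones(len(mono_features))])
        return X @ self.w
```

The reliability argument in the article rests on this structure: the supervision signal is generated by the robot's own trusted sensor, so no external labels or uncertain offline training are needed.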
The Eyephone: a head-mounted stereo display
NASA Astrophysics Data System (ADS)
Teitel, Michael A.
1990-09-01
Head-mounted stereo displays for virtual environments and computer simulations have been made since 1969. Most of the recent displays have been based on monochrome (black and white) liquid crystal display technology. Color LCD displays have generally not been used due to their lower resolution and color triad structure. As the resolution of color LCD displays is increasing, we have begun to use color displays in our Eyephone. In this paper we describe four methods for minimizing the effect of the color triads in the magnified images of LCD displays in the Eyephone stereo head-mounted display. We have settled on the use of a wavefront randomizer with a spatial frequency enhancement overlay in order to blur the triads in the displays while keeping the perceived resolution of the display high.
Fifty Years of Mars Imaging: from Mariner 4 to HiRISE
2017-11-20
This image from NASA's Mars Reconnaissance Orbiter (MRO) shows Mars' surface in detail. Mars has captured the imagination of astronomers for thousands of years, but it wasn't until the last half-century that we were able to capture images of its surface in detail. This particular site on Mars was first imaged in 1965 by the Mariner 4 spacecraft during the first successful fly-by mission to Mars. From an altitude of around 10,000 kilometers, this image (the ninth frame taken) achieved a resolution of approximately 1.25 kilometers per pixel. Since then, this location has been observed by six other visible cameras producing images with varying resolutions and sizes. This includes HiRISE (highlighted in yellow), which is the highest-resolution and has the smallest "footprint." This compilation, spanning Mariner 4 to HiRISE, shows each image at full resolution. Beginning with Viking 1 and ending with our HiRISE image, this animation documents the historic imaging of a particular site on another world. In 1976, the Viking 1 orbiter began imaging Mars in unprecedented detail, and by 1980 had successfully mosaicked the planet at approximately 230 meters per pixel. In 1999, the Mars Orbiter Camera onboard the Mars Global Surveyor (1996) also imaged this site with its Wide Angle lens, at around 236 meters per pixel. This was followed by the Thermal Emission Imaging System on Mars Odyssey (2001), whose visible camera produced the image we see here at 17 meters per pixel. In 2010, the Context Camera on the Mars Reconnaissance Orbiter (2005) imaged this site at about 5 meters per pixel, and in 2012 the High-Resolution Stereo Camera on the Mars Express orbiter (2003) captured this image of the surface at 25 meters per pixel. Finally, in 2017, HiRISE acquired the highest resolution image of this location to date at 50 centimeters per pixel. 
When seen at this unprecedented scale, we can discern a crater floor strewn with small rocky deposits, boulders several meters across, and wind-blown deposits in the floors of small craters and depressions. This compilation of Mars images spanning over 50 years gives us a visual appreciation of the evolution of orbital Mars imaging over a single site. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 52.2 centimeters (20.6 inches) per pixel (with 2 x 2 binning); objects on the order of 156 centimeters (61.4 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22115
Opportunity's Surroundings After Sol 1820 Drive (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[Figures removed for brevity, see original site: left-eye and right-eye views of a color stereo pair for PIA11841.] NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.
Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments.
Ramon Soria, Pablo; Arrue, Begoña C; Ollero, Anibal
2017-01-07
The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs on an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, which is then used online for detection of the targeted object and estimation of its position. This feature-based model proved robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.
Efficient RPG detection in noisy 3D image data
NASA Astrophysics Data System (ADS)
Pipitone, Frank
2011-06-01
We address the automatic detection of ambush weapons such as rocket-propelled grenades (RPGs) from range data, which might be derived from multiple-camera stereo with textured illumination or by other means. We describe our initial work in a new project involving the efficient acquisition of 3D scene data as well as discrete point invariant techniques to perform real-time search for threats to a convoy. The shapes of the jump boundaries in the scene are exploited in this paper, rather than on-surface points, due to the large error typical of depth measurement at long range and the relatively high resolution obtainable in the transverse direction. We describe examples of the generation of a novel range-scaled chain code for detecting and matching jump boundaries.
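The boundary-matching idea above builds on the classic Freeman chain code. The sketch below is a generic 8-connected chain code, not the paper's method: the range-scaled variant additionally normalises step lengths by the measured depth, which is omitted here.

```python
# 8-connected Freeman chain code: step direction (dx, dy) -> code 0..7,
# where code k corresponds to the direction at angle k * 45 degrees.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(boundary):
    """Freeman chain code of an 8-connected pixel boundary, given as a
    list of (x, y) pixel coordinates with unit steps between neighbours."""
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(boundary, boundary[1:])]
```

Matching two jump boundaries then reduces to comparing two short code sequences, which is far cheaper than comparing raw 3D point sets.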
Numerous Seasonal Lineae on Coprates Montes, Mars
2016-07-07
The white arrows indicate locations in this scene where numerous seasonal dark streaks have been identified in the Coprates Montes area of Mars' Valles Marineris by repeated observations from orbit. The streaks, called recurring slope lineae or RSL, extend downslope during a warm season, fade in the colder part of the year, and repeat the process the next Martian year. They are regarded as the strongest evidence for the possibility of liquid water on the surface of modern Mars. The oblique perspective of this view is based on a three-dimensional terrain model derived from a stereo pair of observations by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The scene covers an area approximately 1.6 miles (2.5 kilometers) wide. http://photojournal.jpl.nasa.gov/catalog/PIA20757
NASA Astrophysics Data System (ADS)
Atkinson, Callum; Amili, Omid; Stanislas, Michel; Cuvier, Christophe; Foucaut, Jean-Marc; Srinath, Sricharan; Laval, Jean-Philippe; Kaehler, Christian; Hain, Rainer; Scharnowski, Sven; Schroeder, Andreas; Geisler, Reinhard; Agocs, Janos; Roese, Anni; Willert, Christian; Klinner, Joachim; Soria, Julio
2016-11-01
The study of adverse pressure gradient turbulent boundary layers is complicated by the need to characterise both the local pressure gradient and its upstream flow history. It is therefore necessary to measure a significant streamwise domain at a resolution sufficient to resolve the small-scale features. To achieve this, collaborative particle image velocimetry (PIV) measurements were performed in the large boundary layer wind tunnel at the Laboratoire de Mecanique de Lille, including: planar measurements spanning a streamwise domain of 3.5 m using 16 cameras, covering 15 δ; spanwise wall-normal stereo-PIV measurements; high-speed micro-PIV of the near-wall region and wall shear stress; and streamwise wall-normal PIV in the viscous sublayer. Details of the measurements and preliminary results will be presented.
High-immersion three-dimensional display of the numerical computer model
NASA Astrophysics Data System (ADS)
Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu
2013-08-01
High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as architectural design and construction, industrial design, aeronautics, scientific research, entertainment, media advertisement, and military uses. However, most technologies present the 3D image on a screen parallel to the wall, which reduces the sense of immersion. To obtain a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. It is common to use virtual cameras, i.e. ideal pinhole cameras, to display a 3D model in a computer system, and virtual cameras can simulate the shooting method of multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of a virtual camera is determined by the observer's eye position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used; when the observer stands outside the circumcircle, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. This paper mainly discusses the parameter settings of the virtual cameras: the near clip plane is the main parameter in the first method, while the rotation angle of the virtual cameras is the main parameter in the second. To validate the results, we use Direct3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models, viewed horizontally, is constructed and demonstrated, providing high-immersion 3D visualization. The displayed 3D scenes are compared with the real objects in the real world.
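The offset perspective projection described above corresponds to an asymmetric (off-axis) viewing frustum. A minimal sketch in the OpenGL glFrustum convention follows; the function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def off_axis_projection(left, right, bottom, top, near, far):
    """Asymmetric-frustum (off-axis) perspective matrix, glFrustum convention.

    Choosing left != -right (or bottom != -top) shifts the frustum, which
    models an offset virtual camera whose image plane stays parallel to a
    shared focus plane while its axis is aimed off-center, as needed for
    multi-view ground-based stereo rendering."""
    m = np.zeros((4, 4))
    m[0, 0] = 2.0 * near / (right - left)
    m[1, 1] = 2.0 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)    # horizontal offset term
    m[1, 2] = (top + bottom) / (top - bottom)    # vertical offset term
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2.0 * far * near / (far - near)
    m[3, 2] = -1.0
    return m
```

For a symmetric frustum the two offset terms vanish and the matrix reduces to an ordinary perspective projection.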
Congruence analysis of point clouds from unstable stereo image sequences
NASA Astrophysics Data System (ADS)
Jepping, C.; Bethmann, F.; Luhmann, T.
2014-06-01
This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings, where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focuses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for a reliable handling of occlusions and other disturbances that may occur. The second objective is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another by using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
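The core operation of the congruence analysis, mapping a point group from one epoch to another with a 3D similarity transformation, has a closed-form least-squares solution (Umeyama/Horn). The sketch below is that generic solver, not the authors' exact implementation; in a RANSAC loop it would be fitted to random point subsets and the points consistent with the transform labelled stable.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares 3D similarity transform (scale s, rotation R,
    translation t) mapping src -> dst, via the Umeyama closed form.
    src, dst: (n, 3) arrays of corresponding points."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                 # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                           # avoid reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(0).sum()
    t = mu_d - s * (R @ mu_s)
    return s, R, t
```

Residuals of points under the fitted transform then serve as the RANSAC inlier criterion.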
Stereo matching and view interpolation based on image domain triangulation.
Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce
2013-09-01
This paper presents a new approach to the stereo matching and view interpolation problems based on triangular tessellations, suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and to increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage adjusts the disparity at the triangle vertices, generating a piecewise linear disparity map. A simple post-processing procedure connects triangles with similar disparities, generating a full 3D mesh for each camera (view); these meshes are used to synthesize new views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes, which require some kind of post-processing to fill holes.
Intermediate view synthesis algorithm using mesh clustering for rectangular multiview camera system
NASA Astrophysics Data System (ADS)
Choi, Byeongho; Kim, Taewan; Oh, Kwan-Jung; Ho, Yo-Sung; Choi, Jong-Soo
2010-02-01
A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis using a rectangular multiview camera system that is suitable for realizing 3-D video systems. The rectangular multiview camera system not only offers free view navigation both horizontally and vertically, but can also employ three reference views (left, right, and bottom) for intermediate view synthesis. The proposed view synthesis method first represents each reference view as a mesh and then finds the best disparity for each mesh element by stereo matching between reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to enhance the accuracy of the disparities. The mesh elements are classified into foreground and background groups by disparity values and then affine transformed. Experiments confirm that the proposed method synthesizes high-quality images and is suitable for 3-D video systems.
Hardware platform for multiple mobile robots
NASA Astrophysics Data System (ADS)
Parzhuber, Otto; Dolinsky, D.
2004-12-01
This work is concerned with software and communications architectures that might facilitate the operation of several mobile robots. The vehicles should be remotely piloted or tele-operated via a wireless link between the operator and the vehicles. The wireless link carries control commands from the operator to the vehicle, telemetry data from the vehicle back to the operator, and frequently also a real-time video stream from an on-board camera. For autonomous driving, the link carries commands and data between the vehicles. For this purpose we have developed a hardware platform which consists of a powerful microprocessor, different sensors, a stereo camera and a Wireless Local Area Network (WLAN) interface for communication. The adoption of the IEEE 802.11 standard for the physical and access layer protocols allows straightforward integration with the TCP/IP internet protocols. For inspection of the environment the robots are equipped with a wide variety of sensors, such as ultrasonic and infrared proximity sensors and a small inertial measurement unit. Stereo cameras enable obstacle detection, distance measurement and creation of a map of the room.
Photogrammetric Processing Using ZY-3 Satellite Imagery
NASA Astrophysics Data System (ADS)
Kornus, W.; Magariños, A.; Pla, M.; Soler, E.; Perez, F.
2015-03-01
This paper evaluates the stereoscopic capacities of the Chinese sensor ZiYuan-3 (ZY-3) for the generation of photogrammetric products. The satellite was launched on January 9, 2012 and carries three high-resolution panchromatic cameras viewing in the forward (22°), nadir (0°) and backward (-22°) directions, plus an infrared multi-spectral scanner (IRMSS), which looks slightly forward (6°). The ground sampling distance (GSD) is 2.1 m for the nadir image, 3.5 m for the two oblique stereo images and 5.8 m for the multispectral image. The evaluated ZY-3 imagery consists of a full set of threefold stereo and a multi-spectral image covering an area of ca. 50 km x 50 km north-west of Barcelona, Spain. The complete photogrammetric processing chain was executed, including image orientation, generation of a digital surface model (DSM), radiometric image correction, pansharpening, orthoimage generation and digital stereo plotting. All 4 images are oriented by estimating affine transformation parameters between observed and nominal RPC (rational polynomial coefficients) image positions of 17 ground control points (GCP) and a subsequent calculation of refined RPC. From 10 independent check points, RMS errors of 2.2 m, 2.0 m and 2.7 m in X, Y and H are obtained. Subsequently, a DSM with 5 m grid spacing is generated fully automatically. A comparison with lidar data yields an overall DSM accuracy of approximately 3 m; in moderate and flat terrain, accuracies on the order of 2.5 m and better are achieved. In a next step, orthoimages from the high-resolution nadir image and the multispectral image are generated using the refined RPC geometry and the DSM. After radiometric corrections, a fused high-resolution colour orthoimage with 2.1 m pixel size is created using an adaptive HSL method. The pansharpening is performed after the individual geocorrection because of the different viewing angles of the two images.
In a detailed analysis of the colour orthoimage, artifacts are detected covering an area of 4691 ha, corresponding to less than 2% of the imaged area. Most of the artifacts are caused by clouds (4614 ha); a minor part (77 ha) is affected by colour patches, striping or blooming effects. For the final qualitative analysis of the usability of the ZY-3 imagery for stereo plotting, stereo combinations of the nadir and an oblique image are discarded, mainly due to the different pixel sizes, which cause difficulties in stereoscopic vision and poor accuracy in positioning and measuring. With the two oblique images, a level of detail equivalent to 1:25.000 scale is achieved for the transport network, hydrography, vegetation and terrain-modelling elements such as break lines. For settlement, including buildings and other constructions, a lower level of detail equivalent to 1:50.000 scale is achieved.
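The orientation step above refines the RPC geometry with a six-parameter affine fit between nominal and observed GCP image positions. The sketch below shows only that generic least-squares fit (function names are illustrative; real RPC handling additionally involves evaluating the full rational polynomial model).

```python
import numpy as np

def fit_affine_correction(nominal, observed):
    """Fit x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y, mapping
    nominal RPC-projected image positions of GCPs to their observed
    positions, by linear least squares.  Both inputs are (n, 2) arrays."""
    A = np.column_stack([np.ones(len(nominal)), nominal[:, 0], nominal[:, 1]])
    coef_x, *_ = np.linalg.lstsq(A, observed[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, observed[:, 1], rcond=None)
    return coef_x, coef_y

def apply_correction(pts, coef_x, coef_y):
    """Apply the fitted affine correction to nominal image positions."""
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    return np.column_stack([A @ coef_x, A @ coef_y])
```

Residuals at independent check points, as with the 10 check points quoted above, then measure the quality of the refined orientation.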
2004-02-10
This is a three-dimensional stereo anaglyph of an image taken by the front navigation camera onboard NASA's Mars Exploration Rover Spirit, showing an interesting patch of rippled soil. 3D glasses are necessary to view this image.
Composite View from Phoenix Lander
2009-07-02
This mosaic of images from the Surface Stereo Imager camera on NASA's Phoenix Mars Lander shows several trenches dug by Phoenix, plus a corner of the spacecraft deck and the Martian arctic plain stretching to the horizon.
NASA Technical Reports Server (NTRS)
Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.
2009-01-01
The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.
Research of flaw image collecting and processing technology based on multi-baseline stereo imaging
NASA Astrophysics Data System (ADS)
Yao, Yong; Zhao, Jiguang; Pang, Xiaoyan
2008-03-01
To meet the practical demands of gun bore flaw image collection, such as accurate optical design, complex algorithms and precise technical requirements, the design framework of a 3-D image collecting and processing system based on multi-baseline stereo imaging is presented in this paper. The system mainly includes a computer, an electrical control box, a stepping motor and a CCD camera, and it realizes image collection, stereo matching, 3-D information reconstruction, and post-processing. Theoretical analysis and experimental results show that images collected by this system are precise and that the multi-baseline approach can efficiently resolve the matching ambiguity produced by uniform or repetitive textures. At the same time, the system offers higher measurement speed and precision.
CMOS detectors: lessons learned during the STC stereo channel preflight calibration
NASA Astrophysics Data System (ADS)
Simioni, E.; De Sio, A.; Da Deppo, V.; Naletto, G.; Cremonese, G.
2017-09-01
The Stereo Camera (STC), mounted on board the BepiColombo spacecraft, will acquire the entire surface of Mercury in push-frame stereo mode. STC will provide the images for the global three-dimensional reconstruction of the surface of the innermost planet of the Solar System. The launch of BepiColombo is foreseen in 2018. STC has an innovative optical system configuration, which achieves good optical performance with a factor-of-two reduction in mass and volume with respect to the classical stereo camera approach. In this telescope, two optical paths inclined at +/-20° with respect to the nadir direction are merged into a single off-axis path and focused on one detector. The focal plane is equipped with a 2k x 2k hybrid Si-PIN detector, based on CMOS technology, combining low read-out noise, high radiation hardness, compactness, lack of parasitic light, snapshot image acquisition with short exposure times (less than 1 ms), and small pixel size (10 μm). During the preflight calibration campaign of STC, some spurious detector effects were noticed. Analyzing the images taken during the calibration phase, two different signals affecting the background level were measured. These signals can reduce the usable detector dynamic range by up to a factor of four, and they are not due to dark current, stray light or similar effects. In this work we describe the features of these unwanted effects and the calibration procedures we developed to analyze them.
NASA Astrophysics Data System (ADS)
Hynek, Bernhard; Binder, Daniel; Boffi, Geo; Schöner, Wolfgang; Verhoeven, Geert
2014-05-01
Terrestrial photogrammetry was the standard method for mapping high mountain terrain in the early days of mountain cartography, until it was replaced by aerial photogrammetry and airborne laser scanning. Modern low-price digital single-lens reflex (DSLR) cameras and cheap, highly automated computer vision software with automatic image matching and multi-view stereo routines suggest a rebirth of terrestrial photogrammetry, especially in remote regions where airborne surveying is expensive due to high flight costs. Terrestrial photogrammetry with modern automated image matching is widely used in geodesy; however, its application in glaciology is still rare, especially for surveying ice bodies at the scale of some km², which is typical for valley glaciers. In August 2013 a terrestrial photogrammetric survey was carried out on Freya Glacier, a 6 km² valley glacier next to Zackenberg Research Station in NE Greenland, where detailed glacier mass balance monitoring was initiated during the last IPY. Photos were taken with a consumer-grade digital camera (Nikon D7100) from the ridges surrounding the glacier. To create a digital elevation model, the photos were processed with the software PhotoScan. A set of ~100 dGPS-surveyed ground control points on the glacier surface was used to georeference and validate the final DEM. The aims of this study were to produce a high-resolution, high-accuracy DEM of the actual surface topography of the Freya Glacier catchment with this novel approach, to explore the potential of modern low-cost terrestrial photogrammetry combined with state-of-the-art automated image matching and multi-view stereo routines for glacier monitoring, and to communicate this powerful and cheap method within the environmental research and glacier monitoring community.
NASA Astrophysics Data System (ADS)
Hofmann, Ulrich; Siedersberger, Karl-Heinz
2003-09-01
When driving cross-country, detection and state estimation of negative obstacles like ditches and creeks are mandatory for safe operation. Very often, ditches can be detected both by different photometric properties (soil vs. vegetation) and by range (disparity) discontinuities. Therefore, algorithms should make use of both the photometric and geometric properties to reliably detect obstacles. This has been achieved in UBM's EMS-Vision system (Expectation-based, Multifocal, Saccadic) for autonomous vehicles. The perception system uses Sarnoff's image processing hardware for real-time stereo vision. This sensor provides both gray-value and disparity information for each pixel at high resolution and frame rates. In order to perform an autonomous jink, the boundaries of an obstacle have to be measured accurately for calculating a safe driving trajectory. Ditches in particular are often very extended, so due to the restricted field of view of the cameras, active gaze control is necessary to explore the boundaries of an obstacle. For successful measurement of image features the system has to satisfy conditions defined by the perception expert. It has to deal with the time constraints of the active camera platform while performing saccades, and to keep the geometric conditions defined by the locomotion expert for performing a jink. Therefore, the experts have to cooperate. This cooperation is controlled by a central decision unit (CD), which has knowledge about the mission and about the capabilities available in the system and their limitations. The central decision unit reacts depending on the result of situation assessment by starting, parameterizing or stopping actions (instances of capabilities). The approach has been tested with the 5-ton van VaMoRs. Experimental results will be shown for driving in a typical off-road scenario.
NASA Astrophysics Data System (ADS)
Miranda, Alan; Staelens, Steven; Stroobants, Sigrid; Verhaeghe, Jeroen
2017-03-01
Preclinical positron emission tomography (PET) imaging in small animals is generally performed under anesthesia to immobilize the animal during scanning. More recently, for rat brain PET studies, methods to scan unrestrained awake rats are being developed in order to avoid the unwanted effects of anesthesia on the brain response. Here, we investigate the use of a projected-structure (structured light) stereo camera to track the motion of the rat head during the PET scan. The motion information is then used to correct the PET data. The stereo camera calculates a 3D point cloud representation of the scene, and tracking is performed by point cloud matching using the iterative closest point algorithm. The main advantage of the proposed motion tracking is that no intervention, e.g. for marker attachment, is needed. A manually moved microDerenzo phantom experiment and 3 awake rat [18F]FDG experiments were performed to evaluate the proposed tracking method. The tracking accuracy was 0.33 mm rms. After motion-corrected image reconstruction, the microDerenzo phantom was recovered, albeit with some loss of resolution: the reconstructed FWHM of the 2.5 and 3 mm rods increased by 0.94 and 0.51 mm, respectively, in comparison with the motion-free case. In the rat experiments, the average tracking success rate was 64.7%. The correlation of relative brain regional [18F]FDG uptake between the anesthesia and awake scan reconstructions increased from 0.291 on average (not significant) before correction to 0.909 (p < 0.0001) after motion correction. Markerless motion tracking using structured light can thus be successfully used to track the rat head for motion correction in awake rat PET scans.
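The point cloud matching step can be illustrated with a bare-bones point-to-point ICP: brute-force nearest neighbours plus a closed-form rigid fit. This is a minimal sketch only; a practical tracker like the one above adds outlier rejection, subsampling, and runs on dense structured-light clouds.

```python
import numpy as np

def rigid_fit(src, dst):
    """Closed-form least-squares rotation R and translation t with
    dst ~= src @ R.T + t (Kabsch algorithm on matched points)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[2] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=20):
    """Basic point-to-point ICP: match each source point to its nearest
    target point, fit a rigid transform, apply it, and repeat."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]           # brute-force nearest neighbours
        R, t = rigid_fit(cur, matched)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

Applied frame-to-frame, the accumulated transform gives the head pose trajectory used to reposition the PET events.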
A search for Ganymede stereo images and 3D mapping opportunities
NASA Astrophysics Data System (ADS)
Zubarev, A.; Nadezhdina, I.; Brusnikin, E.; Giese, B.; Oberst, J.
2017-10-01
We used 126 Voyager 1 and 2 images as well as 87 Galileo images of Ganymede and searched for pairs suitable for digital 3D stereo analysis. Specifically, we consider image resolutions, stereo angles, and matching illumination conditions of the respective stereo pairs. Lists of regions and local areas with stereo coverage are compiled. We present anaglyphs, and for selected areas not previously discussed we constructed digital elevation models and associated visualizations. The terrain characteristics in the models agree with our previous notion of Ganymede morphology, represented by families of lineaments and craters of various sizes and degradation stages. The identified areas of stereo coverage may serve as important reference targets for the Ganymede Laser Altimeter (GALA) experiment on the future JUICE (Jupiter Icy Moons Explorer) mission.
Offshore remote sensing of the ocean by stereo vision systems
NASA Astrophysics Data System (ADS)
Gallego, Guillermo; Shih, Ping-Chang; Benetazzo, Alvise; Yezzi, Anthony; Fedele, Francesco
2014-05-01
In recent years, remote sensing imaging systems for the measurement of oceanic sea states have attracted renewed attention. Imaging technology is economical and non-invasive, and it enables a better understanding of the space-time dynamics of ocean waves over an area rather than at the selected point locations of previous monitoring methods (buoys, wave gauges, etc.). We present recent progress in space-time measurement of ocean waves using stereo vision systems on offshore platforms, focusing on sea states with wavelengths in the range of 0.01 m to 1 m. Both traditional disparity-based systems and modern elevation-based ones are presented in a variational optimization framework: the main idea is to pose the stereoscopic reconstruction problem of the ocean surface in a variational setting and design an energy functional whose minimizer is the desired temporal sequence of wave heights. The functional combines photometric observations as well as spatial and temporal smoothness priors. Disparity methods estimate the disparity between images as an intermediate step toward retrieving the depth of the waves with respect to the cameras, whereas elevation methods estimate the ocean surface displacements directly in 3-D space. Both techniques are used to measure ocean waves from real data collected at offshore platforms in the Black Sea (Crimean Peninsula, Ukraine) and the Northern Adriatic Sea (Venice coast, Italy). Then, the statistical and spectral properties of the resulting observed waves are analyzed. We show the advantages and disadvantages of the presented stereo vision systems and discuss future lines of research to improve their performance on critical issues such as the robustness of the camera calibration in spite of undesired variations of the camera parameters, or the processing time needed to retrieve ocean wave measurements from the stereo videos, which are very large datasets that must be processed efficiently to be of practical use.
Multiresolution and short-time approaches would improve efficiency and scalability of the techniques so that wave displacements are obtained in feasible times.
Analysis of dark albedo features on a southern polar dune field of Mars.
Horváth, András; Kereszturi, Akos; Bérczi, Szaniszló; Sik, András; Pócs, Tamás; Gánti, Tibor; Szathmáry, Eörs
2009-01-01
We observed 20-200 m sized low-albedo seepage-like streaks and their annual change on defrosting polar dunes in the southern hemisphere of Mars, based on Mars Orbiter Camera (MOC), High Resolution Stereo Camera (HRSC), and High Resolution Imaging Science Experiment (HiRISE) images. The structures originate from dark spots and can be described as elongated or flow-like and, in places, branching streaks. They frequently have another spot-like structure at their end. Their overall appearance and the correlation between their morphometric parameters suggest that some material is transported downward from the spots and accumulates at the bottom of the dunes' slopes. Here, we present possible scenarios for the origin of such streaks, including dry avalanche, liquid CO2, liquid H2O, and gas-phase CO2. Based on their morphology and the currently known surface conditions of Mars, no model interprets the streaks satisfactorily; the morphology and morphometric characteristics alone are best interpreted by the model that implies some liquid water. The latest HiRISE images are also promising and suggest liquid flow. We suggest that, with better knowledge of sub-ice temperatures resulting from extended polar solar insolation and the heat-insulating capacity of water vapor and water ice, future models and measurements may show that ephemeral water could appear and flow under the surface ice layer on the dunes today.
Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lu, Jian
Technologies for capturing panoramic (360-degree) three-dimensional information in a real environment have many applications in fields such as virtual reality, security, robot navigation, and so forth. In this study, we examine an acquisition device constructed of a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama with spatial resolution equal to that of the original images acquired using the regular camera, and also estimate a dense panoramic depth map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, for determining distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained with the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available from conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.
High-precision real-time 3D shape measurement based on a quad-camera system
NASA Astrophysics Data System (ADS)
Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao
2018-01-01
Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires higher fringe density of the projected patterns, which, in turn, leads to severe phase ambiguities that must be resolved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques usually come at the cost of an increased number of patterns, reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities of conventional high-fringe-density three-step PSP patterns without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be performed efficiently and reliably by flexible phase consistency checks. In addition, the redundant information of multiple phase consistency checks is fully exploited through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that, in a large measurement volume of 200 mm × 200 mm × 400 mm, the resultant dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
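The three-step PSP patterns mentioned above yield the wrapped phase in closed form at each pixel. The sketch below shows only that textbook step for 120° phase shifts; the paper's contribution, stereo phase unwrapping across the quad-camera system, is not reproduced.

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from a three-step (120-degree shift) PSP sequence,
    with intensities modelled as I_k = A + B*cos(phi + (k-2)*2*pi/3).
    Works element-wise on whole images (numpy arrays)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

The arctan2 output is wrapped to (-pi, pi], which is exactly the ambiguity that the quad-camera geometric consistency checks are designed to resolve.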
Rover imaging system for the Mars rover/sample return mission
NASA Technical Reports Server (NTRS)
1993-01-01
In the past year, the conceptual design of a panoramic imager for the Mars Environmental Survey (MESUR) Pathfinder was finished. A prototype camera was built and its performance was tested in the laboratory; the camera performed excellently. Based on this work, we have recently proposed a small, lightweight, rugged, and highly capable Mars Surface Imager (MSI) instrument for the MESUR Pathfinder mission. A key aspect of our approach to optimizing the MSI design is that we treat image gathering, coding, and restoration as a whole, rather than as separate and independent tasks. Our approach leads to higher image quality, especially in the representation of fine detail with good contrast and clarity, without increasing either the complexity of the camera or the amount of data transmission. We have made significant progress over the past year in both the overall MSI system design and the detailed design of the MSI optics. We have taken a simple panoramic camera and upgraded it substantially into a prototype of the MSI flight instrument. The most recent version of the camera utilizes miniature wide-angle optics that image directly onto a 3-color, 2096-element CCD line array. There are several data-taking modes, providing resolution as high as 0.3 mrad/pixel. Analysis tasks performed or underway with the test data from the prototype camera include the following: construction of 3-D models of imaged scenes from stereo data, first for controlled scenes and later for field scenes; and checks on geometric fidelity, including alignment errors, mast vibration, and oscillation in the drive system. We have outlined a number of tasks planned for Fiscal Year '93 to prepare for submission of a flight instrument proposal for MESUR Pathfinder.
The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.
Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji
2018-03-05
The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons generated at a single pixel's photodiode among the taps of different exposures, and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
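The classical photometric stereo estimation that the paper builds on reduces to a per-pixel least-squares solve under a Lambertian model; a generic illustration (not the authors' real-time implementation, and all names are ours):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    # Lambertian model: I = L @ (albedo * n). Solve G = albedo * n per pixel
    # by least squares, then split magnitude (albedo) and direction (normal).
    k, h, w = images.shape                 # k images under k known lights
    I = images.reshape(k, -1)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = np.divide(G, albedo, out=np.zeros_like(G), where=albedo > 0)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# One-pixel demo: three orthogonal lights, known normal, unit albedo.
L = np.eye(3)
n_true = np.array([0.6, 0.0, 0.8])
images = (L @ n_true).reshape(3, 1, 1)
normals, albedo = photometric_stereo(images, L)
```

With more than three lights the same `lstsq` call gives the least-squares normal, which is why multi-tap capture of several exposures per frame maps so directly onto this method.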
Wind-Sculpted Vicinity After Opportunity's Sol 1797 Drive (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11820 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11820 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 111 meters (364 feet) on the 1,797th Martian day, or sol, of Opportunity's surface mission (Feb. 12, 2009). North is at the center; south at both ends. This view is the right-eye member of a stereo pair presented as a cylindrical-perspective projection with geometric seam correction. Tracks from the drive recede northward across dark-toned sand ripples in the Meridiani Planum region of Mars. Patches of lighter-toned bedrock are visible on the left and right sides of the image. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). This view is presented as a cylindrical-perspective projection with geometric seam correction.
An evaluation of onshore digital elevation models for tsunami inundation modelling
NASA Astrophysics Data System (ADS)
Griffin, J.; Latief, H.; Kongko, W.; Harig, S.; Horspool, N.; Hanung, R.; Rojali, A.; Maher, N.; Fountain, L.; Fuchs, A.; Hossen, J.; Upi, S.; Dewanto, S. E.; Cummins, P. R.
2012-12-01
Tsunami inundation models provide fundamental information about coastal areas that may be inundated in the event of a tsunami, along with additional parameters such as flow depth and velocity. This can inform disaster management activities including evacuation planning, impact and risk assessment and coastal engineering. A fundamental input to tsunami inundation models is a digital elevation model (DEM). Onshore DEMs vary widely in resolution, accuracy, availability and cost. A proper assessment of how the accuracy and resolution of DEMs translates into uncertainties in modelled inundation is needed to ensure results are appropriately interpreted and used. This assessment can in turn inform data acquisition strategies depending on the purpose of the inundation model. For example, lower accuracy elevation data may give inundation results that are sufficiently accurate to plan a community's evacuation route but not sufficient to inform engineering of a vertical evacuation shelter. A sensitivity study is undertaken to assess the utility of different available onshore digital elevation models for tsunami inundation modelling. We compare airborne interferometric synthetic aperture radar (IFSAR), ASTER and SRTM against high resolution (<1 m horizontal resolution, <0.15 m vertical accuracy) LiDAR or stereo-camera data in three Indonesian locations with different coastal morphologies (Padang, West Sumatra; Palu, Central Sulawesi; and Maumere, Flores), using three different computational codes (ANUGA, TUNAMI-N3 and TsunAWI). Tsunami inundation extents modelled with IFSAR are comparable with those modelled with the high resolution datasets and with historical tsunami run-up data. Large vertical errors (>10 m) and poor resolution of the coastline in the ASTER and SRTM elevation models cause modelled inundation to be much less extensive compared with models using better data and with observations.
Therefore we recommend that ASTER and SRTM should not be used for modelling tsunami inundation in order to determine tsunami extent or any other measure of onshore tsunami hazard. We suggest that for certain disaster management applications where the important factor is the extent of inundation, such as evacuation planning, airborne IFSAR provides a good compromise between cost and accuracy; however the representation of flow parameters such as depth and velocity is not sufficient to inform detailed engineering of structures. Differences in modelled inundation extent between digital terrain models (DTM) and digital surface models (DSM) for LiDAR, high resolution stereo-camera and airborne IFSAR data are greater than differences between the data types. The presence of trees and buildings as solid elevation in the DSM leads to underestimated inundation extents compared with observations, while removal of these features in the DTM causes more extensive inundation. Further work is needed to resolve whether DTM or DSM should be used and, in particular for DTM, how and at what spatial scale roughness should be parameterized to appropriately account for the presence of buildings and vegetation. We also test model mesh resolutions up to 0.8 m but find that there are only negligible changes in inundation extent between 0.8 and 25 m mesh resolution, even using the highest resolution elevation data.
2004-02-02
This is a three-dimensional stereo anaglyph of an image taken by the front hazard-identification camera onboard NASA's Mars Exploration Rover Opportunity, showing the rover's arm in its extended position. 3-D glasses are necessary to view this image.
Integration of prior knowledge into dense image matching for video surveillance
NASA Astrophysics Data System (ADS)
Menze, M.; Heipke, C.
2014-08-01
Three-dimensional information from dense image matching is a valuable input for a broad range of vision applications. While reliable approaches exist for dedicated stereo setups, they do not easily generalize to more challenging camera configurations. In the context of video surveillance, the typically large spatial extent of the region of interest and repetitive structures in the scene render the application of dense image matching a challenging task. In this paper we present an approach that derives strong prior knowledge from a planar approximation of the scene. This information is integrated into a graph-cut based image matching framework that treats the assignment of optimal disparity values as a labelling task. Introducing the planar prior heavily reduces ambiguities together with the search space and increases computational efficiency. The results provide a proof of concept of the proposed approach. It allows the reconstruction of dense point clouds in more general surveillance camera setups with wider stereo baselines.
Jiang, Hongzhi; Zhao, Huijie; Li, Xudong; Quan, Chenggen
2016-03-07
We propose a novel hyper-thin 3D edge measurement technique to measure the profile of the 3D outer envelope of honeycomb core structures. The width of the edges of the honeycomb core is less than 0.1 mm. We introduce a triangular layout design consisting of two cameras and one projector to measure hyper-thin 3D edges and eliminate data interference from the walls. A phase-shifting algorithm and the multi-frequency heterodyne phase-unwrapping principle are applied for phase retrieval on the edges. A new stereo matching method based on phase mapping and the epipolar constraint is presented to solve correspondence searching on the edges and remove false matches that would result in 3D outliers. Experimental results demonstrate the effectiveness of the proposed method for measuring the 3D profile of honeycomb core structures.
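The multi-frequency heterodyne phase-unwrapping principle mentioned above can be sketched for the two-frequency case (a generic textbook formulation, not the authors' exact algorithm; names are ours):

```python
import numpy as np

def heterodyne_unwrap(phi_hi, phi_lo, f_hi, f_lo):
    # phi_hi, phi_lo: wrapped phases in [0, 2π) at fringe frequencies
    # f_hi > f_lo. Their difference wraps at the beat frequency f_hi − f_lo
    # (a much longer equivalent wavelength); rescaling the beat phase
    # predicts the fringe order of phi_hi.
    phi_eq = np.mod(phi_hi - phi_lo, 2 * np.pi)
    order = np.round((phi_eq * f_hi / (f_hi - f_lo) - phi_hi) / (2 * np.pi))
    return phi_hi + 2 * np.pi * order

# Demo: absolute phase of a 16-period fringe recovered with a 15-period one.
u = np.linspace(0.0, 0.999, 500)          # normalized field coordinate
Phi = 2 * np.pi * 16 * u                  # true absolute phase
phi_hi = np.mod(Phi, 2 * np.pi)
phi_lo = np.mod(2 * np.pi * 15 * u, 2 * np.pi)
unwrapped = heterodyne_unwrap(phi_hi, phi_lo, 16, 15)
```

When f_hi − f_lo = 1, the beat phase spans the whole field without wrapping, so the fringe order of the high-frequency phase is determined everywhere.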
NASA Astrophysics Data System (ADS)
Sari, N. M.; Nugroho, J. T.; Chulafak, G. A.; Kushardono, D.
2018-05-01
Coastal zones are ecosystems with unique objects and phenomena. Aerial photo data with very high spatial resolution have extensive potential for covering coastal areas. One such dataset is the LAPAN Surveillance UAV 02 (LSU-02) photo data acquired in 2016 with a spatial resolution reaching 10 cm. This research aims to create an initial bathymetry model with the stereo photogrammetry technique using LSU-02 data. In this research the bathymetry model was made by constructing a 3D model with the stereo photogrammetry technique, which utilizes the dense point cloud created from overlapping photos. The result shows that the 3D bathymetry model can be built with the stereo photogrammetry technique, as can be seen from the surface and bathymetry transect profiles.
HRSCview: a web-based data exploration system for the Mars Express HRSC instrument
NASA Astrophysics Data System (ADS)
Michael, G.; Walter, S.; Neukum, G.
2007-08-01
The High Resolution Stereo Camera (HRSC) on the ESA Mars Express spacecraft has been orbiting Mars since January 2004. By spring 2007 it had returned around 2 terabytes of image data, covering around 35% of the Martian surface in stereo and colour at a resolution of 10-20 m/pixel. HRSCview provides a rapid means to explore these images up to their full resolution, with the data-subsetting, sub-sampling, stretching and compositing being carried out on-the-fly by the image server. It is a joint website of the Free University of Berlin and the German Aerospace Center (DLR). The system operates by on-the-fly processing of the six HRSC level-4 image products: the map-projected ortho-rectified nadir panchromatic and four colour channels, and the stereo-derived DTM (digital terrain model). The user generates a request via the web page for an image with several parameters: the centre of the view in surface coordinates, the image resolution in metres/pixel, the image dimensions, and one of several colour modes. If there is HRSC coverage at the given location, the necessary segments are extracted from the full orbit images, resampled to the required resolution, and composited according to the user's choice. In all modes the nadir channel, which has the highest resolution, is included in the composite so that the maximum detail is always retained. The images are stretched according to the current view: this applies to the elevation colour scale, as well as the nadir brightness and the colour channels. There are modes for raw colour, stretched colour, enhanced colour (exaggerated colour differences), and a synthetic 'Mars-like' colour stretch. A colour ratio mode is given as an alternative way to examine colour differences (R=IR/R, G=R/G and B=G/B). The final image is packaged as a JPEG file and returned to the user over the web. Each request requires approximately 1 second to process.
A link is provided from each view to a data product page, where header items describing the full map-projected science data product are displayed, and a direct link to the archived data products on the ESA Planetary Science Archive (PSA) is provided. At present the majority of the elevation composites are derived from the HRSC Preliminary 200m DTMs generated at the German Aerospace Center (DLR), which will not be available as separately downloadable data products. These DTMs are being progressively superseded by systematically generated higher resolution archival DTMs, also from DLR, which will become available for download through the PSA, and be similarly accessible via HRSCview. At the time of writing this abstract (May 2007), four such high resolution DTMs are available for download via the HRSCview data product pages (for images from orbits 0572, 0905, 1004, and 2039).
Viking image processing. [digital stereo imagery and computer mosaicking
NASA Technical Reports Server (NTRS)
Green, W. B.
1977-01-01
The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.
NASA Technical Reports Server (NTRS)
Liebes, S., Jr.
1982-01-01
Half size reproductions are presented of the extensive set of systematic map products generated for the two Mars Viking landing sites from stereo pairs of images radioed back to Earth. The maps span from the immediate foreground to the remote limits of ranging capability, several hundred meters from the spacecraft. The maps are of two kinds - elevation contour and vertical profile. Background and explanatory material important for understanding and utilizing the map collection included covers the Viking Mission, lander locations, lander cameras, the stereo mapping system and input images to this system.
NASA Astrophysics Data System (ADS)
Unger, Jakob; Lagarto, Joao; Phipps, Jennifer; Ma, Dinglong; Bec, Julien; Sorger, Jonathan; Farwell, Gregory; Bold, Richard; Marcu, Laura
2017-02-01
Multi-Spectral Time-Resolved Fluorescence Spectroscopy (ms-TRFS) can provide label-free real-time feedback on tissue composition and pathology during surgical procedures by resolving the fluorescence decay dynamics of the tissue. Recently, an ms-TRFS system has been developed in our group, allowing for either point-spectroscopy fluorescence lifetime measurements or dynamic raster tissue scanning by merging a 450 nm aiming beam with the pulsed fluorescence excitation light in a single fiber collection. In order to facilitate an augmented real-time display of fluorescence decay parameters, the lifetime values are back projected to the white light video. The goal of this study is to develop a 3D real-time surface reconstruction aiming for a comprehensive visualization of the decay parameters and providing an enhanced navigation for the surgeon. Using a stereo camera setup, we use a combination of image feature matching and aiming beam stereo segmentation to establish a 3D surface model of the decay parameters. After camera calibration, texture-related features are extracted for both camera images and matched providing a rough estimation of the surface. During the raster scanning, the rough estimation is successively refined in real-time by tracking the aiming beam positions using an advanced segmentation algorithm. The method is evaluated for excised breast tissue specimens showing a high precision and running in real-time with approximately 20 frames per second. The proposed method shows promising potential for intraoperative navigation, i.e. tumor margin assessment. Furthermore, it provides the basis for registering the fluorescence lifetime maps to the tissue surface adapting it to possible tissue deformations.
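Recovering a 3D point from a stereo pair of aiming-beam detections, as described above, reduces to triangulation; a minimal linear (DLT) sketch under assumed projection matrices (all names and numbers are illustrative, not the authors' calibration):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    # Linear (DLT) triangulation: each view contributes two rows of a
    # homogeneous system A·X = 0; the solution is the last right singular
    # vector of A, dehomogenized.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

# Demo: two normalized cameras with a 1 m baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0])
x1 = X_true[:2] / X_true[2]                          # projection in camera 1
x2 = (X_true + [-1.0, 0.0, 0.0])[:2] / X_true[2]     # projection in camera 2
X_est = triangulate_dlt(P1, P2, x1, x2)
```

In the described system, the segmented aiming-beam centroids in the two calibrated camera images play the role of `x1` and `x2`, and each triangulated point refines the surface model.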
The Colour and Stereo Surface Imaging System (CaSSIS) for the ExoMars Trace Gas Orbiter
Thomas, N.; Cremonese, G.; Ziethe, R.; Gerber, M.; Brändli, M.; Bruno, G.; Erismann, M.; Gambicorti, L.; Gerber, T.; Ghose, K.; Gruber, M.; Gubler, P.; Mischler, H.; Jost, J.; Piazza, D.; Pommerol, A.; Rieder, M.; Roloff, V.; Servonet, A.; Trottmann, W.; Uthaicharoenpong, T.; Zimmermann, C.; Vernani, D.; Johnson, M.; Pelò, E.; Weigel, T.; Viertl, J.; De Roux, N.; Lochmatter, P.; Sutter, G.; Casciello, A.; Hausner, T.; Ficai Veltroni, I.; Da Deppo, V.; Orleanski, P.; Nowosielski, W.; Zawistowski, T.; Szalai, S.; Sodor, B.; Tulyakov, S.; Troznai, G.; Banaskiewicz, M.; Bridges, J.C.; Byrne, S.; Debei, S.; El-Maarry, M. R.; Hauber, E.; Hansen, C.J.; Ivanov, A.; Keszthelyil, L.; Kirk, Randolph L.; Kuzmin, R.; Mangold, N.; Marinangeli, L.; Markiewicz, W. J.; Massironi, M.; McEwen, A.S.; Okubo, Chris H.; Tornabene, L.L.; Wajer, P.; Wray, J.J.
2017-01-01
The Colour and Stereo Surface Imaging System (CaSSIS) is the main imaging system onboard the European Space Agency's ExoMars Trace Gas Orbiter (TGO) which was launched on 14 March 2016. CaSSIS is intended to acquire moderately high resolution (4.6 m/pixel) targeted images of Mars at a rate of 10–20 images per day from a roughly circular orbit 400 km above the surface. Each image can be acquired in up to four colours and stereo capability is foreseen by the use of a novel rotation mechanism. A typical product from one image acquisition will be a 9.5 km × ∼45 km swath in full colour and stereo in one over-flight of the target, thereby reducing atmospheric influences inherent in stereo and colour products from previous high resolution imagers. This paper describes the instrument including several novel technical solutions required to achieve the scientific requirements.
Phoenix Laser Beam in Action on Mars
2008-09-30
The Surface Stereo Imager camera aboard NASA's Phoenix Mars Lander acquired a series of images of the laser beam in the Martian night sky. Bright spots in the beam are reflections from ice crystals in low-level ice fog.
Lunar Cartography: Progress in the 2000S and Prospects for the 2010S
NASA Astrophysics Data System (ADS)
Kirk, R. L.; Archinal, B. A.; Gaddis, L. R.; Rosiek, M. R.
2012-08-01
The first decade of the 21st century has seen a new golden age of lunar exploration, with more missions than in any decade since the 1960's and many more nations participating than at any time in the past. We have previously summarized the history of lunar mapping and described the lunar missions planned for the 2000's (Kirk et al., 2006, 2007, 2008). Here we report on the outcome of lunar missions of this decade, the data gathered, the cartographic work accomplished and what remains to be done, and what is known about mission plans for the coming decade. Four missions of lunar orbital reconnaissance were launched and completed in the decade 2001-2010: SMART-1 (European Space Agency), SELENE/Kaguya (Japan), Chang'e-1 (China), and Chandrayaan-1 (India). In addition, the Lunar Reconnaissance Orbiter or LRO (USA) is in an extended mission, and Chang'e-2 (China) operated in lunar orbit in 2010-2011. All these spacecraft have incorporated cameras capable of providing basic data for lunar mapping, and all but SMART-1 carried laser altimeters. Chang'e-1, Chang'e-2, Kaguya, and Chandrayaan-1 carried pushbroom stereo cameras intended for stereo mapping at scales of 120, 10, 10, and 5 m/pixel respectively, and LRO is obtaining global stereo imaging at 100 m/pixel with its Wide Angle Camera (WAC) and hundreds of targeted stereo observations at 0.5 m/pixel with its Narrow Angle Camera (NAC). Chandrayaan-1 and LRO carried polarimetric synthetic aperture radars capable of 75 m/pixel and (LRO only) 7.5 m/pixel imaging even in shadowed areas, and most missions carried spectrometers and imaging spectrometers whose lower resolution data are urgently in need of coregistration with other datasets and correction for topographic and illumination effects. The volume of data obtained is staggering. 
As one example, the LRO laser altimeter, LOLA, has so far made more than 5.5 billion elevation measurements, and the LRO Camera (LROC) system has returned more than 1.3 million archived image products comprising over 220 Terabytes of image data. The processing of controlled map products from these data is as yet relatively limited. A substantial portion of the LOLA altimetry data have been subjected to a global crossover analysis, and local crossover analyses of Chang'e-1 LAM altimetry have also been performed. LRO NAC stereo digital topographic models (DTMs) and orthomosaics of numerous sites of interest have been prepared based on control to LOLA data, and production of controlled mosaics and DTMs from Mini-RF radar images has begun. Many useful datasets (e.g., DTMs from LRO WAC images and Kaguya Terrain Camera images) are currently uncontrolled. Making controlled, orthorectified map products is obviously a high priority for lunar cartography, and scientific use of the vast multinational set of lunar data now available will be most productive if all observations can be integrated into a single reference frame. To achieve this goal, the key steps required are (a) joint registration and reconciliation of the laser altimeter data from multiple missions, in order to provide the best current reference frame for other products; (b) registration of image datasets (including spectral images and radar, as well as monoscopic and stereo optical images) to one another and the topographic surface from altimetry by bundle adjustment; (c) derivation of higher density topographic models than the altimetry provides, based on the stereo images registered to the altimetric data; and (d) orthorectification and mosaicking of the various datasets based on the dense and consistent topographic model resulting from the previous steps. 
In the final step, the dense and consistent topographic data will be especially useful for correcting spectrophotometric observations to facilitate mapping of geologic and mineralogic features. We emphasize that, as desirable as short term progress may seem, making mosaics before controlling observations, and controlling observations before a single coordinate reference frame is agreed upon by all participants, are counterproductive and will result in a collection of map products that do not align with one another and thus will not be fully usable for correlative scientific studies. Only a few lunar orbital missions performing remote sensing are projected for the decade 2011-2020. These include the possible further extension of the LRO mission; NASA's GRAIL mission, which is making precise measurements of the lunar gravity field that will likely improve the cartographic accuracy of data from other missions, and the Chandrayaan-2/Luna Resurs mission planned by India and Russia, which includes an orbital remote sensing component. A larger number of surface missions are being discussed for the current decade, including the lander/rover component of Chandrayaan-2/Luna Resurs, Chang'e-3 (China), SELENE-2 (Japan), and privately funded missions inspired by the Google Lunar X-Prize. The US Lunar Precursor Robotic Program was discontinued in 2010, leaving NASA with no immediate plans for robotic or human exploration of the lunar surface, though the MoonRise sample return mission might be reproposed in the future. If the cadence of missions cannot be continued, the desired sequel to the decade of lunar mapping missions 2001-2010 should be a decade of detailed and increasingly multinational analysis of lunar data from 2011 onward.
Mars Exploration Rover Athena Panoramic Camera (Pancam) investigation
Bell, J.F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.N.; Arneson, H.M.; Brown, D.; Collins, S.A.; Dingizian, A.; Elliot, S.T.; Hagerott, E.C.; Hayes, A.G.; Johnson, M.J.; Johnson, J. R.; Joseph, J.; Kinch, K.; Lemmon, M.T.; Morris, R.V.; Scherr, L.; Schwochert, M.; Shepard, M.K.; Smith, G.H.; Sohl-Dickstein, J. N.; Sullivan, R.J.; Sullivan, W.T.; Wadsworth, M.
2003-01-01
The Panoramic Camera (Pancam) investigation is part of the Athena science payload launched to Mars in 2003 on NASA's twin Mars Exploration Rover (MER) missions. The scientific goals of the Pancam investigation are to assess the high-resolution morphology, topography, and geologic context of each MER landing site, to obtain color images to constrain the mineralogic, photometric, and physical properties of surface materials, and to determine dust and aerosol opacity and physical properties from direct imaging of the Sun and sky. Pancam also provides mission support measurements for the rovers, including Sun-finding for rover navigation, hazard identification and digital terrain modeling to help guide long-term rover traverse decisions, high-resolution imaging to help guide the selection of in situ sampling targets, and acquisition of education and public outreach products. The Pancam optical, mechanical, and electronics design were optimized to achieve these science and mission support goals. Pancam is a multispectral, stereoscopic, panoramic imaging system consisting of two digital cameras mounted on a mast 1.5 m above the Martian surface. The mast allows Pancam to image the full 360° in azimuth and ±90° in elevation. Each Pancam camera utilizes a 1024 × 1024 active imaging area frame transfer CCD detector array. The Pancam optics have an effective focal length of 43 mm and a focal ratio of f/20, yielding an instantaneous field of view of 0.27 mrad/pixel and a field of view of 16° × 16°. Each rover's two Pancam "eyes" are separated by 30 cm and have a 1° toe-in to provide adequate stereo parallax. Each eye also includes a small eight position filter wheel to allow surface mineralogic studies, multispectral sky imaging, and direct Sun imaging in the 400-1100 nm wavelength region. Pancam was designed and calibrated to operate within specifications on Mars at temperatures from −55° to +5°C.
An onboard calibration target and fiducial marks provide the capability to validate the radiometric and geometric calibration on Mars. Copyright 2003 by the American Geophysical Union.
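The quoted Pancam optical parameters are mutually consistent; a quick check using the small-angle relation between focal length, per-pixel IFOV, and full field of view (the derived pixel pitch is our calculation, not a figure from the paper):

```python
import math

# Values quoted in the abstract above.
focal_length_mm = 43.0     # effective focal length
ifov_mrad = 0.27           # instantaneous field of view per pixel
pixels = 1024              # 1024 × 1024 active imaging area

fov_deg = math.degrees(pixels * ifov_mrad * 1e-3)  # full field of view
pixel_pitch_um = focal_length_mm * ifov_mrad       # small angle: p = f · IFOV
# fov_deg comes out near 15.8°, consistent with the quoted 16° × 16° field,
# and the implied detector pixel pitch is about 11.6 µm.
```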
Forest Biomass Mapping from Stereo Imagery and Radar Data
NASA Astrophysics Data System (ADS)
Sun, G.; Ni, W.; Zhang, Z.
2013-12-01
Both InSAR and lidar data provide critical information on forest vertical structure, which is critical for regional mapping of biomass. However, the regional application of these data is limited by their availability and acquisition costs. Some researchers have demonstrated the potential of stereo imagery in the estimation of forest height. Most of these studies were conducted on aerial images or spaceborne images with very high resolutions (~0.5 m). Spaceborne stereo imagers with global coverage such as ALOS/PRISM have coarser spatial resolutions (2-3 m) to achieve a wider swath. The features of stereo images are directly affected by resolution, and the approaches used by most researchers need to be adjusted for stereo imagery with lower resolutions. This study concentrated on analyzing the features of point clouds synthesized from multi-view stereo imagery over forested areas. Small-footprint lidar and lidar waveform data were used as references. The triplets of ALOS/PRISM data form three pairs (forward/nadir, backward/nadir and forward/backward) of stereo images. Each pair of the stereo images can be used to generate points (pixels) with 3D coordinates. By carefully co-registering these points from the three pairs of stereo images, a point cloud was generated. The height of each point above the ground surface was then calculated using the DEM from the USGS National Elevation Dataset as the ground surface elevation. The height data were gridded into pixels of different sizes and the histograms of the points within a pixel were analyzed. The average height of the points within a pixel was used as the height of the pixel to generate a canopy height map. The results showed that the synergy of point clouds from different views was necessary, as it increased the point density so that the point cloud could detect the vertical structure of sparse and unclosed forests.
The top layer of multi-layered forest could be captured, but dense forest prevented the stereo imagery from seeing through. The canopy height map exhibited spatial patterns of roads, forest edges and patches. The linear regression showed that the canopy height map had a good correlation with RH50 of LVIS data at 30 m pixel size, with a gain of 1.04, bias of 4.3 m and R² of 0.74 (Fig. 1). The canopy height map from PRISM and dual-pol PALSAR data were then used together to map biomass in our study area near Howland, Maine, and the results were evaluated independently against a biomass map generated from LVIS waveform data. The results showed that adding the CHM from PRISM significantly improved biomass accuracy and raised the biomass saturation level of L-band SAR data in forest biomass mapping.
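The gridding step described above (averaging the heights of all points falling within each map pixel to form a canopy height map) can be sketched as follows; the function and variable names are ours:

```python
import numpy as np

def grid_canopy_height(x, y, h, cell=30.0):
    # Average the heights of all points falling in each grid cell;
    # cells that contain no points come out as NaN.
    ix = (np.asarray(x) / cell).astype(int)
    iy = (np.asarray(y) / cell).astype(int)
    total = np.zeros((iy.max() + 1, ix.max() + 1))
    count = np.zeros_like(total)
    np.add.at(total, (iy, ix), h)   # unbuffered accumulation per cell
    np.add.at(count, (iy, ix), 1)
    with np.errstate(invalid="ignore"):
        return total / count

# Demo: two points land in the first 30 m cell, one in the second.
chm = grid_canopy_height([5.0, 10.0, 40.0], [0.0, 0.0, 0.0], [10.0, 20.0, 6.0])
```

`np.add.at` is used instead of plain fancy-index assignment so that repeated indices accumulate rather than overwrite, which is exactly what per-cell averaging needs.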
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; BenZvi, S.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.; HIRES Collaboration
2004-08-01
The High Resolution Fly's Eye (HiRes) experiment is an air fluorescence detector which, operating in stereo mode, has a typical angular resolution of 0.6° and is sensitive to cosmic rays with energies above 10^18 eV. The HiRes cosmic-ray detector is thus an excellent instrument for the study of the arrival directions of ultra-high-energy cosmic rays. We present the results of a search for anisotropies in the distribution of arrival directions on small scales (<5°) and at the highest energies (>10^19 eV). The search is based on data recorded between 1999 December and 2004 January, with a total of 271 events above 10^19 eV. No small-scale anisotropy is found, and the strongest clustering found in the HiRes stereo data is consistent at the 52% level with the null hypothesis of isotropically distributed arrival directions.
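A small-scale clustering search of this kind rests on pairwise angular separations between arrival directions; a minimal sketch of the standard spherical-geometry formula (not the HiRes analysis code, and the demo coordinates are arbitrary):

```python
import numpy as np

def angular_separation_deg(ra1, dec1, ra2, dec2):
    # Great-circle separation between two sky directions (all in degrees),
    # from the dot product of the corresponding unit vectors.
    r1, d1, r2, d2 = np.radians([ra1, dec1, ra2, dec2])
    cos_sep = np.sin(d1) * np.sin(d2) + np.cos(d1) * np.cos(d2) * np.cos(r1 - r2)
    return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))

# Any event pair closer than 5° would count toward the clustering statistic.
sep = angular_separation_deg(10.0, 20.0, 10.0, 24.0)
```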
After Conquering 'Husband Hill,' Spirit Moves On (Stereo)
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA03062 [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA03062 The first explorer ever to scale a summit on another planet, NASA's Mars Exploration Rover Spirit has begun a long trek downward from the top of 'Husband Hill' to new destinations. As shown in this 180-degree panorama from east of the summit, Spirit's earlier tracks are no longer visible. They are off to the west (to the left in this view). Spirit's next destination is 'Haskin Ridge,' straight ahead along the edge of the steep cliff on the right side of this panorama. The scene is a mosaic of images that Spirit took with the navigation camera on the rover's 635th Martian day, or sol, (Oct. 16, 2005) of exploration of Gusev Crater on Mars. This stereo view is presented in a cylindrical-perspective projection with geometric seam correction.
Opportunity's Surroundings on Sol 1687 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11739 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11739 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, 360-degree view of the rover's surroundings on the 1,687th Martian day, or sol, of its surface mission (Oct. 22, 2008). The view appears three-dimensional when viewed through red-blue glasses. Opportunity had driven 133 meters (436 feet) that sol, crossing sand ripples up to about 10 centimeters (4 inches) tall. The tracks visible in the foreground are in the east-northeast direction. Opportunity's position on Sol 1687 was about 300 meters southwest of Victoria Crater. The rover was beginning a long trek toward a much larger crater, Endeavour, about 12 kilometers (7 miles) to the southeast. This panorama combines right-eye and left-eye views presented as cylindrical-perspective projections with geometric seam correction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Evan; Goodale, Wing; Burns, Steve
There is a critical need to develop monitoring tools to track aerofauna (birds and bats) in three dimensions around wind turbines. New monitoring systems will reduce permitting uncertainty by increasing the understanding of how birds and bats are interacting with wind turbines, which will improve the accuracy of impact predictions. Biodiversity Research Institute (BRI), The University of Maine Orono School of Computing and Information Science (UMaine SCIS), HiDef Aerial Surveying Limited (HiDef), and SunEdison, Inc. (formerly First Wind) responded to this need by using stereo-optic cameras with near-infrared (nIR) technology to investigate new methods for documenting aerofauna behavior around wind turbines. The stereo-optic camera system used two synchronized high-definition video cameras with fisheye lenses and processing software that detected moving objects, which could be identified in post-processing. The stereo-optic imaging system offered the ability to extract 3-D position information from pairs of images captured from different viewpoints. Fisheye lenses allowed for a greater field of view, but required more complex image rectification to contend with fisheye distortion. The ability to obtain 3-D positions provided crucial data on the trajectory (speed and direction) of a target, which, when the technology is fully developed, will provide data on how animals are responding to and interacting with wind turbines. This project was focused on testing the performance of the camera system, improving video review processing time, advancing the 3-D tracking technology, and moving the system from Technology Readiness Level 4 to 5. To achieve these objectives, we determined the size and distance at which aerofauna (particularly eagles) could be detected and identified, created efficient data management systems, improved the video post-processing viewer, and attempted refinement of 3-D modeling with respect to fisheye lenses.
The 29-megapixel camera system successfully captured 16,173 five-minute video segments in the field. During nighttime field trials using nIR, we found that bat-sized objects could not be detected more than 60 m from the camera system. This led to a decision to focus research efforts exclusively on daytime monitoring and to redirect resources towards improving the video post-processing viewer. We redesigned the bird event post-processing viewer, which substantially decreased the review time necessary to detect and identify flying objects. During daytime field trials, we determined that eagles could be detected up to 500 m away using the fisheye wide-angle lenses, and eagle-sized targets could be identified to species within 350 m of the camera system. We used distance sampling survey methods to describe the probability of detecting and identifying eagles and other aerofauna as a function of distance from the system. The previously developed 3-D algorithm for object isolation and tracking was tested, but the image rectification (flattening) required to obtain accurate distance measurements with fisheye lenses was determined to be insufficient for distant eagles. We used MATLAB and OpenCV to improve fisheye lens rectification towards the center of the image, but accurate measurements towards the image corners could not be achieved. We believe that changing the fisheye lens to a rectilinear lens would greatly improve position estimation, but doing so would result in a decrease in viewing angle and depth of field. Finally, we generated simplified shape profiles of birds to look for similarities between unknown animals and known species. With further development, this method could provide a mechanism for filtering large numbers of shapes to reduce data storage and processing. These advancements further refined the camera system and brought this new technology closer to market.
Once commercialized, the stereo-optic camera system technology could be used to: a) research how different species interact with wind turbines in order to refine collision risk models and inform mitigation solutions; and b) monitor aerofauna interactions with terrestrial and offshore wind farms, replacing costly human observers and allowing for long-term monitoring in the offshore environment. The camera system will provide developers and regulators with data on the risk that wind turbines present to aerofauna, which will reduce uncertainty in the environmental permitting process.
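The corner-rectification difficulty reported above has a simple geometric core, which can be sketched with the ideal equidistant fisheye model (an illustrative assumption, not HiDef's actual lens calibration; the focal length and radii below are invented values): the stretch needed to flatten a fisheye image grows rapidly toward the image corners.

```python
import numpy as np

# Ideal equidistant fisheye: image radius r = f * theta, where theta is the
# ray angle from the optical axis. Rectifying to a perspective (rectilinear)
# image maps the same ray to r' = f * tan(theta). Real lenses add polynomial
# distortion terms on top of this; the model here is a simplification.
def rectified_radius(r_fisheye, f):
    theta = r_fisheye / f        # recover the ray angle from the fisheye radius
    return f * np.tan(theta)     # reproject the ray perspectively

f = 800.0                               # hypothetical focal length, in pixels
r = np.array([100.0, 400.0, 900.0])     # radii from image centre toward a corner
r_rect = rectified_radius(r, f)
stretch = r_rect / r                    # magnification applied by rectification
```

The stretch factor rises from about 0.5% near the centre to over 80% at the largest radius here, consistent with the report that rectification was adequate near the image centre but broke down toward the corners.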
Precise visual navigation using multi-stereo vision and landmark matching
NASA Astrophysics Data System (ADS)
Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh
2007-04-01
Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques which greatly reduce long-term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase pose estimation accuracy as well as reduce failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation. Using the matched landmarks, a pose correction technique eliminates the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1-5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments on a human-worn system show that the location can be estimated within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
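The drift-then-correct behaviour described in the abstract can be illustrated with a deliberately minimal one-dimensional Kalman filter (a toy sketch, not the paper's full EKF with IMU and GPS models; all numbers are invented): biased odometry steps accumulate drift, and a single absolute fix, such as a matched landmark or a GPS reading, pulls both the estimate and its variance back down.

```python
def predict(x, P, dx, q):
    # Odometry step: move the estimate, grow the variance by process noise q.
    return x + dx, P + q

def update(x, P, z, r):
    # Absolute fix (landmark match or GPS) with measurement variance r.
    k = P / (P + r)                        # Kalman gain
    return x + k * (z - x), (1.0 - k) * P

x, P = 0.0, 0.0
for _ in range(100):                       # 100 steps of a nominal 1 m each,
    x, P = predict(x, P, dx=1.02, q=0.01)  # with a 2 cm/step bias -> drift
drift_before = abs(x - 100.0)              # true position is 100 m
x, P = update(x, P, z=100.0, r=0.5)        # one absolute fix at the true position
drift_after = abs(x - 100.0)
```

After 100 biased steps the estimate is off by about 2 m; the single correction removes roughly two thirds of that error and shrinks the variance, which is the mechanism the landmark-matching correction exploits at larger scale.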
People Detection by a Mobile Robot Using Stereo Vision in Dynamic Indoor Environments
NASA Astrophysics Data System (ADS)
Méndez-Polanco, José Alberto; Muñoz-Meléndez, Angélica; Morales, Eduardo F.
People detection and tracking is a key issue for social robot design and effective human-robot interaction. This paper addresses the problem of detecting people with a mobile robot using a stereo camera. People detection using mobile robots is a difficult task because in real-world scenarios it is common to find unpredictable motion of people, dynamic environments, and different degrees of human body occlusion. Additionally, we cannot expect people to cooperate with the robot to perform its task. In our people detection method, first, an object segmentation method that uses the distance information provided by a stereo camera separates people from the background. The segmentation method proposed in this work takes into account human body proportions to segment people and provides a first estimation of people's locations. After segmentation, an adaptive contour people model based on people's distance to the robot is used to calculate a probability of detecting people. Finally, people are detected by merging the probabilities of the contour people model and by evaluating evidence over time using a Bayesian scheme. We present experiments on detection of standing and sitting people, as well as people in frontal and side view, with a mobile robot in real-world scenarios.
Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments
Ramon Soria, Pablo; Arrue, Begoña C.; Ollero, Anibal
2017-01-01
The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors. PMID:28067851
Kriechbaumer, Thomas; Blackburn, Kim; Breckon, Toby P.; Hamilton, Oliver; Rivas Casado, Monica
2015-01-01
Autonomous survey vessels can increase the efficiency and availability of wide-area river environment surveying as a tool for environment protection and conservation. A key challenge is the accurate localisation of the vessel, where bank-side vegetation or urban settlement preclude the conventional use of line-of-sight global navigation satellite systems (GNSS). In this paper, we evaluate unaided visual odometry, via an on-board stereo camera rig attached to the survey vessel, as a novel, low-cost localisation strategy. Feature-based and appearance-based visual odometry algorithms are implemented on a six degrees of freedom platform operating under guided motion, but stochastic variation in yaw, pitch and roll. Evaluation is based on a 663 m-long trajectory (>15,000 image frames) and statistical error analysis against ground truth position from a target tracking tachymeter integrating electronic distance and angular measurements. The position error of the feature-based technique (mean of ±0.067 m) is three times smaller than that of the appearance-based algorithm. From multi-variable statistical regression, we are able to attribute this error to the depth of tracked features from the camera in the scene and variations in platform yaw. Our findings inform effective strategies to enhance stereo visual localisation for the specific application of river monitoring. PMID:26694411
An HTML Tool for Production of Interactive Stereoscopic Compositions.
Chistyakov, Alexey; Soto, Maria Teresa; Martí, Enric; Carrabina, Jordi
2016-12-01
The benefits of stereoscopic vision in medical applications have been appreciated and thoroughly studied for more than a century. The usage of stereoscopic displays has a proven positive impact on performance in various medical tasks. At the same time, the market of 3D-enabled technologies is blooming. New high-resolution stereo cameras, TVs, projectors, monitors, and head-mounted displays are becoming available. This equipment, completed with a corresponding application program interface (API), can be relatively easily implemented in a system. Such systems could open new possibilities for medical applications exploiting stereoscopic depth. This work proposes a tool for production of interactive stereoscopic graphical user interfaces, which could serve as a software layer for web-based medical systems facilitating the stereoscopic effect. The tool's operation mode and the results of the conducted subjective and objective performance tests are then presented.
New Geologic Map of the Scandia Region of Mars
NASA Technical Reports Server (NTRS)
Tanaka, K. L.; Rodriquez, J. A. P.; Skinner, J. A., Jr.; Hayward, R. K.; Fortezzo, C.; Edmundson, K.; Rosiek, M.
2009-01-01
We have begun work on a sophisticated digital geologic map of the Scandia region (Fig. 1) at 1:3,000,000 scale based on post-Viking image and topographic datasets. Through application of GIS tools, we will produce a map product that will consist of (1) a printed photogeologic map displaying geologic units and relevant modificational landforms produced by tectonism, erosion, and collapse/mass wasting; (2) a landform geodatabase including sublayers of key landform types, attributed with direct measurements of their planform and topography using Mars Orbiter Laser Altimeter (MOLA) altimetry data, High-Resolution Stereo Camera (HRSC) digital elevation models (DEMs), and various image datasets; and (3) a series of digital, reconstructed paleostratigraphic and paleotopographic maps showing the inferred distribution and topographic form of materials and features during past ages.
A weighted optimization approach to time-of-flight sensor fusion.
Schwarz, Sebastian; Sjostrom, Marten; Olsson, Roger
2014-01-01
Acquiring scenery depth is a fundamental task in computer vision, with many applications in manufacturing, surveillance, or robotics relying on accurate scenery information. Time-of-flight cameras can provide depth information in real time and overcome shortcomings of traditional stereo analysis. However, they provide limited spatial resolution, and sophisticated upscaling algorithms are sought after. In this paper, we present a sensor fusion approach to time-of-flight super resolution, based on the combination of depth and texture sources. Unlike other texture-guided approaches, we interpret the depth upscaling process as a weighted energy optimization problem. Three different weights are introduced, employing different available sensor data. The individual weights address object boundaries in depth, depth sensor noise, and temporal consistency. Applied in consecutive order, they form three weighting strategies for time-of-flight super resolution. Objective evaluations show advantages in depth accuracy and for depth-image-based rendering compared with state-of-the-art depth upscaling. Subjective view synthesis evaluation shows a significant increase in viewer preference, by a factor of four, in stereoscopic viewing conditions. To the best of our knowledge, this is the first extensive subjective test performed on time-of-flight depth upscaling. Objective and subjective results prove the suitability of our time-of-flight super resolution approach for depth scenery capture.
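The weighted-energy view of depth upscaling can be sketched in one dimension (a simplification under our own assumptions, not the paper's three-weight scheme on 2-D depth maps; all values are synthetic): sparse low-resolution depth samples form the data term, and a smoothness weight that drops at texture edges keeps the upscaled depth sharp across object boundaries.

```python
import numpy as np

# Solve argmin_d  sum_i c_i (d_i - d_obs_i)^2 + lam * sum_i w_i (d_{i+1} - d_i)^2
# where c_i = 1 marks known low-res samples and w_i is small across an edge.
# This is a linear system A d = b with a tridiagonal smoothness coupling.
def upscale_depth(d_obs, conf, w, lam=1.0):
    n = len(d_obs)
    A = np.diag(conf.astype(float))
    for i in range(n - 1):
        s = lam * w[i]                   # smoothness couples neighbours i, i+1
        A[i, i] += s; A[i + 1, i + 1] += s
        A[i, i + 1] -= s; A[i + 1, i] -= s
    return np.linalg.solve(A, conf * d_obs)

# Low-res depth known at every 4th sample; a texture edge between samples 7 and 8.
n = 16
conf = np.zeros(n); conf[::4] = 1.0
d_obs = np.zeros(n); d_obs[0] = d_obs[4] = 1.0; d_obs[8] = d_obs[12] = 3.0
w = np.ones(n - 1); w[7] = 1e-3          # weak smoothing across the edge
d = upscale_depth(d_obs, conf, w)
```

The recovered profile is near 1 m on one side of the edge and near 3 m on the other, with a sharp transition where the texture weight is small, rather than the blur a uniform smoothness term would produce.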
Spirit's View Beside 'Home Plate' on Sol 1823 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[Figures removed for brevity; see original site for the left-eye and right-eye views of the color stereo pair for PIA11971.] NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,823rd Martian day, or sol, of Spirit's surface mission (Feb. 17, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the view is toward the south-southwest. The rover had driven 7 meters (23 feet) eastward earlier on Sol 1823, part of maneuvering to get Spirit into a favorable position for climbing onto the low plateau called 'Home Plate.' However, after two driving attempts with negligible progress during the following three sols, the rover team changed its strategy for getting to destinations south of Home Plate. The team decided to drive Spirit at least partway around Home Plate, instead of ascending the northern edge and taking a shorter route across the top of the plateau. Layered rocks forming part of the northern edge of Home Plate can be seen near the center of the image. Rover wheel tracks are visible at the lower edge. This view is presented as a cylindrical-perspective projection with geometric seam correction.
New Record Five-Wheel Drive, Spirit's Sol 1856 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[Figures removed for brevity; see original site for the left-eye and right-eye views of the color stereo pair for PIA11962.] NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,856th Martian day, or sol, of Spirit's surface mission (March 23, 2009). The center of the view is toward the west-southwest. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 25.82 meters (84.7 feet) west-northwestward earlier on Sol 1856. This is the longest drive on Mars so far by a rover using only five wheels. Spirit lost the use of its right-front wheel in March 2006. Before Sol 1856, the farthest Spirit had covered in a single sol's five-wheel drive was 24.83 meters (81.5 feet), on Sol 1363 (Nov. 3, 2007). The Sol 1856 drive made progress on a route planned for taking Spirit around the western side of the low plateau called 'Home Plate.' A portion of the northwestern edge of Home Plate is prominent in the left quarter of this image, toward the south. This view is presented as a cylindrical-perspective projection with geometric seam correction.
WPSS: watching people security services
NASA Astrophysics Data System (ADS)
Bouma, Henri; Baan, Jan; Borsboom, Sander; van Zon, Kasper; Luo, Xinghan; Loke, Ben; Stoeller, Bram; van Kuilenburg, Hans; Dijk, Judith
2013-10-01
To improve security, the number of surveillance cameras is rapidly increasing. However, the number of human operators remains limited, and only a selection of the video streams is observed. Intelligent software services can help to find people quickly, evaluate their behavior and show the most relevant and deviant patterns. We present a software platform that contributes to the retrieval and observation of humans and to the analysis of their behavior. The platform consists of mono- and stereo-camera tracking, re-identification, behavioral feature computation, track analysis, behavior interpretation and visualization. This system is demonstrated in a busy shopping mall with multiple cameras and different lighting conditions.
Calibration of a dual-PTZ camera system for stereo vision
NASA Astrophysics Data System (ADS)
Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng
2010-08-01
In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes, and the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into a cost function, which is minimized by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects is from 0.9 to 1.1 meters.
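The optimization step can be sketched with SciPy's Nelder-Mead implementation (a toy stand-in: the cost here is a simple corner-offset misalignment in 2-D, not the authors' full six-coordinate-system model; all names and values are invented):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: four ideal corner positions, and observed corners
# shifted by an unknown 2-D offset that calibration must recover.
ideal = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_offset = np.array([0.03, -0.02])
observed = ideal + true_offset

def cost(params):
    # Sum of squared misalignments between predicted and observed corners,
    # analogous to the paper's corner-misalignment cost values.
    return np.sum((ideal + params - observed) ** 2)

res = minimize(cost, x0=np.zeros(2), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12, "maxiter": 2000})
```

Nelder-Mead needs no gradients, which is why it suits cost functions assembled from detected image corners; `res.x` recovers the simulated offset to well below the pixel level here.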
First 3-D Panorama of Spirit Landing Site
2004-01-05
This sprawling look at the martian landscape surrounding the Mars Exploration Rover Spirit is the first 3-D stereo image from the rover navigation camera. Sleepy Hollow can be seen to center left of the image. 3D glasses are necessary.
Phoenix Checks out its Work Area
NASA Technical Reports Server (NTRS)
2008-01-01
[Animation removed for brevity; see original site.] This animation shows a mosaic of images of the workspace reachable by the scoop on the robotic arm of NASA's Phoenix Mars Lander, along with some measurements of rock sizes. Phoenix was able to determine the size of the rocks based on three-dimensional views from stereoscopic images taken by the lander's 7-foot mast camera, called the Surface Stereo Imager. The stereo pair of images enables depth perception, much the way a pair of human eyes enables people to gauge the distance to nearby objects. The rock measurements were made by a visualization tool known as Viz, developed at NASA's Ames Research Center. The shadow cast by the camera on the Martian surface appears somewhat disjointed because the camera took the images in the mosaic at different times of day. Scientists do not yet know the origin or composition of the flat, light-colored rocks on the surface in front of the lander. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
NASA Astrophysics Data System (ADS)
Preusker, F.; Oberst, J.; Stark, A.; Burmeister, S.
2018-04-01
We produce high-resolution (222 m/grid element) Digital Terrain Models (DTMs) for Mercury using stereo images from the MESSENGER orbital mission. We have developed a scheme to process large numbers of images, typically more than 6000, by photogrammetric techniques, which include multiple image matching, a pyramid strategy, and bundle block adjustments. In this paper, we present models for the map quadrangles of the southern hemisphere H11, H12, H13, and H14.
The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor †
Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji
2018-01-01
The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from a single pixel's photodiode among different exposure taps, and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. PMID:29510599
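The classical estimation step underlying the method can be sketched in a few lines (a generic Lambertian photometric-stereo solve, not the authors' multi-tap pipeline; the lighting directions and normal below are synthetic): with k ≥ 3 known light directions stacked as rows of a matrix L, per-pixel intensities satisfy I = L(ρn), so the albedo-scaled normal falls out of a least-squares solve.

```python
import numpy as np

# Lambertian model without self-shadowing: I_j = rho * (L_j . n).
# Stacking the k lighting directions as rows of L gives I = L @ (rho * n),
# solvable per pixel by least squares when k >= 3.
def estimate_normal(L, I):
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    rho = np.linalg.norm(g)          # albedo = length of the scaled normal
    return g / rho, rho

# Synthetic check with three known lights and a known surface normal.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([0.2, -0.1, 0.97])
n_true = n_true / np.linalg.norm(n_true)
rho_true = 0.8
I = rho_true * (L @ n_true)          # simulated intensities (all positive here)
n_est, rho_est = estimate_normal(L, I)
```

The multi-tap sensor's contribution is to deliver these k differently-lit intensities with almost identical timing, so the static-scene assumption of the solve still holds for moving objects.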
A holistic calibration method with iterative distortion compensation for stereo deflectometry
NASA Astrophysics Data System (ADS)
Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian
2018-07-01
This paper presents a novel holistic calibration method for stereo deflectometry systems to improve system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to inaccurate imaging models and distortion elimination. The proposed calibration method compensates for system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from fringe patterns displayed on the system's LCD screen through reflection off a markerless flat mirror. An iterative algorithm is proposed to compensate for system distortion and to optimize the camera imaging parameters and the system's geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The peak-to-valley (PV) measurement error for a flat mirror is reduced to 69.7 nm by the proposed method, from 282 nm obtained with the conventional calibration approach.
Hernández Esteban, Carlos; Vogiatzis, George; Cipolla, Roberto
2008-03-01
This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialise a multi-view photometric stereo scheme to obtain a closed surface reconstruction. There are two main contributions in this paper: Firstly we describe a robust technique to estimate light directions and intensities and secondly, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and hence allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. Here, a quantitative evaluation of the algorithm on synthetic data is presented together with complete reconstructions of challenging real objects. Finally, we show experimentally how even in the case of highly textured objects, this technique can greatly improve on correspondence-based multi-view stereo results.
Barnacle Bill in Super Resolution from Super Panorama
1998-07-03
"Barnacle Bill" is a small rock immediately west-northwest of the Mars Pathfinder lander and was the first rock visited by the Sojourner Rover's alpha proton X-ray spectrometer (APXS) instrument. This image shows super resolution techniques applied to the first APXS target rock, which was never imaged with the rover's forward cameras. Super resolution was applied to help to address questions about the texture of this rock and what it might tell us about its mode of origin. This view of Barnacle Bill was produced by combining the "Super Panorama" frames from the IMP camera. Super resolution was applied to help to address questions about the texture of these rocks and what it might tell us about their mode of origin. The composite color frames that make up this anaglyph were produced for both the right and left eye of the IMP. The composites consist of 7 frames in the right eye and 8 frames in the left eye, taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. These panchromatic frames were then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars. The anaglyph view was produced by combining the left with the right eye color composite frames by assigning the left eye composite view to the red color plane and the right eye composite view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses. http://photojournal.jpl.nasa.gov/catalog/PIA01409
Performance Evaluation of Dsm Extraction from ZY-3 Three-Line Arrays Imagery
NASA Astrophysics Data System (ADS)
Xue, Y.; Xie, W.; Du, Q.; Sang, H.
2015-08-01
ZiYuan-3 (ZY-3), launched on January 9, 2012, is China's first civilian high-resolution stereo mapping satellite. ZY-3 is equipped with three-line scanners (nadir, backward and forward) for stereo mapping; the resolutions of the panchromatic (PAN) stereo mapping images are 2.1 m at nadir and 3.6 m at tilt angles of ±22° forward and backward, respectively. The stereo base-height ratio is 0.85-0.95. Compared with stereo mapping from two-view images, the three-line-array images of ZY-3 can be used for DSM generation taking advantage of one more view than conventional photogrammetric methods, which enriches the information available for image matching and enhances the accuracy of the generated DSM. Preliminary results on the positioning accuracy of ZY-3 images have been reported, but before ZY-3 images are used in massive mapping applications for DSM generation, evaluating the performance of DSM extraction from its three-line-array imagery is essential for routine mapping applications. The goal of this research is to clarify the mapping performance of the ZY-3 three-line-array scanners through an accuracy evaluation of DSM generation. A comparison of DSM products in different topographic areas generated from three-view images and from different two-view combinations of ZY-3 images is presented. Beyond the comparison across topographic study areas, the accuracy deviation of DSM products with different grid sizes, including 25 m, 10 m and 5 m, is delineated in order to clarify the impact of grid size on accuracy evaluation.
NASA Astrophysics Data System (ADS)
Bell, J. F.; Godber, A.; McNair, S.; Caplinger, M. A.; Maki, J. N.; Lemmon, M. T.; Van Beek, J.; Malin, M. C.; Wellington, D.; Kinch, K. M.; Madsen, M. B.; Hardgrove, C.; Ravine, M. A.; Jensen, E.; Harker, D.; Anderson, R. B.; Herkenhoff, K. E.; Morris, R. V.; Cisneros, E.; Deen, R. G.
2017-07-01
The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted 2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) "true color" images, multispectral images in nine additional bands spanning 400-1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11 bit to 8 bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
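The quoted Mastcam optics figures are mutually consistent, which can be checked from the detector geometry alone (assuming the KAI-2020's 7.4 µm pixel pitch, a value not stated in the abstract): the IFOV is the pixel pitch over the focal length, and the FOV follows from the detector span.

```python
import math

PIXEL_PITCH_M = 7.4e-6   # Kodak KAI-2020 pixel size (assumed, not from the abstract)

def ifov_mrad(focal_length_mm):
    # Instantaneous field of view of one pixel, in milliradians.
    return PIXEL_PITCH_M / (focal_length_mm * 1e-3) * 1e3

def fov_deg(focal_length_mm, n_pixels):
    # Full field of view across n_pixels of the detector.
    half_span_mm = n_pixels * PIXEL_PITCH_M * 1e3 / 2.0
    return 2.0 * math.degrees(math.atan(half_span_mm / focal_length_mm))

m34_ifov = ifov_mrad(34)       # ~0.22 mrad, matching the quoted M-34 IFOV
m100_ifov = ifov_mrad(100)     # ~0.074 mrad, matching the quoted M-100 IFOV
m34_fov_h = fov_deg(34, 1648)  # ~20 degrees across the 1648-pixel width
```

Under the assumed pixel pitch the computed values reproduce the abstract's 0.22 mrad, 0.074 mrad, and 20° figures.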
Wörgötter, F
1999-10-01
In a stereoscopic system both eyes or cameras have a slightly different view. As a consequence, small variations between the projected images exist ("disparities") which are spatially evaluated in order to retrieve depth information. We will show that two related algorithmic versions can be designed which recover disparity. Both approaches are based on the comparison of filter outputs from filtering the left and the right image. The difference of the phase components between left and right filter responses encodes the disparity. One approach uses regular Gabor filters and computes the spatial phase differences in a conventional way, as described already in 1988 by Sanger. Novel to this approach, however, is that we formulate it in a way which is fully compatible with neural operations in the visual cortex. The second approach uses the apparently paradoxical similarity between the analysis of visual disparities and the determination of the azimuth of a sound source. Animals determine the direction of a sound from the temporal delay between the left and right ear signals. Similarly, in our second approach we transpose the spatially defined problem of disparity analysis into the temporal domain and utilize two resonators, implemented in the form of causal (electronic) filters, to determine the disparity as local temporal phase differences between the left and right filter responses. This approach permits video real-time analysis of stereo image sequences (see movies at http://www.neurop.ruhr-uni-bochum.de/Real-Time-Stereo) and an FPGA-based PC board has been developed which performs stereo analysis at full PAL resolution in video real-time. An ASIC chip will be available in March 2000.
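Sanger-style phase-based disparity can be sketched in one dimension (our illustrative reimplementation under simplified assumptions: a single Gabor filter and a pure-tone test signal, not the authors' cortical or electronic-resonator versions): the phase difference between the left and right filter responses, divided by the filter's centre frequency, estimates the local shift.

```python
import numpy as np

def gabor_response(signal, x0, sigma=8.0, omega=0.5):
    # Complex Gabor filter at position x0: Gaussian window times e^{i*omega*x}.
    x = np.arange(len(signal))
    kernel = np.exp(-(x - x0) ** 2 / (2.0 * sigma ** 2)) \
             * np.exp(1j * omega * (x - x0))
    return np.sum(signal * kernel)

def phase_disparity(left, right, x0, sigma=8.0, omega=0.5):
    # The phase difference of the two responses encodes the local shift:
    # disparity ~= (phi_right - phi_left) / omega, valid for |d| < pi/omega.
    rl = gabor_response(left, x0, sigma, omega)
    rr = gabor_response(right, x0, sigma, omega)
    return np.angle(rr * np.conj(rl)) / omega

x = np.arange(256, dtype=float)
true_d = 3.0
left = np.cos(0.5 * x + 0.3)                 # pure-tone stand-in for an image row
right = np.cos(0.5 * (x - true_d) + 0.3)     # right view shifted by the disparity
d_est = phase_disparity(left, right, x0=128)
```

The same phase comparison carries over to the temporal-domain variant: replace the spatial Gabor responses with the outputs of two causal resonators and the phase difference becomes a temporal delay.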
HOPIS: hybrid omnidirectional and perspective imaging system for mobile robots.
Lin, Huei-Yung; Wang, Min-Liang
2014-09-04
In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach.
Recurring Lineae on Slopes at Hale Crater, Mars
2015-09-28
Dark, narrow streaks on Martian slopes such as these at Hale Crater are inferred to be formed by seasonal flow of water on contemporary Mars. The streaks are roughly the length of a football field. The imaging and topographical information in this processed, false-color view come from the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. These dark features on the slopes are called "recurring slope lineae" or RSL. Planetary scientists using observations with the Compact Reconnaissance Imaging Spectrometer on the same orbiter detected hydrated salts on these slopes at Hale Crater, corroborating the hypothesis that the streaks are formed by briny liquid water. The image was produced by first creating a 3-D computer model (a digital terrain map) of the area based on stereo information from two HiRISE observations, and then draping a false-color image over the land-shape model. The vertical dimension is exaggerated by a factor of 1.5 compared to horizontal dimensions. The camera records brightness in three wavelength bands: infrared, red and blue-green. The draped image is one product from HiRISE observation ESP_03070_1440. http://photojournal.jpl.nasa.gov/catalog/PIA19916
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.
2017-09-01
The Semi-Global Matching (SGM) algorithm is known as a high-performance and reliable stereo matching algorithm in the photogrammetry community. However, there are some challenges in using this algorithm, especially for high-resolution satellite stereo images over urban areas and images with shadowed areas. The SGM algorithm computes highly noisy disparity values in the shadow areas around tall buildings, due to mismatching in these low-entropy areas. In this paper, a new method is developed to refine the disparity map in shadow areas. The method is based on integrating panchromatic and multispectral image data to detect shadow areas at the object level. In addition, RANSAC plane fitting and morphological filtering are employed to refine the disparity map. The results on a GeoEye-1 stereo pair captured over Qom city in Iran show a significant increase in the rate of matched pixels compared to the standard SGM algorithm.
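The refinement idea, detecting an unreliable (shadow) region and replacing its disparities with a fitted planar surface, can be sketched as follows. RANSAC and morphological filtering are replaced by ordinary least squares for brevity, and the mask and all numbers are hypothetical stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 40, 40
yy, xx = np.mgrid[0:h, 0:w]
# Ground-truth disparity: a tilted plane (e.g. a road surface)
disparity = 0.05 * xx + 0.02 * yy + 10.0
noisy = disparity + rng.normal(0, 0.01, (h, w))
# Pretend SGM failed inside a "shadow" region near a tall building
shadow = (xx > 25) & (yy > 25)
noisy[shadow] += rng.normal(0, 3.0, noisy[shadow].shape)

# Fit a plane d = a*x + b*y + c to the reliable (non-shadow) pixels only
A = np.column_stack([xx[~shadow], yy[~shadow], np.ones((~shadow).sum())])
coef, *_ = np.linalg.lstsq(A, noisy[~shadow], rcond=None)
# Replace the shadow disparities with the fitted plane
refined = noisy.copy()
refined[shadow] = coef[0] * xx[shadow] + coef[1] * yy[shadow] + coef[2]
err = np.abs(refined - disparity)[shadow].mean()
```

With a robust fit (RANSAC) instead of plain least squares, the same replacement step tolerates outliers leaking into the "reliable" mask.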
Solid state electro-optic color filter and iris
NASA Technical Reports Server (NTRS)
1975-01-01
A pair of solid state electro-optic filters (SSEF) in a binocular holder were designed and fabricated for evaluation of field sequential stereo TV applications. The electronic circuitry for use with the stereo goggles was designed and fabricated, requiring only an external video input. A polarizing screen suitable for attachment to various size TV monitors for use in conjunction with the stereo goggles was designed and fabricated. An improved engineering model 2 filter was fabricated using the bonded holder technique developed previously and integrated to a GCTA color TV camera. An engineering model color filter was fabricated and assembled using PLZT control elements. In addition, a ruggedized holder assembly was designed, fabricated and tested. This assembly provides electrical contacts, high voltage protection, and support for the fragile PLZT disk, and also permits mounting and optical alignment of the associated polarizers.
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site] Figure 1 [figure removed for brevity, see original site] Figure 2 This navigation camera mosaic, created from images taken by NASA's Mars Exploration Rover Opportunity on sols 115 and 116 (May 21 and 22, 2004), provides a dramatic view of 'Endurance Crater.' The rover engineering team carefully plotted the safest path into the football field-sized crater, eventually easing the rover down the slopes around sol 130 (June 12, 2004). To the upper left of the crater sits the rover's protective heatshield, which sheltered Opportunity as it passed through the martian atmosphere. The 360-degree, stereo view is presented in a cylindrical-perspective projection, with geometric and radiometric seam correction. Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.
Building Change Detection in Very High Resolution Satellite Stereo Image Time Series
NASA Astrophysics Data System (ADS)
Tian, J.; Qin, R.; Cerra, D.; Reinartz, P.
2016-06-01
There is an increasing demand for robust methods for urban sprawl monitoring. The steadily increasing number of high-resolution and multi-view sensors allows producing datasets with high temporal and spatial resolution; however, less effort has been dedicated to employing very high resolution (VHR) satellite image time series (SITS) to monitor changes in buildings with higher accuracy. In addition, these VHR data are often acquired from different sensors. The objective of this research is to propose a robust time-series data analysis method for VHR stereo imagery. Firstly, the spatial-temporal information of the stereo imagery and the Digital Surface Models (DSMs) generated from them are combined, and building probability maps (BPM) are calculated for all acquisition dates. In the second step, an object-based change analysis is performed based on the derivative features of the BPM sets. The change consistency between the object level and the pixel level is checked to remove any outlier pixels. Results are assessed on six pairs of VHR satellite images acquired within a time span of 7 years. The evaluation results prove the efficiency of the proposed method.
Anaglyph Image Technology As a Visualization Tool for Teaching Geology of National Parks
NASA Astrophysics Data System (ADS)
Stoffer, P. W.; Phillips, E.; Messina, P.
2003-12-01
Anaglyphic stereo viewing technology emerged in the mid 1800's. Anaglyphs use offset images in contrasting colors (typically red and cyan) that when viewed through color filters produce a three-dimensional (3-D) image. Modern anaglyph image technology has become increasingly easy to use and relatively inexpensive using digital cameras, scanners, color printing, and common image manipulation software. Perhaps the primary drawbacks of anaglyph images include visualization problems with primary colors (such as flowers, bright clothing, or blue sky) and distortion factors in large depth-of-field images. However, anaglyphs are more versatile than polarization techniques since they can be printed, displayed on computer screens (such as on websites), or projected with a single projector (as slides or digital images), and red and cyan viewing glasses cost less than polarization glasses and other 3-D viewing alternatives. Anaglyph images are especially well suited for most natural landscapes, such as views dominated by natural earth tones (grays, browns, greens), and they work well for sepia and black and white images (making the conversion of historic stereo photography into anaglyphs easy). We used a simple stereo camera setup incorporating two digital cameras with a rigid base to photograph landscape features in national parks (including arches, caverns, cactus, forests, and coastlines). We also scanned historic stereographic images. Using common digital image manipulation software we created websites featuring anaglyphs of geologic features from national parks. We used the same images for popular 3-D poster displays at the U.S. Geological Survey Open House 2003 in Menlo Park, CA. Anaglyph photography could easily be used in combined educational outdoor activities and laboratory exercises.
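The red/cyan construction described above is simple enough to sketch: take the red channel from the left-eye image and the green/blue (cyan) channels from the right-eye image. A minimal sketch assuming 8-bit RGB arrays; the tiny synthetic frames are illustrative only:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red channel from the left-eye image, green/blue (cyan) from the right."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

# Tiny synthetic example: a red-ish left frame and a cyan-ish right frame.
left = np.zeros((4, 4, 3), dtype=np.uint8)
left[..., 0] = 200
right = np.zeros((4, 4, 3), dtype=np.uint8)
right[..., 1:] = 150
anaglyph = make_anaglyph(left, right)
```

Viewed through red/cyan glasses, each eye then sees only its own image, which is why strongly saturated primary colors in the scene (bright flowers, blue sky) break the effect.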
Comparison of different "along the track" high resolution satellite stereo-pair for DSM extraction
NASA Astrophysics Data System (ADS)
Nikolakopoulos, Konstantinos G.
2013-10-01
The possibility to create DEMs from stereo pairs is based on the Pythagorean theorem and on the principles of photogrammetry that have been applied to aerial photograph stereo pairs for the last seventy years. The application of these principles to digital satellite stereo data was inherent in the first satellite missions. During the last decades, satellite stereo-pairs were acquired across the track on different days (SPOT, ERS etc.). More recently, same-date along-the-track stereo data acquisition seems to prevail (Terra ASTER, SPOT5 HRS, Cartosat, ALOS PRISM), as it reduces the radiometric image variations (refractive effects, sun illumination, temporal changes) and thus increases the correlation success rate in image matching. Two of the newest satellite sensors with stereo collection capability are Cartosat and ALOS PRISM. Both of them acquire stereo pairs along the track with a 2.5 m spatial resolution, covering areas of 30 × 30 km. In this study we compare two different satellite stereo-pairs collected along the track for DSM creation. The first one is created from a Cartosat stereo pair and the second one from an ALOS PRISM triplet. The area of study is situated in Chalkidiki Peninsula, Greece. Both DEMs were created using the same ground control points collected with a differential GPS. After a first check for random or systematic errors, a statistical analysis was done. Points of certified elevation have been used to estimate the accuracy of these two DSMs. The elevation difference between the DEMs was calculated. 2D RMSE, correlation and percentile values were also computed and the results are presented.
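The DSM comparison statistics mentioned (RMSE, correlation, and percentile of elevation differences against checkpoints) can be sketched as below. The checkpoint data here are synthetic stand-ins with an assumed 2.5 m elevation noise, not the Cartosat or ALOS results:

```python
import numpy as np

def dem_stats(dem_a, dem_b):
    """Elevation-difference statistics between two co-registered DSMs."""
    diff = dem_a - dem_b
    rmse = float(np.sqrt(np.mean(diff**2)))
    corr = float(np.corrcoef(dem_a.ravel(), dem_b.ravel())[0, 1])
    p90 = float(np.percentile(np.abs(diff), 90))  # 90th percentile of |error|
    return rmse, corr, p90

rng = np.random.default_rng(2)
truth = rng.uniform(100, 500, (50, 50))          # hypothetical checkpoint elevations
candidate = truth + rng.normal(0, 2.5, truth.shape)  # DSM with ~2.5 m noise
rmse, corr, p90 = dem_stats(candidate, truth)
```

Reporting both RMSE and a high percentile is useful because a few gross matching blunders inflate RMSE while leaving the percentile nearly unchanged.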
The MVACS Surface Stereo Imager on Mars Polar Lander
NASA Astrophysics Data System (ADS)
Smith, P. H.; Reynolds, R.; Weinberg, J.; Friedman, T.; Lemmon, M. T.; Tanner, R.; Reid, R. J.; Marcialis, R. L.; Bos, B. J.; Oquest, C.; Keller, H. U.; Markiewicz, W. J.; Kramm, R.; Gliem, F.; Rueffer, P.
2001-08-01
The Surface Stereo Imager (SSI), a stereoscopic, multispectral camera on the Mars Polar Lander, is described in terms of its capabilities for studying the Martian polar environment. The camera's two eyes, separated by 15.0 cm, provide the camera with range-finding ability. Each eye illuminates half of a single CCD detector with a field of view of 13.8° high by 14.3° wide and has 12 selectable filters between 440 and 1000 nm. The
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2016-09-01
In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor has become increasingly popular for surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational optimization algorithm to map the optical flow fields computed from different-wavelength images. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different-wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.
Detecting blind building façades from highly overlapping wide angle aerial imagery
NASA Astrophysics Data System (ADS)
Burochin, Jean-Pascal; Vallet, Bruno; Brédif, Mathieu; Mallet, Clément; Brosset, Thomas; Paparoditis, Nicolas
2014-10-01
This paper deals with the identification of blind building façades, i.e. façades which have no openings, in wide-angle aerial images with a decimeter pixel size, acquired by nadir-looking cameras. This blindness characterization is in general crucial for real estate estimation and has, at least in France, particular importance for evaluating the legal permission to build on a parcel under local urban planning schemes. We assume that we have at our disposal an aerial survey with a relatively high stereo overlap along-track and across-track, and a 3D city model of LoD 1 that may have been generated from the input images. The 3D model is textured with the aerial imagery by taking into account the 3D occlusions and by selecting for each façade the best-resolution texture viewing the whole façade. We then parse all 3D façade textures looking for evidence of openings (windows or doors). This evidence is characterized by a comprehensive set of basic radiometric and geometrical features. The blindness prognosis is then elaborated through a supervised classification (SVM). Despite the relatively low resolution of the images, we reach a classification accuracy of around 85% on decimeter-resolution imagery with 60 × 40% stereo overlap. On the one hand, we show that the results are very sensitive to the texture resampling process and to vegetation presence on façade textures. On the other hand, the most relevant features for our classification framework are related to texture uniformity, horizontal aspect, and the maximal contrast of the opening detections. We conclude that standard aerial imagery used to build 3D city models can also be exploited, to some extent and at no additional cost, for façade blindness characterization.
Real Time Eye Tracking and Hand Tracking Using Regular Video Cameras for Human Computer Interaction
2011-01-01
...understand us. More specifically, the computer should be able to infer what we wish to see, do, and interact with through our movements, gestures, and ... Our system differs from the majority of other systems in that we do not use infrared, stereo-cameras, specially-constructed ...
Automated generation of image products for Mars Exploration Rover Mission tactical operations
NASA Technical Reports Server (NTRS)
Alexander, Doug; Zamani, Payam; Deen, Robert; Andres, Paul; Mortensen, Helen
2005-01-01
This paper will discuss, from design to implementation, the methodologies applied to MIPL's automated pipeline processing as a 'system of systems' integrated with the MER GDS. Overviews of the interconnected product-generating systems will also be provided, with emphasis on interdependencies, including those for a) geometric rectification of camera lens distortions, b) generation of stereo disparity, c) derivation of 3-dimensional coordinates in XYZ space, d) generation of unified terrain meshes, e) camera-to-target ranging (distance) and f) multi-image mosaicking.
A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.
Song, Yu; Nuske, Stephen; Scherer, Sebastian
2016-12-22
State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must deal with aggressive 6-DOF (Degree of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The estimation system has two main parts: a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry), which is derived and discussed in detail, and a long-range stereo visual odometry proposed for high-altitude MAV odometry calculation using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimate. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight.
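The core difficulty of long-range stereo is visible in the pinhole triangulation relation Z = fB/d: the depth shift per pixel of disparity error grows as Z²/(fB), so high-altitude depth estimates degrade quadratically. A small sketch with a hypothetical focal length and baseline (not the paper's rig):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Hypothetical camera: 700 px focal length, 0.5 m baseline.
f_px, base_m = 700.0, 0.5
z = stereo_depth(f_px, base_m, 7.0)   # a point 7 px of disparity away -> 50 m
# First-order sensitivity: one pixel of disparity error shifts depth by ~Z^2/(f*B)
dz_per_px = z**2 / (f_px * base_m)
```

At 50 m this hypothetical rig already loses roughly 7 m of depth per pixel of disparity error, which motivates pooling multi-view triangulation and an inverse depth filter rather than trusting single-frame disparities.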
NASA Astrophysics Data System (ADS)
Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.
2012-07-01
Stereovision-based mobile mapping systems enable the efficient capture of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment, which builds the foundation for a wide range of 3D mapping applications and image-based geo web services. Georeferenced stereo images are ideally suited for the 3D mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks, and image-based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case the imaging sensors - normally relies on direct georeferencing with INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed which allow validating and updating the INS/GNSS-based trajectory with independently estimated positions during prolonged GNSS signal outages, in order to raise the georeferencing accuracy to the project requirements.
Development of a teaching system for an industrial robot using stereo vision
NASA Astrophysics Data System (ADS)
Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki
1997-12-01
The teach-and-playback method is the standard technique for programming industrial robots. However, this technique is time-consuming and laborious. In this study, a new teaching algorithm using stereo vision, based on human demonstrations in front of two cameras, is proposed. In the proposed teaching algorithm, a robot is controlled repetitively according to angles determined by fuzzy set theory until it reaches an instructed teaching point, which is relayed through the cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibration is needed, because fuzzy set theory, which can express the control commands to the robot qualitatively, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized with this teaching algorithm. Simulations and experiments have been performed on the proposed teaching system, and the test data have confirmed the usefulness of our design.
The Modular Optical Underwater Survey System
Amin, Ruhul; Richards, Benjamin L.; Misa, William F. X. E.; Taylor, Jeremy C.; Miller, Dianna R.; Rollo, Audrey K.; Demarke, Christopher; Ossolinski, Justin E.; Reardon, Russell T.; Koyanagi, Kyle H.
2017-01-01
The Pacific Islands Fisheries Science Center deploys the Modular Optical Underwater Survey System (MOUSS) to estimate the species-specific, size-structured abundance of commercially-important fish species in Hawaii and the Pacific Islands. The MOUSS is an autonomous stereo-video camera system designed for the in situ visual sampling of fish assemblages. This system is rated to 500 m and its low-light, stereo-video cameras enable identification, counting, and sizing of individuals at a range of 0.5–10 m. The modular nature of MOUSS allows for the efficient and cost-effective use of various imaging sensors, power systems, and deployment platforms. The MOUSS is in use for surveys in Hawaii, the Gulf of Mexico, and Southern California. In Hawaiian waters, the system can effectively identify individuals to a depth of 250 m using only ambient light. In this paper, we describe the MOUSS’s application in fisheries research, including the design, calibration, analysis techniques, and deployment mechanism. PMID:29019962
NASA Astrophysics Data System (ADS)
Kelly, M. A.; Boldt, J.; Wilson, J. P.; Yee, J. H.; Stoffler, R.
2017-12-01
The multi-spectral STereo Atmospheric Remote Sensing (STARS) concept has the objective to provide high-spatial and -temporal-resolution observations of 3D cloud structures related to hurricane development and other severe weather events. The rapid evolution of severe weather demonstrates a critical need for mesoscale observations of severe weather dynamics, but such observations are rare, particularly over the ocean where extratropical and tropical cyclones can undergo explosive development. Coincident space-based measurements of wind velocity and cloud properties at the mesoscale remain a great challenge, but are critically needed to improve the understanding and prediction of severe weather and cyclogenesis. STARS employs a mature stereoscopic imaging technique on two satellites (e.g. two CubeSats, two hosted payloads) to simultaneously retrieve cloud motion vectors (CMVs), cloud-top temperatures (CTTs), and cloud geometric heights (CGHs) from multi-angle, multi-spectral observations of cloud features. STARS is a pushbroom system based on separate wide-field-of-view co-boresighted multi-spectral cameras in the visible, midwave infrared (MWIR), and longwave infrared (LWIR) with high spatial resolution (better than 1 km). The visible system is based on a pan-chromatic, low-light imager to resolve cloud structures under nighttime illumination down to ¼ moon. The MWIR instrument, which is being developed as a NASA ESTO Instrument Incubator Program (IIP) project, is based on recent advances in MWIR detector technology that requires only modest cooling. The STARS payload provides flexible options for spaceflight due to its low size, weight, power (SWaP) and very modest cooling requirements. STARS also meets AF operational requirements for cloud characterization and theater weather imagery. In this paper, an overview of the STARS concept, including the high-level sensor design, the concept of operations, and measurement capability will be presented.
Future directions in 3-dimensional imaging and neurosurgery: stereoscopy and autostereoscopy.
Christopher, Lauren A; William, Albert; Cohen-Gadol, Aaron A
2013-01-01
Recent advances in 3-dimensional (3-D) stereoscopic imaging have enabled 3-D display technologies in the operating room. We find 2 beneficial applications for the inclusion of 3-D imaging in clinical practice. The first is the real-time 3-D display in the surgical theater, which is useful for the neurosurgeon and observers. In surgery, a 3-D display can include a cutting-edge mixed-mode graphic overlay for image-guided surgery. The second application is to improve the training of residents and observers in neurosurgical techniques. This article documents the requirements of both applications for a 3-D system in the operating room and for clinical neurosurgical training, followed by a discussion of the strengths and weaknesses of the current and emerging 3-D display technologies. An important comparison between a new autostereoscopic display without glasses and current stereo display with glasses improves our understanding of the best applications for 3-D in neurosurgery. Today's multiview autostereoscopic display has 3 major benefits: It does not require glasses for viewing; it allows multiple views; and it improves the workflow for image-guided surgery registration and overlay tasks because of its depth-rendering format and tools. Two current limitations of the autostereoscopic display are that resolution is reduced and depth can be perceived as too shallow in some cases. Higher-resolution displays will be available soon, and the algorithms for depth inference from stereo can be improved. The stereoscopic and autostereoscopic systems from microscope cameras to displays were compared by the use of recorded and live content from surgery. To the best of our knowledge, this is the first report of application of autostereoscopy in neurosurgery.
Improving accuracy of Plenoptic PIV using two light field cameras
NASA Astrophysics Data System (ADS)
Thurow, Brian; Fahringer, Timothy
2017-11-01
Plenoptic particle image velocimetry (PIV) has recently emerged as a viable technique for acquiring three-dimensional, three-component velocity field data using a single plenoptic, or light field, camera. The simplified experimental arrangement is advantageous in situations where optical access is limited and/or it is not possible to set up the four or more cameras typically required in a tomographic PIV experiment. A significant disadvantage of a single-camera plenoptic PIV experiment, however, is that the accuracy of the velocity measurement along the optical axis of the camera is significantly worse than in the two lateral directions. In this work, we explore the accuracy of plenoptic PIV when two plenoptic cameras are arranged in a stereo imaging configuration. It is found that the addition of a second camera improves the accuracy in all three directions and nearly eliminates any differences between them. This improvement is illustrated using both synthetic and real experiments conducted on a vortex ring using both one and two plenoptic cameras.
Farewell to the Earth and the Moon -ESA's Mars Express successfully tests its instruments
NASA Astrophysics Data System (ADS)
2003-07-01
The routine check-outs of Mars Express's instruments and of the Beagle-2 lander, performed during the last weeks, have been very successful. "As in all space missions little problems have arisen, but they have been carefully evaluated and solved. Mars Express continues on its way to Mars, performing beautifully", comments Chicarro. The views of the Earth/Moon system were taken on 3 July 2003 by Mars Express's High Resolution Stereo Camera (HRSC), when the spacecraft was 8 million kilometres from Earth. The image taken shows true colours; the Pacific Ocean appears in blue, and the clouds near the Equator and in mid to northern latitudes in white to light grey. The image was processed by the instrument team at the Institute of Planetary Research of DLR, Berlin (Germany). It was built by combining a super-resolution black-and-white HRSC snapshot of the Earth and the Moon with colour information obtained by the blue, green, and red sensors of the instrument. “The pictures and the information provided by the data prove the camera is working very well. They provide a good indication of what to expect once the spacecraft is in its orbit around Mars, at altitudes of only 250-300 kilometres: very high resolution images with brilliant true colour and in 3D,” says the Principal Investigator of the HRSC, Gerhard Neukum, of the Freie Universität of Berlin (Germany). This camera will be able to distinguish details as small as 2 metres on the Martian surface. Another striking demonstration of the high performance of Mars Express's instruments is the data taken by the OMEGA spectrometer. Once at Mars, this instrument will provide the best map of the molecular and mineralogical composition of the whole planet, with 5% of the planetary surface in high resolution. Minerals and other compounds such as water will be charted as never before. As the Red Planet is still too far away, the OMEGA team devised an ingenious test for their instrument: to detect the Earth’s surface components.
As expected, OMEGA made a direct and unambiguous detection of major and minor constituents of the Earth’s atmosphere, such as molecular oxygen, water and carbon dioxide, ozone and methane, among other molecules. "The sensitivity demonstrated by OMEGA on these Earth spectra should reveal really minute amounts of water in both Martian surface materials and atmosphere," says the Principal Investigator of OMEGA, Jean-Pierre Bibring, from the Institut d'Astrophysique Spatiale, Orsay, France. The experts will continue testing Mars Express’s instruments until its arrival at the Red Planet next December. The scientists agree that these instruments will enormously increase our understanding of the morphology and topography of the Martian surface, of the geological structures and processes - active now and in the past - and ultimately of Mars’s geological evolution. With such tools, Mars Express is also able to address the important “water” question, namely how much water there is today and how much there was in the past. Ultimately, this will also tell us whether Mars had environmental conditions that could favour the evolution of life. Note to editors: The Mars Express High Resolution Stereo Camera (HRSC) was developed by the German Aerospace Centre (DLR) and built by EADS-Astrium GmbH in Friedrichshafen, Germany. The Mars Express OMEGA spectrometer was developed and built by the Institut d'Astrophysique Spatiale, Orsay, France, in cooperation with LESIA at Meudon/Paris Observatory, France, IFSI in Frascati, Italy, and IKI in Moscow, Russia.
Opportunity's View After Drive on Sol 1806 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11816 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11816 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 60.86 meters (200 feet) on the 1,806th Martian day, or sol, of Opportunity's surface mission (Feb. 21, 2009). North is at the center; south at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Engineers designed the Sol 1806 drive to be driven backwards as a strategy to redistribute lubricant in the rover's wheels. The right-front wheel had been showing signs of increased friction. The rover's position after the Sol 1806 drive was about 2 kilometers (1.2 miles) south southwest of Victoria Crater. Cumulative odometry was 14.74 kilometers (9.16 miles) since landing in January 2004, including 2.96 kilometers (1.84 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008). This view is presented as a cylindrical-perspective projection with geometric seam correction.
Opportunity's View After Long Drive on Sol 1770 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11791 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11791 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 104 meters (341 feet) on the 1,770th Martian day, or sol, of Opportunity's surface mission (January 15, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Prior to the Sol 1770 drive, Opportunity had driven less than a meter since Sol 1713 (November 17, 2008), while it used the tools on its robotic arm first to examine a meteorite called 'Santorini' during weeks of restricted communication while the sun was nearly in line between Mars and Earth, and then to examine bedrock and soil targets near Santorini. The rover's position after the Sol 1770 drive was about 1.1 kilometers (two-thirds of a mile) south-southwest of Victoria Crater. Cumulative odometry was 13.72 kilometers (8.53 miles) since landing in January 2004, including 1.94 kilometers (1.21 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008). This view is presented as a cylindrical-perspective projection with geometric seam correction.
Opportunity View During Exploration in 'Duck Bay,' Sols 1506-1510 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11787 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11787 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings on the 1,506th through 1,510th Martian days, or sols, of Opportunity's mission on Mars (April 19-23, 2008). North is at the top. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The site is within an alcove called 'Duck Bay' in the western portion of Victoria Crater. Victoria Crater is about 800 meters (half a mile) wide. Opportunity had descended into the crater at the top of Duck Bay 7 months earlier. By the time the rover acquired this view, it had examined rock layers inside the rim. Opportunity was headed for a closer look at the base of a promontory called 'Cape Verde,' the cliff at about the 2-o'clock position of this image, before leaving Victoria. The face of Cape Verde is about 6 meters (20 feet) tall. Just clockwise from Cape Verde is the main bowl of Victoria Crater, with sand dunes at the bottom. A promontory called 'Cabo Frio,' at the southern side of Duck Bay, stands near the 6-o'clock position of the image. This view is presented as a cylindrical-perspective projection with geometric seam correction.
X-ray and optical stereo-based 3D sensor fusion system for image-guided neurosurgery.
Kim, Duk Nyeon; Chae, You Seong; Kim, Min Young
2016-04-01
In neurosurgery, an image-guided operation is performed to confirm that the surgical instruments reach the exact lesion position. Among the multiple imaging modalities, an X-ray fluoroscope mounted on a C- or O-arm is widely used for monitoring the position of surgical instruments and the target position of the patient. However, frequently used fluoroscopy can result in relatively high radiation doses, particularly for complex interventional procedures. The proposed system can reduce radiation exposure and provide accurate three-dimensional (3D) position information for both the surgical instruments and the target. X-ray and optical stereo vision systems have been proposed for the C- or O-arm. The two subsystems share the same optical axis and are calibrated simultaneously. This provides easy augmentation of the camera image and the X-ray image. Further, the 3D measurements of both systems can be defined in a common coordinate space. The proposed dual stereoscopic imaging system is designed and implemented for mounting on an O-arm. The calibration error of the 3D coordinates of the optical stereo and X-ray stereo is within 0.1 mm in terms of the mean and the standard deviation. Further, image augmentation with the camera image and the X-ray image using an artificial skull phantom is achieved. As the developed dual stereoscopic imaging system provides 3D coordinates of the point of interest in both optical images and fluoroscopic images, it can be used by surgeons to confirm the position of surgical instruments in 3D space with minimum radiation exposure and to verify whether the instruments reach the surgical target observed in fluoroscopic images.
An efficient photogrammetric stereo matching method for high-resolution images
NASA Astrophysics Data System (ADS)
Li, Yingsong; Zheng, Shunyi; Wang, Xiaonan; Ma, Hao
2016-12-01
Stereo matching of high-resolution images is a great challenge in photogrammetry. The main difficulty is the enormous processing workload, which involves substantial computing time and memory consumption. In recent years, the semi-global matching (SGM) method has been a promising approach for solving stereo problems on different data sets. However, the time complexity and memory demand of SGM are proportional to the scale of the images involved, which leads to very high consumption when dealing with large images. To address this, this paper presents an efficient hierarchical matching strategy based on the SGM algorithm using single instruction multiple data (SIMD) instructions and structured parallelism in the central processing unit. The proposed method can significantly reduce the computational time and memory required for large-scale stereo matching. The three-dimensional (3D) surface is reconstructed by triangulating and fusing redundant reconstruction information from multi-view matching results. Finally, three high-resolution aerial data sets are used to evaluate our improvement. Furthermore, precise airborne laser scanner data for one data set are used to measure the accuracy of our reconstruction. Experimental results demonstrate that our method achieves remarkable time and memory savings while maintaining the density and precision of the derived 3D point cloud.
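To illustrate the core SGM idea that the hierarchical strategy above builds on (this is a generic sketch, not the authors' implementation; the function name and the penalty values P1/P2 are illustrative), cost aggregation along a single path direction for one scanline can be written in NumPy as:

```python
import numpy as np

def aggregate_left_to_right(cost, P1=1.0, P2=8.0):
    """Semi-global matching cost aggregation along one path direction
    (left-to-right) for a single image row.

    cost: (W, D) matching-cost array (W pixels, D disparity hypotheses).
    Returns the aggregated cost L with the same shape.
    """
    W, D = cost.shape
    L = np.empty_like(cost, dtype=float)
    L[0] = cost[0]
    for x in range(1, W):
        prev = L[x - 1]
        min_prev = prev.min()
        # transitions from the previous pixel: same disparity (no penalty),
        # +-1 disparity (penalty P1), any larger jump (penalty P2)
        same = prev
        up = np.roll(prev, 1);   up[0] = np.inf    # from d-1 to d
        down = np.roll(prev, -1); down[-1] = np.inf  # from d+1 to d
        L[x] = cost[x] + np.minimum.reduce(
            [same, up + P1, down + P1, np.full(D, min_prev + P2)]
        ) - min_prev  # subtract min to keep values bounded
    return L
```

A full SGM implementation aggregates such paths from 8 or 16 directions and sums them before taking the per-pixel disparity minimum; the hierarchical strategy in the paper additionally restricts the disparity search range using coarser pyramid levels.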
Tropical to mid-latitude snow and ice accumulation, flow and glaciation on Mars
Head, J.W.; Neukum, G.; Jaumann, R.; Hiesinger, H.; Hauber, E.; Carr, M.; Masson, P.; Foing, B.; Hoffmann, H.; Kreslavsky, M.; Werner, S.; Milkovich, S.; Van Gasselt, S.
2005-01-01
Images from the Mars Express HRSC (High-Resolution Stereo Camera) of debris aprons at the base of massifs in eastern Hellas reveal numerous concentrically ridged lobate and pitted features and related evidence of extremely ice-rich glacier-like viscous flow and sublimation. Together with new evidence for recent ice-rich rock glaciers at the base of the Olympus Mons scarp superposed on larger Late Amazonian debris-covered piedmont glaciers, we interpret these deposits as evidence for geologically recent and recurring glacial activity in tropical and mid-latitude regions of Mars during periods of increased spin-axis obliquity when polar ice was mobilized and redeposited in microenvironments at lower latitudes. The data indicate that abundant residual ice probably remains in these deposits and that these records of geologically recent climate changes are accessible to future automated and human surface exploration.
NASA Technical Reports Server (NTRS)
Koehler, U.; Neukum, G.; Gasselt, S. v.; Jaumann, R.; Roatsch, Th.; Hoffmann, H.; Zender, J.; Acton, C.; Drigani, F.
2005-01-01
During the first year of operation, corresponding to the calendar year 2004, the HRSC imaging experiment onboard ESA's Mars Express mission recorded 23 Gigabytes of 8-bit compressed raw data. After processing, the amount of data increased to more than 344 Gigabytes of decompressed and radiometrically calibrated, scientifically usable image products. Every six months these HRSC Level 2 data are fed into ESA's Planetary Science Archive (PSA), which also forwards all data to the Planetary Data System (PDS) to ensure easy availability to interested users. On their respective web portals, the European Space Agency, in cooperation with the Principal Investigator group at Freie Universität Berlin and the German Space Agency (DLR), has published almost 40 sets of high-level image scenes and movies for public-relations purposes that have been viewed many hundreds of thousands of times.
Design and test of a high performance off-axis TMA telescope
NASA Astrophysics Data System (ADS)
Fan, Bin; Cai, Wei-jun; Huang, Ying
2017-11-01
A new complete Optical Demonstration Model (ODM) of a high-performance off-axis Three-Mirror Anastigmatic (TMA) telescope has been successfully developed at BISME. This visible telescope, with a 1.75-m focal length, 1/9 relative aperture and 6.2° × 1.0° field of view, using TDI CCD detectors with a 7 μm pixel size, can provide 2.0-m ground sampling distance and a 51-km swath from an altitude of 500 km. The main goals of the ODM have been reached: a compact, lightweight design that realizes high performance and high stability. The optical system and key technologies have been applied in the multispectral camera of the ZY-3 satellite (the first high-resolution stereo mapping satellite of China), which was successfully launched on January 9th, 2012. The main technologies of the ODM are described, and the test results and applications are outlined.
WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves
NASA Astrophysics Data System (ADS)
Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise
2017-10-01
Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Although simple in theory, many details are difficult for a practitioner to master, so that the implementation of a sea-wave 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation makes the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an Open-Source stereo processing pipeline for sea-wave 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques, both on the disparity map and on the produced point cloud, to remove the vast majority of erroneous points that naturally arise when analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step by step and demonstrated on real datasets acquired at sea.
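The mean sea-plane estimation step mentioned above can be illustrated with a minimal least-squares plane fit to a reconstructed point cloud; this is a generic sketch, not WASS code, and the function names are invented for illustration:

```python
import numpy as np

def fit_mean_plane(points):
    """Least-squares fit of a plane z = a*x + b*y + c to an (N, 3) point
    cloud, e.g. to estimate the mean sea plane from reconstructed wave
    points. Returns the coefficients (a, b, c)."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

def elevation_above_plane(points, coeffs):
    """Signed elevation of each point above the fitted plane; for sea-wave
    data this is the wave elevation field referenced to the mean sea plane."""
    a, b, c = coeffs
    return points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)
```

Subtracting the fitted plane turns raw camera-frame coordinates into wave elevations, which is what spectral wave analysis ultimately needs.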
Forest aboveground biomass mapping using spaceborne stereo imagery acquired by Chinese ZY-3
NASA Astrophysics Data System (ADS)
Sun, G.; Ni, W.; Zhang, Z.; Xiong, C.
2015-12-01
Besides LiDAR data, another valuable type of data that is also directly sensitive to forest vertical structure, and more suitable for regional mapping of forest biomass, is stereo imagery, or photogrammetry. Photogrammetry is the traditional technique for deriving terrain elevation. The elevation of the top of a tree canopy can be directly measured from stereo imagery, but winter images are required to obtain the elevation of the ground surface, because stereo images are acquired by optical sensors that cannot penetrate dense forest canopies under leaf-on conditions. Several spaceborne stereoscopic systems with higher spatial resolutions have been launched in the past several years. For example, the Chinese satellite Zi Yuan 3 (ZY-3), specifically designed for the collection of stereo imagery with a resolution of 3.6 m for the forward and backward views and 2.1 m for the nadir view, was launched on January 9, 2012. Our previous studies have demonstrated that spaceborne stereo imagery acquired in summer performs well in describing forest structure, while the ground surface elevation can be extracted from spaceborne stereo imagery acquired in winter. This study mainly focused on assessing the mapping of forest biomass through the combination of spaceborne stereo imagery acquired in summer and in winter. The test sites of this study are located in the Daxing Anling Mountains area, as shown in Fig.1. The Daxing Anling site is on the southern border of the boreal forest, belonging to the frigid-temperate zone of coniferous forest vegetation. The dominant tree species is Dahurian larch (Larix gmelinii). 10 scenes of ZY-3 stereo images are used in this study: 5 scenes were acquired on March 14, 2012, while the other 5 were acquired on September 7, 2012. Their spatial coverage is shown in Fig.2-a. Fig.2-b is the mosaic of nadir images acquired on 09/07/2012, while Fig.2-c is the corresponding digital surface model (DSM) derived from the stereo images acquired on 09/07/2012. Fig.2-d is the difference between the DSM derived from the stereo imagery acquired on 09/07/2012 and the digital elevation model (DEM) from the stereo imagery acquired on 03/14/2012. The detailed analysis will be given in the final report.
Bayes filter modification for drivability map estimation with observations from stereo vision
NASA Astrophysics Data System (ADS)
Panchenko, Aleksei; Prun, Viktor; Turchenkov, Dmitri
2017-02-01
Reconstruction of a drivability map for a moving vehicle is a well-known research topic in applied robotics. Here, creating such a map for an autonomous truck on a generally planar surface containing separate obstacles is considered. The source of measurements for the truck is a calibrated pair of cameras. The stereo system detects and reconstructs several types of objects, such as road borders, other vehicles, pedestrians, and general tall or highly saturated objects (e.g. road cones). To create a robust mapping module we use a modification of Bayes filtering that introduces novel techniques for the occupancy-map update step. Specifically, our modified version remains applicable in the presence of false-positive measurement errors, stereo shading and obstacle occlusion. We implemented the technique and achieved real-time computation at 15 FPS on an industrial shake-proof PC. Our real-world experiments show the positive effect of the filtering step.
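The occupancy update at the heart of a Bayes-filter mapping module is commonly carried out in log-odds form. The following is a sketch of that standard binary Bayes update, not the authors' modified filter; the inverse sensor-model probabilities are illustrative values:

```python
import numpy as np

def logodds(p):
    """Log-odds representation of a probability."""
    return np.log(p / (1.0 - p))

def update_cell(l_prior, hit, p_hit=0.7, p_miss=0.4):
    """Binary Bayes filter update for one occupancy-grid cell.

    l_prior: current log-odds of the cell being occupied.
    hit: True if the stereo system observed an obstacle in this cell.
    p_hit / p_miss: inverse sensor model (illustrative, not from the paper).
    In log-odds form the update is a simple addition.
    """
    return l_prior + logodds(p_hit if hit else p_miss)

def prob(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))
```

The additive form is what makes real-time updates cheap: each stereo frame contributes one addition per observed cell, and the paper's modifications (handling false positives, shading and occlusion) can be seen as changing which cells get updated and with what sensor model.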
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and the position of each point in three-dimensional space can then be calculated from the offset between the image points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics that relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
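For a rectified stereo pair, the ranging step described above (depth from the offset of matched points, given the camera geometry) reduces to the classic triangulation relation depth = f·B/d. A minimal sketch, with hypothetical parameter names, assuming a pinhole model and horizontally aligned cameras:

```python
def stereo_range(xl, xr, focal_px, baseline_m):
    """Range to a matched point from its horizontal image coordinates
    in a rectified stereo pair (pinhole camera model).

    xl, xr: column of the matched feature in the left/right image (pixels)
    focal_px: focal length expressed in pixels
    baseline_m: separation between the two camera centers (meters)
    """
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_m / disparity
```

For example, a 10-pixel disparity with a 500-pixel focal length and 0.2 m baseline yields a 10 m range; the inverse relationship also shows why range resolution degrades quadratically with distance.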
Stereo particle image velocimetry set up for measurements in the wake of scaled wind turbines
NASA Astrophysics Data System (ADS)
Campanardi, Gabriele; Grassi, Donato; Zanotti, Alex; Nanos, Emmanouil M.; Campagnolo, Filippo; Croce, Alessandro; Bottasso, Carlo L.
2017-08-01
Stereo particle image velocimetry measurements were carried out in the boundary-layer test section of the Politecnico di Milano large wind tunnel to survey the wake of a scaled wind turbine model designed and developed by Technische Universität München. The stereo PIV instrumentation was set up to survey the three velocity components on cross-flow planes at different longitudinal locations. The area of investigation covered the entire extent of the wind turbine's wake, which was scanned using two separate traversing systems for the laser and the cameras. This setup made it possible to rapidly obtain high-quality results suitable for characterising the behaviour of the flow field in the wake of the scaled wind turbine. This would be very useful for the evaluation of the performance of wind farm control methodologies based on wake redirection and for the validation of CFD tools.
NASA Astrophysics Data System (ADS)
Thomas, Edward; Williams, Jeremiah; Silver, Jennifer
2004-11-01
Over the past five years, the Auburn Plasma Sciences Laboratory (PSL) has applied two-dimensional particle image velocimetry (2D-PIV) techniques [E. Thomas, Phys. Plasmas, 6, 2672 (1999)] to make measurements of particle transport in dusty plasmas. Although important information was obtained from these earlier studies, the complex behavior of the charged microparticles clearly indicated that three-dimensional velocity information is needed. The PSL has recently acquired and installed a stereoscopic PIV (stereo-PIV) diagnostic tool for dusty plasma investigations [E. Thomas et al., Phys. Plasmas, L37 (2004)]. It employs a synchronized dual-laser, dual-camera system for measuring particle transport in three dimensions. Results will be presented on the initial application of stereo-PIV to dusty plasma studies. Additional results will be presented on the use of stereo-PIV for measuring the controlled interaction of two dust clouds.
NASA Astrophysics Data System (ADS)
Muller, Jan-Peter; Tao, Yu; Sidiropoulos, Panagiotis; Gwinner, Klaus; Willner, Konrad; Fanara, Lida; Waehlisch, Marita; van Gasselt, Stephan; Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Ivanov, Anton; Cantini, Federico; Wardlaw, Jessica; Morley, Jeremy; Sprinks, James; Giordano, Michele; Marsh, Stuart; Kim, Jungrack; Houghton, Robert; Bamford, Steven
2016-06-01
Understanding planetary atmosphere-surface exchange and extra-terrestrial surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay image data and derived information from different epochs, back in time to the mid 1970s, to examine changes through time, such as the recent discovery of mass movement, tracking inter-year seasonal changes and looking for occurrences of fresh craters. Within the EU FP-7 iMars project, we have developed a fully automated multi-resolution DTM processing chain, called the Coregistration ASP-Gotcha Optimised (CASP-GO), based on the open source NASA Ames Stereo Pipeline (ASP) [Tao et al., this conference], which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX & HiRISE DTMs & ORIs, DLR [Gwinner et al., 2015] have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed by [Sidiropoulos & Muller, this conference]. Using the HRSC map products (both mosaics and orbital strips) as a map base, it is being applied to many of the 400,000 level-1 EDR images taken by the 4 NASA orbital cameras - in particular, the Viking Orbiter camera (VO), the Mars Orbiter Camera (MOC), the Context Camera (CTX) and the High Resolution Imaging Science Experiment (HiRISE) - going back to 1976. A webGIS has been developed [van Gasselt et al., this conference] for displaying this time sequence of imagery and will be demonstrated showing an example from one of the HRSC quadrangle map-sheets.
Automated quality control [Sidiropoulos & Muller, 2015] techniques are applied to screen for suitable images and these are extended to detect temporal changes in features on the surface such as mass movements, streaks, spiders, impact craters, CO2 geysers and Swiss Cheese terrain. For result verification these data mining techniques are then being employed within a citizen science project within the Zooniverse family. Examples of data mining and its verification will be presented.
Identifying Surface Changes on HRSC Images of the Mars South Polar Residual CAP (sprc)
NASA Astrophysics Data System (ADS)
Putri, Alfiah Rizky Diana; Sidiropoulos, Panagiotis; Muller, Jan-Peter
2016-06-01
The surface of Mars has been an object of interest for planetary research since the launch of Mariner 4 in 1964. Since then, different cameras, such as the Viking Visual Imaging Subsystem (VIS), the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC), and the Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE), have been imaging its surface at ever higher resolution. The High Resolution Stereo Camera (HRSC) on board the European Space Agency (ESA) Mars Express has been imaging the Martian surface from 25 December 2003 to the present day. HRSC has covered 100% of the surface of Mars: about 70% of the surface with panchromatic images at 10-20 m/pixel, and about 98% at better than 100 m/pixel (Neukum et al., 2004), including the polar regions. The Martian polar regions have recently been studied intensively by analysing images taken by the Mars Express and MRO missions (Plaut et al., 2007). The South Polar Residual Cap (SPRC) does not change much in overall volume, but there are numerous examples of dynamic phenomena associated with seasonal changes in the atmosphere. In particular, we can examine the time variation of layers of solid carbon dioxide and water ice with dust deposition (Bibring, 2004), spider-like channels (Piqueux et al., 2003) and so-called Swiss Cheese Terrain (Titus et al., 2004). Because of the sublimation and deposition of water and CO2 ice each Martian year, clearly identifiable surface changes occur in this otherwise permanently icy region. In this research, good-quality HRSC images of the Martian south polar region are processed based on previous identification of the optimal coverage of clear surfaces (Campbell et al., 2015).
HRSC images of the Martian South Pole are categorized in terms of quality, time, and location to find overlapping areas, processed into high-quality Digital Terrain Models (DTMs) and Orthorectified Images (ORIs), and projected into polar stereographic projection using VICAR and GIS software from DLR (Deutsches Zentrum für Luft- und Raumfahrt, the German Aerospace Center), with modifications developed by Kim & Muller (2009). Surface changes are identified in the Mars SPRC region and analysed based on their appearance in the HRSC images.
Maximum likelihood estimation in calibrating a stereo camera setup.
Muijtjens, A M; Roos, J M; Arts, T; Hasman, A
1999-02-01
Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two X-ray cameras simultaneously. Typically, calibration of the position measurement system is obtained by registration of the images of a calibration object containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration, so that a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum Likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least-squares procedure. Compared to the standard unweighted least-squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture is larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reducing the error in this parameter.
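Finding ML estimates by a nonlinear least-squares procedure, as described above, is typically done with a Gauss-Newton style iteration. The following is a generic sketch of that standard procedure (not the authors' code), using a finite-difference Jacobian; the residual function stands in for the model of the five-parameter camera geometry:

```python
import numpy as np

def gauss_newton(residual, p0, n_iter=20, eps=1e-6):
    """Generic Gauss-Newton solver for a nonlinear least-squares problem.

    residual(p): returns the residual vector for parameter vector p
    p0: initial parameter guess
    The Jacobian is approximated by forward differences with step eps.
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        # finite-difference Jacobian, one column per parameter
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residual(p + dp) - r) / eps
        # Gauss-Newton step: solve the linearized least-squares problem
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
        if np.linalg.norm(step) < 1e-12:
            break
    return p
```

A weighted variant (multiplying residuals by the inverse measurement covariance) turns this into the ML estimator the paper compares against unweighted least squares.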
Stereo Images of Wind Tails Near Chimp
NASA Technical Reports Server (NTRS)
1997-01-01
This stereo image pair of the rock 'Chimp' was taken by the Sojourner rover's front cameras on Sol 72 (September 15). Fine-scale texture on Chimp and other rocks is clearly visible. Wind tails, oriented from lower right to upper left, are seen next to small pebbles in the foreground. These were most likely produced by wind action.
Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech).
Surface Stereo Imager on Mars, Side View
NASA Technical Reports Server (NTRS)
2008-01-01
This image is a view of NASA's Phoenix Mars Lander's Surface Stereo Imager (SSI) as seen by the lander's Robotic Arm Camera. This image was taken on the afternoon of the 116th Martian day, or sol, of the mission (September 22, 2008). The mast-mounted SSI, which provided the images used in the 360 degree panoramic view of Phoenix's landing site, is about 4 inches tall and 8 inches long. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
The robot's eyes - Stereo vision system for automated scene analysis
NASA Technical Reports Server (NTRS)
Williams, D. S.
1977-01-01
Attention is given to the robot stereo vision system, which maintains the image produced by solid-state-detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge-injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.
Resolution enhancement of tri-stereo remote sensing images by super resolution methods
NASA Astrophysics Data System (ADS)
Tuna, Caglayan; Akoguz, Alper; Unal, Gozde; Sertel, Elif
2016-10-01
Super resolution (SR) refers to the generation of a High Resolution (HR) image from a decimated, blurred, low-resolution (LR) image set, which can be either a single frame or multiple frames containing a collection of several images acquired from slightly different views of the same observation area. In this study, we propose a novel application of tri-stereo Remote Sensing (RS) satellite images to the super resolution problem. Since the tri-stereo RS images of the same observation area are acquired from three different viewing angles along the flight path of the satellite, these RS images are well suited to an SR application. We first estimate the registration between the chosen reference LR image and the other LR images to calculate the sub-pixel shifts among the LR images. Then, the warping, blurring and down-sampling matrix operators are created as sparse matrices to avoid high memory and computational requirements, which would otherwise make the RS-SR solution impractical. Finally, the overall system matrix, constructed from the obtained operator matrices, is used to obtain the estimated HR image in one step in each iteration of the SR algorithm. Both Laplacian and total variation regularizers are incorporated separately into our algorithm, and the results are presented to demonstrate improved quantitative performance against the standard interpolation method, as well as improved qualitative results according to expert evaluations.
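The reconstruction described above, a data term built from the system matrix plus a Laplacian regularizer, can be sketched in 1D with a single decimation operator. This toy omits the warping and blurring operators and uses dense rather than sparse matrices purely to keep the sketch short; all names are illustrative, not from the paper:

```python
import numpy as np

def downsample_op(n_hr, factor):
    """Decimation operator as a matrix: each LR sample is the average of
    `factor` consecutive HR samples. The paper stores such operators as
    sparse matrices; a dense matrix is used here only for brevity."""
    n_lr = n_hr // factor
    D = np.zeros((n_lr, n_hr))
    for i in range(n_lr):
        D[i, i * factor:(i + 1) * factor] = 1.0 / factor
    return D

def sr_restore(y, D, lam=0.01, alpha=1.0, n_iter=200):
    """Gradient descent on ||y - D x||^2 + lam * ||L x||^2, where L is the
    second-difference (Laplacian) regularization matrix."""
    n = D.shape[1]
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    x = D.T @ y * (D.shape[1] / D.shape[0])  # crude initial upsampling
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y) + lam * (L.T @ L @ x)
        x = x - alpha * grad
    return x
```

With multiple LR frames, as in the tri-stereo case, the data term becomes a sum over the per-frame system matrices (down-sampling, blur and warp composed), and the same gradient iteration applies.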
Correcting spacecraft jitter in HiRISE images
Sutton, S. S.; Boyd, A.K.; Kirk, Randolph L.; Cook, Debbie; Backer, Jean; Fennema, A.; Heyd, R.; McEwen, A.S.; Mirchandani, S.D.; Wu, B.; Di, K.; Oberst, J.; Karachevtseva, I.
2017-01-01
Mechanical oscillations or vibrations on spacecraft, also called pointing jitter, cause geometric distortions and/or smear in high resolution digital images acquired from orbit. Geometric distortion is especially a problem with pushbroom type sensors, such as the High Resolution Imaging Science Experiment (HiRISE) instrument on board the Mars Reconnaissance Orbiter (MRO). Geometric distortions occur at a range of frequencies that may not be obvious in the image products, but can cause problems with stereo image correlation in the production of digital elevation models, and in measuring surface changes over time in orthorectified images. The HiRISE focal plane comprises a staggered array of fourteen charge-coupled devices (CCDs) with pixel IFOV of 1 microradian. The high spatial resolution of HiRISE makes it both sensitive to, and an excellent recorder of jitter. We present an algorithm using Fourier analysis to resolve the jitter function for a HiRISE image that is then used to update instrument pointing information to remove geometric distortions from the image. Implementation of the jitter analysis and image correction is performed on selected HiRISE images. Resulting corrected images and updated pointing information are made available to the public. Results show marked reduction of geometric distortions. This work has applications to similar cameras operating now, and to the design of future instruments (such as the Europa Imaging System).
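The Fourier-analysis step, resolving the dominant oscillation frequencies from a measured line-to-line pointing-offset signal, can be sketched as a simple FFT peak pick. This is a generic illustration of the principle, not the HiRISE processing pipeline, and the names are invented:

```python
import numpy as np

def dominant_jitter_freq(offsets, sample_rate):
    """Identify the dominant oscillation frequency in a pointing-offset
    signal sampled once per image line.

    offsets: 1D array of pointing offsets (e.g. pixels) per line
    sample_rate: line rate in lines per second
    Returns the frequency (Hz) of the strongest spectral peak.
    """
    # remove the mean so the DC bin does not dominate the spectrum
    spectrum = np.abs(np.fft.rfft(offsets - offsets.mean()))
    freqs = np.fft.rfftfreq(len(offsets), d=1.0 / sample_rate)
    return freqs[spectrum.argmax()]
```

In practice the jitter function combines several frequencies and must be reconstructed with amplitude and phase (not just the peak location) before it can be folded back into the pointing kernels, but the peak pick shows how a pushbroom sensor's fast line rate makes it an effective jitter recorder.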
Calibration of the Lunar Reconnaissance Orbiter Camera
NASA Astrophysics Data System (ADS)
Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.
2008-12-01
The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible-wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e- and a full-well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The North and South poles will be mapped at a 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R, and the full-well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high-sun scene). Both NACs exhibit a straylight feature, which is caused by out-of-field sources and has a magnitude of 1-3%.
However, as this feature is well understood, it can be greatly reduced during ground processing. All three cameras were calibrated in the laboratory under ambient conditions. Future thermal-vacuum tests will characterize critical behaviors across the full range of lunar operating temperatures. In-flight tests will check for changes in response after launch and provide key data for meeting the requirements of 1% relative and 10% absolute radiometric calibration.
Small format digital photogrammetry for applications in the earth sciences
NASA Astrophysics Data System (ADS)
Rieke-Zapp, Dirk
2010-05-01
Photogrammetry is often considered one of the most precise and versatile surveying techniques. The same camera and analysis software can be used for measurements from sub-millimetre to kilometre scale. Such a measurement device is well suited for use by earth scientists working in the field, where a small toolset and a straightforward setup best fit the needs of the operator. While a digital camera is typically already part of the field equipment of an earth scientist, the main focus of the field work is often not surveying. At the same time, a lack of photogrammetric training requires an easy-to-learn, straightforward surveying technique. A photogrammetric method was therefore developed, aimed primarily at earth scientists, for taking accurate measurements in the field while minimizing the extra bulk and weight of the required equipment. The work included several challenges: A) definition of an upright coordinate system without heavy and bulky tools like a total station or GNSS sensor; B) optimization of image acquisition and the geometric stability of the image block; C) identification of a small camera suitable for precise measurements in the field; D) optimization of the workflow from image acquisition to preparation of images for stereo measurements; E) introduction of students and non-photogrammetrists to the workflow. Wooden spheres were used as target points in the field. They were more rugged than the ping-pong balls used in a previous setup and were available in different sizes. Distances between three spheres were introduced as scale information in a photogrammetric adjustment. The distances were measured with a laser distance meter accurate to 1 mm (1 sigma). The vertical angle between the spheres was measured with the same laser distance meter. The precision of this measurement was 0.3° (1 sigma), which is sufficient, i.e. better than inclination measurements with a geological compass.
The upright coordinate system is important for measuring the dip angle of geologic features in outcrop. The planimetric coordinate system would be arbitrary, but it may easily be oriented to compass north by introducing a direction measurement from a compass. Wooden spheres and a Leica Disto D3 laser distance meter added less than 0.150 kg to the field equipment, considering that a suitable digital camera was already part of it. Identification of a small digital camera suitable for precise measurements was a major part of this work. A group of cameras were calibrated several times over different periods of time on a testfield. Further evaluation involved an accuracy assessment in the field, comparing distances between signalized points calculated from a photogrammetric setup with coordinates derived from a total station survey. The smallest camera in the test required calibration on the job, as the interior orientation changed significantly between testfield calibration and use in the field. We attribute this to the fact that the lens was retracted when the camera was switched off. Fairly stable camera geometry in a compact-size camera with a lens-retracting system was accomplished for the Sigma DP1 and DP2 cameras. While the pixel count of the Sigma cameras was lower than that of the Ricoh, their pixel pitch was much larger. Hence, the same mechanical movement would have a smaller per-pixel effect for the Sigma cameras than for the Ricoh camera. A large pixel pitch may therefore compensate for some camera instability, explaining why cameras with large sensors and larger pixel pitch typically yield better accuracy in object space. Both Sigma cameras weigh approximately 0.250 kg and may even be suitable for use with ultralight aerial vehicles (UAVs), which have payload restrictions of 0.200 to 0.300 kg.
A set of other available cameras was also tested on a calibration field and on location, showing once again that it is difficult to infer geometric stability from camera specifications. With geometrically stable cameras, image acquisition to cover the area of interest with stereo pairs for analysis was fairly straightforward. We limited our tests to setups with three to five images to minimize the amount of post-processing. The laser dot of the laser distance meter was not visible to the naked eye at distances beyond 5-7 m, which limited the maximum stereo area that may be covered with this technique. Extrapolating the setup to fairly large areas showed no significant decrease in the accuracy accomplished in object space. Working with a Sigma SD14 SLR camera on a 6 × 18 × 20 m volume, the maximum length measurement error ranged between 20 and 30 mm, depending on image setup and analysis. For smaller outcrops, even the compact cameras yielded maximum length measurement errors in the mm range, which was considered sufficient for measurements in the earth sciences. In many cases the per-pixel resolution, rather than accuracy, was the limiting factor of image analysis. A field manual was developed to guide novice users and students to this technique. The technique does not trade precision for ease of use; successful users of the presented method can therefore easily grow into more advanced photogrammetric methods for high-precision applications. Originally, camera calibration was not part of the methodology for novice operators. The recent introduction of Camera Calibrator, a low-cost, well-automated software package for camera calibration, allows beginners to calibrate their camera within a couple of minutes. The complete set of calibration parameters can be applied in ERDAS LPS software, easing the workflow. Image orientation was performed in LPS 9.2 software, which was also used for further image analysis.
First light of Cassis: the stereo surface imaging system onboard the exomars TGO
NASA Astrophysics Data System (ADS)
Gambicorti, L.; Piazza, D.; Pommerol, A.; Roloff, V.; Gerber, M.; Ziethe, R.; El-Maarry, M. R.; Weigel, T.; Johnson, M.; Vernani, D.; Pelo, E.; Da Deppo, V.; Cremonese, G.; Ficai Veltroni, I.; Thomas, N.
2017-09-01
The Colour and Stereo Surface Imaging System (CaSSIS) camera was launched on 14 March 2016 onboard the ExoMars Trace Gas Orbiter (TGO) and is currently in cruise to Mars. The CaSSIS high-resolution optical system is based on a TMA (Three-Mirror Anastigmatic) telescope with a 4th powered folding mirror compacting the CFRP (Carbon Fiber Reinforced Polymer) structure. The camera EPD (Entrance Pupil Diameter) is 135 mm and the focal length is 880 mm, giving an F/6.5 system; the wavelength range covered by the instrument is 400-1100 nm. The optical system is designed to have distortion of less than 2% and a worst-case Modulation Transfer Function (MTF) of 0.3 at the detector Nyquist spatial frequency (i.e. 50 lp/mm). The Focal Plane Assembly (FPA), including the detector, is a spare from the Simbio-Sys instrument of the Italian Space Agency (ASI); Simbio-Sys will fly on ESA's BepiColombo mission to Mercury in 2018. The detector, developed by Raytheon Vision Systems, is a 2k × 2k hybrid Si-PIN array with 10 μm pixel pitch. The detector allows snapshot operation at a read-out rate of 5 Mpx/s with 14-bit resolution. CaSSIS will operate in a push-frame mode with a Filter Strip Assembly (FSA), placed directly above the detector's sensitive area, selecting 4 colour bands. From the nominal orbit, the slant image scale of 4.6 m/px is foreseen to produce frames of 9.4 km × 6.3 km on the Martian surface, covering a Field of View (FoV) of 1.33° cross-track × 0.88° along-track. The University of Bern was in charge of the full instrument integration as well as the characterisation of the focal plane of CaSSIS. This paper presents an overview of CaSSIS and the optical performance of the telescope and the FPA. The preliminary results of the on-ground calibration campaign and the first light obtained during the commissioning and pointing campaign (April 2016) are described in detail.
The instrument is acquiring images with an average Point Spread Function at Full-Width-Half-Maximum (PSF FWHM) of < 1.5 px, as expected.
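The quoted image scale and field of view follow directly from the stated focal length and pixel pitch; a quick back-of-the-envelope check, assuming a ~400 km TGO orbit altitude (an assumption, not stated in the abstract):

```python
import math

focal_len = 0.880        # m (880 mm, from the abstract)
pixel_pitch = 10e-6      # m (10 um pitch, from the abstract)
npix = 2048              # detector width (2k x 2k array)
altitude = 400e3         # m; assumed nominal TGO orbit altitude

ifov = pixel_pitch / focal_len                    # instantaneous FoV, rad per pixel
gsd = altitude * ifov                             # ground sampling distance, m/px
fov = math.degrees(2 * math.atan(npix * pixel_pitch / (2 * focal_len)))

print(f"GSD ~ {gsd:.2f} m/px, cross-track FoV ~ {fov:.2f} deg")
```

The results (~4.5 m/px and ~1.33°) agree with the 4.6 m/px slant scale and 1.33° cross-track FoV quoted above.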
NASA Astrophysics Data System (ADS)
Gupta, S.; Barnes, R.; Ortner, T.; Huber, B.; Paar, G.; Muller, J. P.; Giordano, M.; Willner, K.; Traxler, C.; Juhart, K.; Fritz, L.; Hesina, G.; Tasdelen, E.
2015-12-01
NASA's Mars Exploration Rovers (MER) and the Mars Science Laboratory Curiosity rover (MSL) are proxies for field geologists on Mars, taking high-resolution imagery of rock formations and landscapes which is analysed in detail on Earth. Panoramic digital cameras (Pancam on MER and Mastcam on MSL) are used for characterising the geology of rock outcrops along rover traverses. A key focus is on sedimentary rocks that have the potential to contain evidence of ancient life on Mars. Clues for determining ancient sedimentary environments are preserved in layer geometries, sedimentary structures and grain size distributions. The panoramic camera systems take stereo images which are co-registered to create 3D point clouds of rock outcrops that can be quantitatively analysed much as geologists would do on Earth. The EU FP7 PRoViDE project is compiling all Mars rover vision data into a database accessible through a web-GIS (PRoGIS) and a 3D viewer (PRo3D). Stereo imagery selected in PRoGIS can be rendered in PRo3D, enabling the user to zoom, rotate and translate the 3D outcrop model. Interpretations can be digitised directly onto the 3D surface, and simple measurements can be taken of the dimensions of the outcrop and its sedimentary features. Dip and strike are calculated within PRo3D from mapped bedding contacts and fracture traces. Results from multiple outcrops can be integrated in PRoGIS to gain a detailed understanding of the geological features within an area. These tools have been tested on three case studies: Victoria Crater, Yellowknife Bay and Shaler. Victoria Crater, in the Meridiani Planum region of Mars, was visited by the MER-B Opportunity rover. Erosional widening of the crater produced <15 m high outcrops which expose ancient Martian eolian bedforms. Yellowknife Bay and Shaler were visited in the early stages of the MSL mission, and provide excellent opportunities to characterise Martian fluvio-lacustrine sedimentary features.
Development of these tools is crucial to the exploitation of vision data from future missions, such as the 2018 ExoMars rover and NASA's Mars 2020 mission. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 PRoViDE.
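The dip-and-strike computation from digitised bedding contacts can be illustrated with a least-squares plane fit. This is a generic sketch, not PRo3D's actual implementation; the axis convention (x east, y north, z up) is assumed.

```python
import numpy as np

def dip_and_strike(points):
    """Fit a plane through 3-D contact points by SVD and return the dip angle
    and dip direction (azimuth of steepest descent) in degrees.
    Assumes x = east, y = north, z = up."""
    P = np.asarray(points, float)
    c = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - c)
    n = vt[-1]                       # unit normal of the best-fit plane
    if n[2] < 0:                     # make the normal point upward
        n = -n
    dip = np.degrees(np.arccos(n[2]))                     # tilt from horizontal
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0  # down-dip azimuth
    return dip, dip_dir

# Synthetic bedding contact dipping 30 degrees toward the east (azimuth 090):
s = np.tan(np.radians(30))
pts = [(0, 0, 0), (0, 1, 0), (1, 0, -s), (1, 1, -s)]
dip, dd = dip_and_strike(pts)
```

The strike follows as the dip direction minus 90° under the right-hand rule; with mapped points from a real outcrop the SVD fit also gives a residual that indicates how planar the contact actually is.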
NASA Astrophysics Data System (ADS)
Arvidson, R. E.; Squyres, S. W.; Baumgartner, E. T.; Schenker, P. S.; Niebur, C. S.; Larsen, K. W.; Seelos, F. P., IV; Snider, N. O.; Jolliff, B. L.
2002-08-01
The Field Integration Design and Operations (FIDO) prototype Mars rover was deployed and operated remotely for 2 weeks in May 2000 in the Black Rock Summit area of Nevada. The blind science operation trials were designed to evaluate the extent to which FIDO-class rovers can be used to conduct traverse science and collect samples. FIDO-based instruments included stereo cameras for navigation and imaging, an infrared point spectrometer, a color microscopic imager for characterization of rocks and soils, and a rock drill for core acquisition. Body-mounted "belly" cameras aided drill deployment, and front and rear hazard cameras enabled terrain hazard avoidance. Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) data, a high spatial resolution IKONOS orbital image, and a suite of descent images were used to provide regional- and local-scale terrain and rock type information, from which hypotheses were developed for testing during operations. The rover visited three sites, traversed 30 m, and acquired 1.3 gigabytes of data. The relatively small traverse distance resulted from a geologically rich site in which materials identified on a regional scale from remote-sensing data could be identified on a local scale using rover-based data. Results demonstrate the synergy of mapping terrain from orbit and during descent using imaging and spectroscopy, followed by a rover mission to test inferences and to make discoveries that can be accomplished only with surface mobility systems.
Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters
NASA Astrophysics Data System (ADS)
Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai
2016-04-01
Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to implement lunar surface sampling and to return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a rotating camera platform. Optical images of the sampling area can be obtained by PCAM in the form of two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images. The lunar terrain can then be reconstructed based on photogrammetry. The installation parameters of PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied for this work. The observation program and the specific solution methods for the installation parameters are then introduced. The accuracy of the parametric solution is analyzed using observations obtained in the PCAM scientific validation experiment, which is used to test the authenticity of the PCAM detection process, ground data processing methods, product quality, and so on. Analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images within 1 pixel. The measurement methods and parameter accuracy studied in this paper therefore meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion
NASA Technical Reports Server (NTRS)
2003-01-01
Dark smoke from oil fires extends for about 60 kilometers south of Iraq's capital city of Baghdad in these images acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 2, 2003. The thick, almost black smoke is apparent near image center and contains chemical and particulate components hazardous to human health and the environment. The top panel is from MISR's vertical-viewing (nadir) camera. Vegetated areas appear red here because this display is constructed using near-infrared, red and blue band data, displayed as red, green and blue, respectively, to produce a false-color image. The bottom panel is a combination of two camera views of the same area and is a 3-D stereo anaglyph in which red-band nadir camera data are displayed as red, and red-band data from the 60-degree backward-viewing camera are displayed as green and blue. Both panels are oriented with north to the left in order to facilitate stereo viewing. Viewing the 3-D anaglyph with red/blue glasses (with the red filter placed over the left eye and the blue filter over the right) makes it possible to see the rising smoke against the surface terrain. This technique helps to distinguish features in the atmosphere from those on the surface. In addition to the smoke, several high, thin cirrus clouds (barely visible in the nadir view) are readily observed using the stereo image. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17489. The panels cover an area of about 187 kilometers x 123 kilometers, and use data from blocks 63 to 65 within World Reference System-2 path 168. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC.
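The anaglyph construction described here (nadir red band to the red channel, backward-view red band to green and blue) is simple to reproduce; a minimal sketch with small synthetic arrays standing in for the MISR red-band images:

```python
import numpy as np

def red_cyan_anaglyph(nadir_red, backward_red):
    """Build a red/cyan stereo anaglyph: the nadir-camera red band goes to the
    red channel, the backward-looking camera's red band to green and blue."""
    h, w = nadir_red.shape
    out = np.empty((h, w, 3), nadir_red.dtype)
    out[..., 0] = nadir_red        # red channel: nadir view (left eye)
    out[..., 1] = backward_red     # green channel: 60-degree backward view
    out[..., 2] = backward_red     # blue channel: same backward view
    return out

# Toy 8-bit bands in place of the real MISR imagery:
np.random.seed(0)
nadir = np.random.randint(0, 256, (4, 4)).astype(np.uint8)
back = np.random.randint(0, 256, (4, 4)).astype(np.uint8)
ana = red_cyan_anaglyph(nadir, back)
```

Because the two views are separated by viewing angle rather than by two cameras side by side, parallax in the anaglyph encodes feature height, which is why smoke and cirrus stand out against the terrain.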
The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors
Song, Yu; Nuske, Stephen; Scherer, Sebastian
2016-01-01
State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must deal with aggressive 6-DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System), even GPS-denied, situations; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts. The first is a stochastic-cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry); it is derived and discussed in detail. The second is a long-range stereo visual odometry, proposed for high-altitude MAV odometry calculation, that uses both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimate. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight. PMID:28025524
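The loose-coupling idea (relative measurements drive the prediction, absolute measurements drive the correction) can be reduced to a one-dimensional Kalman-filter sketch. The paper's stochastic-cloning EKF over the full 6-DOF state is considerably richer; the noise variances and motion model here are invented for illustration.

```python
import numpy as np

x, P = 0.0, 1.0        # scalar position state and its variance
Q, R = 0.05, 4.0       # odometry and GPS noise variances (invented)

def predict(x, P, dx_rel):
    """Integrate a relative measurement (visual-odometry / IMU increment)."""
    return x + dx_rel, P + Q

def update(x, P, z_abs):
    """Correct with an absolute measurement (GPS fix or barometric altitude)."""
    K = P / (P + R)                  # Kalman gain for a direct observation
    return x + K * (z_abs - x), (1.0 - K) * P

rng = np.random.default_rng(1)
truth = 0.0
for _ in range(200):
    truth += 0.1                                          # true motion per step
    x, P = predict(x, P, 0.1 + rng.normal(0, Q ** 0.5))   # noisy relative increment
    x, P = update(x, P, truth + rng.normal(0, R ** 0.5))  # noisy absolute fix
```

The design choice this illustrates: relative measurements alone drift without bound (P grows by Q every step), while even infrequent absolute fixes bound the variance, which is what keeps the estimator usable through intermittent-GPS stretches.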
Acquisition of stereo panoramas for display in VR environments
NASA Astrophysics Data System (ADS)
Ainsworth, Richard A.; Sandin, Daniel J.; Schulze, Jurgen P.; Prudhomme, Andrew; DeFanti, Thomas A.; Srinivasan, Madhusudhanan
2011-03-01
Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.
Mapping snow depth in open alpine terrain from stereo satellite imagery
NASA Astrophysics Data System (ADS)
Marti, R.; Gascoin, S.; Berthier, E.; de Pinel, M.; Houet, T.; Laffly, D.
2016-07-01
To date, there is no definitive approach to map snow depth in mountainous areas from spaceborne sensors. Here, we examine the potential of very-high-resolution (VHR) optical stereo satellites for this purpose. Two triplets of 0.70 m resolution images were acquired by the Pléiades satellite over an open alpine catchment (14.5 km2) under snow-free and snow-covered conditions. The open-source software Ames Stereo Pipeline (ASP) was used to match the stereo pairs without ground control points, to generate raw photogrammetric clouds and to convert them into high-resolution digital elevation models (DEMs) at 1, 2, and 4 m resolutions. The DEM differences (dDEMs) were computed after 3-D coregistration, including a correction of a -0.48 m vertical bias. The bias-corrected dDEM maps were compared to 451 snow-probe measurements. The results show decimetric accuracy and precision in the Pléiades-derived snow depths. The median of the residuals is -0.16 m, with a standard deviation (SD) of 0.58 m at a pixel size of 2 m. We compared the 2 m Pléiades dDEM to a 2 m dDEM based on a winged unmanned aerial vehicle (UAV) photogrammetric survey performed on the same winter date over a portion of the catchment (3.1 km2). The UAV-derived snow depth map exhibits the same patterns as the Pléiades-derived snow map, with a median of -0.11 m and a SD of 0.62 m when compared to the snow-probe measurements. The Pléiades images benefit from a very broad radiometric range (12 bits), allowing a high correlation success rate over the snow-covered areas. This study demonstrates the value of VHR stereo satellite imagery for mapping snow depth in remote mountainous areas, even when no field data are available.
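The dDEM workflow reduces to a subtraction with a bias correction, followed by residual statistics against the probe measurements. The sketch below uses synthetic surfaces with the paper's -0.48 m bias and 0.58 m noise level plugged in; the terrain and snow-depth values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
dem_snow_free = rng.uniform(1500.0, 2500.0, (100, 100))  # snow-free surface, m
true_depth = rng.uniform(0.0, 3.0, (100, 100))           # synthetic snow depth, m
bias = -0.48                                             # vertical bias (from the paper)
noise = rng.normal(0.0, 0.58, (100, 100))                # photogrammetric error, m
dem_snow = dem_snow_free + true_depth + bias + noise     # snow-covered DEM

ddem = dem_snow - dem_snow_free      # raw elevation difference
snow_depth = ddem - bias             # bias-corrected snow-depth map

residuals = snow_depth - true_depth  # stand-in for the probe comparison
print(f"median {np.median(residuals):.2f} m, SD {residuals.std():.2f} m")
```

In the real workflow the bias is estimated over stable (snow-free) terrain after 3-D coregistration of the two DEMs, then applied catchment-wide; the residual statistics are what the paper reports against the 451 probes.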
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved drastically in response to the demand for high-quality digital images; a digital still camera, for example, now has several megapixels. A video camera has a higher frame rate, but its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is instead to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, C; Xing, L; Yu, S
Purpose: A correct body contour is essential for the accuracy of dose calculation in radiation therapy. While modern medical imaging technologies provide highly accurate representations of body contours, there are times when a patient's anatomy cannot be fully captured or there is a lack of easy access to CT/MRI scanning. Recently, handheld cameras have emerged that are capable of performing three-dimensional (3D) scans of patient surface anatomy. By combining 3D camera and medical imaging data, the patient's surface contour can be fully captured. Methods: A proof-of-concept system matches a patient surface model, created using a handheld stereo depth camera (DC), to the available areas of a body contour segmented from a CT scan. The matched surface contour is then converted to a DICOM structure and added to the CT dataset to provide additional contour information. In order to evaluate the system, a 3D model of a patient was created by segmenting the body contour with a treatment planning system (TPS) and fabricated with a 3D printer. A DC and associated software were used to create a 3D scan of the printed phantom. The surface created by the camera was then registered to a CT model that had been cropped to simulate missing scan data. The aligned surface was then imported into the TPS and compared with the originally segmented contour. Results: The RMS error for the alignment between the camera and cropped CT models was 2.26 mm. The mean distance between the aligned camera surface and the ground truth model was −1.23 +/− 2.47 mm. Maximum deviations were < 1 cm and occurred in areas of high concavity or where anatomy was close to the couch. Conclusion: This proof-of-concept study shows an accurate, easy and affordable method to extend medical imaging for radiation therapy planning using 3D cameras without additional radiation. Intel provided the camera hardware used in this study.
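The camera-to-CT surface registration is described as a modified iterative closest point (ICP) algorithm. The rigid-alignment core of each ICP iteration, for point pairs with known correspondences, can be sketched with the Kabsch algorithm; this is a standard choice, not necessarily the authors' exact variant, and the data below are synthetic.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst for
    point sets with known correspondences (the inner step of an ICP loop)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# Synthetic check: recover a known rotation and translation.
rng = np.random.default_rng(2)
src = rng.normal(size=(50, 3))
theta = np.radians(20.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch(src, dst)
rms = np.sqrt(((src @ R.T + t - dst) ** 2).sum(axis=1).mean())
```

A full ICP wraps this in a loop that re-estimates correspondences (nearest neighbors from camera surface to CT surface) before each alignment step; the RMS value computed here is the same kind of alignment metric the abstract reports (2.26 mm for the phantom).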
Kirk, R.L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E.M.; Gaddis, L.R.; Johnson, J. R.; Soderblom, L.A.; Ward, A.W.; Smith, P.H.; Britt, D.T.
1999-01-01
This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ~10³ features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ~3 × 10⁵ closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used. Copyright 1999 by the American Geophysical Union.
Human Stereopsis is not Limited by the Optics of the Well-focused Eye
Vlaskamp, Björn N.S.; Yoon, Geunyoung; Banks, Martin S.
2011-01-01
Human stereopsis—the perception of depth from differences in the two eyes’ images—is very precise: Image differences smaller than a single photoreceptor can be converted into a perceived difference in depth. To better understand what determines this precision, we examined how the eyes’ optics affects stereo resolution. We did this by comparing performance with normal, well-focused optics and with optics improved by eliminating chromatic aberration and correcting higher-order aberrations. We first measured luminance contrast sensitivity in both eyes and showed that we had indeed improved optical quality significantly. We then measured stereo resolution in two ways: by finding the finest corrugation in depth that one can perceive, and by finding the smallest disparity one can perceive as different from zero. Our optical manipulation had no effect on stereo performance. We checked this by redoing the experiments at low contrast and again found no effect of improving optical quality. Thus, the resolution of human stereopsis is not limited by the optics of the well-focused eye. We discuss the implications of this remarkable finding. PMID:21734272
Endoscopic laser range scanner for minimally invasive, image guided kidney surgery
NASA Astrophysics Data System (ADS)
Friets, Eric; Bieszczad, Jerry; Kynor, David; Norris, James; Davis, Brynmor; Allen, Lindsay; Chambers, Robert; Wolf, Jacob; Glisson, Courtenay; Herrell, S. Duke; Galloway, Robert L.
2013-03-01
Image guided surgery (IGS) has led to significant advances in surgical procedures and outcomes. Endoscopic IGS is hindered, however, by the lack of suitable intraoperative scanning technology for registration with preoperative tomographic image data. This paper describes the implementation of an endoscopic laser range scanner (eLRS) system for accurate, intraoperative mapping of the kidney surface, registration of the measured kidney surface with preoperative tomographic images, and interactive image-based surgical guidance for subsurface lesion targeting. The eLRS comprises a standard stereo endoscope coupled to a steerable laser, which scans a laser fan beam across the kidney surface, and a high-speed color camera, which records the laser-illuminated pixel locations on the kidney. Through calibrated triangulation, a dense set of 3-D surface coordinates is determined. At maximum resolution, the eLRS acquires over 300,000 surface points in less than 15 seconds; lower-resolution scans of 27,500 points are acquired in one second. The measurement accuracy of the eLRS, determined through scanning of reference planar and spherical phantoms, is estimated to be 0.38 +/- 0.27 mm at a range of 2 to 6 cm. Registration of the scanned kidney surface with preoperative image data is achieved using a modified iterative closest point algorithm. Surgical guidance is provided through graphical overlay of the boundaries of subsurface lesions, vasculature, ducts, and other renal structures labeled in the CT or MR images onto the eLRS camera image. Depth to these subsurface targets is also displayed. Proof of clinical feasibility was established in an explanted perfused porcine kidney experiment.
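The range-by-triangulation principle behind the eLRS can be sketched in two dimensions: a laser ray at a known steering angle is intersected with the ray through the illuminated camera pixel. The baseline and target geometry below are illustrative, not the instrument's calibration.

```python
import numpy as np

baseline = 0.004    # m; assumed laser-to-camera separation, not the real value

def triangulate(alpha, beta, b=baseline):
    """Intersect the steered laser ray (from the origin, at angle alpha above
    the baseline) with the camera ray (from (b, 0), at angle beta).
    Returns the (x, z) coordinates of the illuminated surface point."""
    ta, tb = np.tan(alpha), np.tan(beta)
    x = b * tb / (ta + tb)
    return x, ta * x

# Synthetic check: a surface point 4 cm away, above the middle of the baseline.
P = np.array([baseline / 2, 0.04])
alpha = np.arctan2(P[1], P[0])            # laser steering angle
beta = np.arctan2(P[1], baseline - P[0])  # angle seen by the camera pixel
x, z = triangulate(alpha, beta)
```

Sweeping the fan beam and repeating this intersection for every illuminated pixel in every frame is what yields the dense point sets quoted above; range accuracy degrades as the two rays become nearly parallel, which is why the short baseline still works at the 2 to 6 cm ranges involved.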
A New Lunar Digital Elevation Model from the Lunar Orbiter Laser Altimeter and SELENE Terrain Camera
NASA Technical Reports Server (NTRS)
Barker, M. K.; Mazarico, E.; Neumann, G. A.; Zuber, M. T.; Haruyama, J.; Smith, D. E.
2015-01-01
We present an improved lunar digital elevation model (DEM) covering latitudes within ±60°, at a horizontal resolution of 512 pixels per degree (~60 m at the equator) and a typical vertical accuracy of ~3 to 4 m. This DEM is constructed from ~4.5 × 10^9 geodetically-accurate topographic heights from the Lunar Orbiter Laser Altimeter (LOLA) onboard the Lunar Reconnaissance Orbiter, to which we co-registered 43,200 stereo-derived DEMs (each 1° × 1°) from the SELENE Terrain Camera (TC) (~10^10 pixels total). After co-registration, approximately 90% of the TC DEMs show root-mean-square vertical residuals with the LOLA data of < 5 m, compared to ~50% prior to co-registration. We use the co-registered TC data to estimate and correct orbital and pointing geolocation errors in the LOLA altimetric profiles (typically amounting to < 10 m horizontally and < 1 m vertically). By combining both co-registered datasets, we obtain a near-global DEM with high geodetic accuracy, without the need for surface interpolation. We evaluate the resulting LOLA + TC merged DEM (designated "SLDEM2015") with particular attention to quantifying seams and crossover errors.
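Co-registration quality of the kind reported here is judged by the RMS vertical residual between each TC tile and the LOLA reference. A toy sketch, with a single vertical-bias parameter standing in for the full 3-D co-registration described in the abstract:

```python
import numpy as np

def rms_residual(dem_a, dem_b):
    """Root-mean-square vertical difference between two height arrays."""
    return float(np.sqrt(np.mean((dem_a - dem_b) ** 2)))

def coregister_vertical(dem_tile, reference):
    """Remove the mean vertical offset between a tile and the reference
    (a 1-parameter stand-in for the full co-registration)."""
    return dem_tile - np.mean(dem_tile - reference)

ref = np.array([10.0, 12.0, 11.0, 13.0])      # reference heights (m)
tile = ref + 4.0                               # tile biased 4 m high
before = rms_residual(tile, ref)               # 4.0 m
after = rms_residual(coregister_vertical(tile, ref), ref)
```

In the real pipeline the adjustment is solved over horizontal shifts as well, but the residual metric is the same.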
AOTF near-IR spectrometers for study of Lunar and Martian surface composition
NASA Astrophysics Data System (ADS)
Ivanov, A.; Korablev, O.; Mantsevich, S.; Vyazovetskiy, N.; Fedorova, A.; Evdokimova, N.; Stepanov, A.; Titov, A.; Kalinnikov, Y.; Kuzmin, R.; Kiselev, A.; Bazilevsky, A.; Bondarenko, A.; Dokuchaev, I.; Moiseev, P.; Victorov, A.; Berezhnoy, A.; Skorov, Y.; Bisikalo, D.; Velikodsky, Y.
2014-04-01
The series of AOTF near-IR spectrometers is being developed at the Moscow Space Research Institute for the study of lunar and Martian surface composition in the vicinity of a lander or rover. The Lunar Infrared Spectrometer (LIS) is an experiment onboard the Russian Luna-Glob (launch in 2017) and Luna-Resurs (launch in 2019) surface missions. It is a pencil-beam spectrometer to be pointed by a robotic arm of the landing module. The instrument's field of view (FOV) of 1° is co-aligned with the FOV (45°) of a stereo TV camera. The Infrared Spectrometer for ExoMars (ISEM) is an experiment onboard the ExoMars (launch in 2018) ESA-Roscosmos rover. It is a spectrometer based on LIS, with the redesign required for the ExoMars mission. The ISEM instrument is mounted on the rover's mast, co-aligned with the FOV (5°) of the High Resolution Camera (HRC). Both spectrometers are intended for study of the surface composition in the vicinity of the lander and rover, and will provide measurements of selected surface areas in the spectral range of 1.15-3.3 μm. Spectral selection is provided by an acousto-optic tunable filter (AOTF), which scans the spectral range sequentially. Electrical command of the AOTF allows selection of the spectral sampling and permits random access if needed.
Cross-modal face recognition using multi-matcher face scores
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2015-05-01
The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face is a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies were implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal images) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with three cross-matched face scores from the aforementioned three algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logical regression [BLR]) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces by using three face scores and the BLR classifier.
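The score-level classification step can be sketched with a plain k-nearest-neighbor vote over 3-element score vectors (a minimal stand-in for the classifiers named above; the score values below are invented, not from the paper's dataset):

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify one 3-element score vector by majority vote of its
    k nearest training vectors (Euclidean distance)."""
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

# Toy training set: genuine matches score high on all three algorithms,
# impostors score low.
X = np.array([[0.90, 0.80, 0.85], [0.88, 0.90, 0.80], [0.92, 0.85, 0.90],
              [0.20, 0.30, 0.25], [0.10, 0.20, 0.15], [0.30, 0.25, 0.20]])
y = np.array([1, 1, 1, 0, 0, 0])
pred = knn_predict(X, y, np.array([0.87, 0.84, 0.88]))  # genuine match
```

In the evaluation described above, training and testing would be repeated over 10 cross-validation folds rather than a single split.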
NASA Technical Reports Server (NTRS)
Meyn, Larry A.; Bennett, Mark S.
1993-01-01
A description is presented of two enhancements for a two-camera, video imaging system that increase the accuracy and efficiency of the system when applied to the determination of three-dimensional locations of points along a continuous line. These enhancements increase the utility of the system when extracting quantitative data from surface and off-body flow visualizations. The first enhancement utilizes epipolar geometry to resolve the stereo "correspondence" problem. This is the problem of determining, unambiguously, corresponding points in the stereo images of objects that do not have visible reference points. The second enhancement is a method to automatically identify and trace the core of a vortex in a digital image. This is accomplished by means of an adaptive template matching algorithm. The system was used to determine the trajectory of a vortex generated by the Leading-Edge eXtension (LEX) of a full-scale F/A-18 aircraft tested in the NASA Ames 80- by 120-Foot Wind Tunnel. The system accuracy for resolving the vortex trajectories is estimated to be +/-2 inches over a distance of 60 feet. Stereo images of some of the vortex trajectories are presented. The system was also used to determine the point where the LEX vortex "bursts". The vortex burst point locations are compared with those measured in small-scale tests and in flight and found to be in good agreement.
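The epipolar disambiguation in the first enhancement amounts to choosing, among candidate points in the right image, the one closest to the epipolar line of the left-image point. A generic sketch (the fundamental matrix below corresponds to an idealized rectified pair, not the wind-tunnel calibration):

```python
import numpy as np

def epipolar_distance(F, x_left, x_right):
    """Distance of homogeneous point x_right from the epipolar line F @ x_left."""
    line = F @ x_left
    return abs(line @ x_right) / np.hypot(line[0], line[1])

# Rectified-style fundamental matrix: corresponding points share an image row.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
x_l = np.array([100.0, 50.0, 1.0])
candidates = [np.array([140.0, 50.0, 1.0]),   # on the epipolar line
              np.array([120.0, 80.0, 1.0])]   # 30 px off the line
best = min(candidates, key=lambda c: epipolar_distance(F, x_l, c))
```

Points on a featureless continuous line are then matched by this constraint alone, since no distinguishing reference marks exist.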
NASA Astrophysics Data System (ADS)
Schroeder, Walter; Schulze, Wolfram; Wetter, Thomas; Chen, Chi-Hsien
2008-08-01
Three-dimensional (3D) body surface reconstruction is an important field in health care. A popular method for this purpose is laser scanning. However, using Photometric Stereo (PS) to record lumbar lordosis and the surface contour of the back poses a viable alternative due to its lower costs and higher flexibility compared to laser techniques and other methods of three-dimensional body surface reconstruction. In this work, we extended the traditional PS method and proposed a new method for obtaining surface and volume data of a moving object. The principle of traditional Photometric Stereo uses at least three images of a static object taken under different light sources to obtain 3D information of the object. Instead of using normal light, the light sources in the proposed method consist of the RGB color model's three colors: red, green and blue. A series of pictures taken with a video camera can then be separated into the different color channels. Each set of three images can then be used to calculate the surface normals as in traditional PS. This method removes the requirement, common to almost all other body surface reconstruction methods, that the imaged object be kept still. By placing two cameras on opposite sides of a moving object and lighting the object with the colored light, the time-varying surface (4D) data can easily be calculated. The obtained information can be used in many medical fields such as rehabilitation, diabetes screening or orthopedics.
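The per-pixel computation in photometric stereo solves a 3×3 linear system relating the three channel intensities to the known light directions, under a Lambertian reflectance assumption. A minimal sketch with invented light and intensity values:

```python
import numpy as np

def ps_normal(L, intensities):
    """Recover a unit surface normal from three intensity samples taken under
    three known light directions (rows of L), assuming Lambertian shading:
    I = albedo * (L @ n). Solving g = L^-1 @ I gives albedo * n."""
    g = np.linalg.solve(L, intensities)
    return g / np.linalg.norm(g)

# Three mutually orthogonal lights (one per color channel in the RGB scheme);
# a surface patch facing the third light reflects only in that channel.
L = np.eye(3)
n = ps_normal(L, np.array([0.0, 0.0, 0.7]))  # normal along +z
```

With the RGB encoding above, all three "images" arrive in a single video frame, which is what permits per-frame normals on a moving subject.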
Ranging Apparatus and Method Implementing Stereo Vision System
NASA Technical Reports Server (NTRS)
Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
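Once the laser spot has been isolated in both images, the stereometric range follows from the standard pinhole relation depth = f·B/d. A sketch with hypothetical rig parameters (not values from the patent):

```python
def range_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard stereo range equation: depth = focal length * baseline / disparity,
    with focal length and disparity in pixels and baseline in meters."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 800 px focal length, 12 cm camera baseline,
# 16 px horizontal disparity between the two laser-spot images.
z = range_from_disparity(800.0, 0.12, 16.0)
```

The background-subtraction step described above exists precisely so that the disparity of a single unambiguous feature (the laser spot) can be measured reliably.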
Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.
Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo
2016-09-14
Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments. PMID:27649178
Structured light stereo catadioptric scanner based on a spherical mirror
NASA Astrophysics Data System (ADS)
Barone, S.; Neri, P.; Paoli, A.; Razionale, A. V.
2018-08-01
The present paper describes the development and characterization of a structured light stereo catadioptric scanner for the omnidirectional reconstruction of internal surfaces. The proposed approach integrates two digital cameras, a multimedia projector and a spherical mirror, which is used to project the structured light patterns generated by the light emitter and, at the same time, to reflect into the cameras the modulated fringe patterns diffused from the target surface. The adopted optical setup defines a non-central catadioptric system, thus relaxing any geometrical constraint on the relative placement of the optical devices. An analytical solution for the reflection on a spherical surface is proposed with the aim of modelling forward and backward projection tasks for a non-central catadioptric setup. The feasibility of the proposed active catadioptric scanner has been verified by reconstructing various target surfaces. Results demonstrated a strong influence of the target surface's distance from the mirror's centre on the measurement accuracy. The adopted optical configuration allows the definition of a metrological 3D scanner for surfaces located within 120 mm of the mirror's centre.
Lip boundary detection techniques using color and depth information
NASA Astrophysics Data System (ADS)
Kim, Gwang-Myung; Yoon, Sung H.; Kim, Jung H.; Hur, Gi Taek
2002-01-01
This paper presents our approach to using a stereo camera to obtain 3-D image data to be used to improve existing lip boundary detection techniques. We show that depth information as provided by our approach can be used to significantly improve boundary detection systems. Our system detects the face and mouth area in the image by using color, geometric location, and additional depth information for the face. Initially, color and depth information can be used to localize the face. Then we can determine the lip region from the intensity information and the detected eye locations. The system has successfully been used to extract approximate lip regions using RGB color information of the mouth area. Merely using color information is not robust because the quality of the results may vary depending on lighting conditions, background, and the subject's skin tone. To overcome this problem, we used a stereo camera to obtain 3-D facial images. 3-D data constructed from the depth information along with color information can provide more accurate lip boundary detection results as compared to color-only based techniques.
LROC Targeted Observations for the Next Generation of Scientific Exploration
NASA Astrophysics Data System (ADS)
Jolliff, B. L.
2015-12-01
Imaging of the Moon at high spatial resolution (0.5 to 2 mpp) by the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Cameras (NAC) plus topographic data derived from LROC NAC and WAC (Wide Angle Camera) and LOLA (Lunar Orbiting Laser Altimeter), coupled with recently obtained hyperspectral NIR and thermal data, permit studies of composition, mineralogy, and geologic context at essentially an outcrop scale. Such studies pave the way for future landed and sample return missions for high science priority targets. Among such targets are (1) the youngest volcanic rocks on the Moon, including mare basalts formed as recently as ~1 Ga, and irregular mare patches (IMPs) that appear to be even younger [1]; (2) volcanic rocks and complexes with compositions more silica-rich than mare basalts [2-4]; (3) differentiated impact-melt deposits [5,6], ancient volcanics, and compositional anomalies within the South Pole-Aitken basin; (4) exposures of recently discovered key crustal rock types in uplifted structures such as essentially pure anorthosite [7] and spinel-rich rocks [8]; and (5) frozen volatile-element-rich deposits in polar areas [9]. Important data sets include feature sequences of paired NAC images obtained under similar illumination conditions, NAC geometric stereo, from which high-resolution DTMs can be made, and photometric sequences useful for assessing composition in areas of mature cover soils. Examples of each of these target types will be discussed in context of potential future missions. References: [1] Braden et al. (2014) Nat. Geo. 7, 787-791. [2] Glotch et al. (2010) Science, 329, 1510-1513. [3] Greenhagen et al. (2010) Science, 329, 1507-1509. [4] Jolliff et al. (2011) Nat. Geo. 4, 566-571. [5] Vaughan et al (2013) PSS 91, 101-106. [6] Hurwitz and Kring (2014) J. Geophys. Res. 119, 1110-1133 [7] Ohtake et al. (2009) Nature, 461, 236-241 [8] Pieters et al. (2014) Am. Min. 99, 1893-1910. [9] Colaprete et al. (2010) Science 330, 463-468.
Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system
NASA Astrophysics Data System (ADS)
Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping
2015-05-01
Irregular shape objects with different 3-dimensional (3D) appearances are difficult to shape into a customized uniform pattern by current laser machining approaches. A laser galvanometric scanning system (LGS) could be a potential candidate since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of those irregular shape objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which combines the advantages of the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.
Situational awareness for unmanned ground vehicles in semi-structured environments
NASA Astrophysics Data System (ADS)
Goodsell, Thomas G.; Snorrason, Magnus; Stevens, Mark R.
2002-07-01
Situational Awareness (SA) is a critical component of effective autonomous vehicles, reducing operator workload and allowing an operator to command multiple vehicles or simultaneously perform other tasks. Our Scene Estimation & Situational Awareness Mapping Engine (SESAME) provides SA for mobile robots in semi-structured scenes, such as parking lots and city streets. SESAME autonomously builds volumetric models for scene analysis. For example, a SESAME-equipped robot can build a low-resolution 3-D model of a row of cars, then approach a specific car and build a high-resolution model from a few stereo snapshots. The model can be used onboard to determine the type of car and locate its license plate, or the model can be segmented out and sent back to an operator who can view it from different viewpoints. As new views of the scene are obtained, the model is updated and changes are tracked (such as cars arriving or departing). Since the robot's position must be accurately known, SESAME also has automated techniques for determining the position and orientation of the camera (and hence, robot) with respect to existing maps. This paper presents an overview of the SESAME architecture and algorithms, including our model generation algorithm.
Phoenix Lander on Mars with Surrounding Terrain, Vertical Projection
NASA Technical Reports Server (NTRS)
2008-01-01
This view is a vertical projection that combines more than 500 exposures taken by the Surface Stereo Imager camera on NASA's Mars Phoenix Lander and projects them as if looking down from above. The black circle on the spacecraft is where the camera itself is mounted on the lander, out of view in images taken by the camera. North is toward the top of the image. The height of the lander's meteorology mast, extending toward the southwest, appears exaggerated because that mast is taller than the camera mast. This view in approximately true color covers an area about 30 meters by 30 meters (about 100 feet by 100 feet). The landing site is at 68.22 degrees north latitude, 234.25 degrees east longitude on Mars. The ground surface around the lander has polygonal patterning similar to patterns in permafrost areas on Earth. This view comprises more than 100 different Surface Stereo Imager pointings, with images taken through three different filters at each pointing. The images were taken throughout the period from the 13th Martian day, or sol, after landing to the 47th sol (June 5 through July 12, 2008). The lander's Robotic Arm is cut off in this mosaic view because component images were taken when the arm was out of the frame. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Martian Terrain Near Curiosity Precipice Target
2016-12-06
This view from the Navigation Camera (Navcam) on the mast of NASA's Curiosity Mars rover shows rocky ground within view while the rover was working at an intended drilling site called "Precipice" on lower Mount Sharp. The right-eye camera of the stereo Navcam took this image on Dec. 2, 2016, during the 1,537th Martian day, or sol, of Curiosity's work on Mars. On the previous sol, an attempt to collect a rock-powder sample with the rover's drill ended before drilling began. This led to several days of diagnostic work while the rover remained in place, during which it continued to use cameras and a spectrometer on its mast, plus environmental monitoring instruments. In this view, hardware visible at lower right includes the sundial-theme calibration target for Curiosity's Mast Camera. http://photojournal.jpl.nasa.gov/catalog/PIA21140
Operation and Performance of the Mars Exploration Rover Imaging System on the Martian Surface
NASA Technical Reports Server (NTRS)
Maki, Justin N.; Litwin, Todd; Herkenhoff, Ken
2005-01-01
This slide presentation details the Mars Exploration Rover (MER) imaging system. Over 144,000 images have been gathered from all Mars Missions, with 83.5% of them gathered by MER. Each rover has 9 cameras (Navcam, front and rear Hazcam, Pancam, Microscopic Imager, Descent Camera, Engineering Camera, Science Camera) and produces 1024 x 1024 (1 Megapixel) images in the same format. All onboard image processing code is implemented in flight software and includes extensive processing capabilities such as autoexposure, flat field correction, image orientation, thumbnail generation, subframing, and image compression. Ground image processing is done at the Jet Propulsion Laboratory's Multimission Image Processing Laboratory using Video Image Communication and Retrieval (VICAR), while stereo processing (left/right pairs) provides raw images, radiometric correction, solar energy maps, triangulation (Cartesian 3-space), and slope maps.
NASA Astrophysics Data System (ADS)
Luo, Xiongbiao; Jayarathne, Uditha L.; McLeod, A. Jonathan; Pautler, Stephen E.; Schlacta, Christopher M.; Peters, Terry M.
2016-03-01
This paper studies uncalibrated stereo rectification and stable disparity range determination for surgical scene three-dimensional (3-D) reconstruction. Stereoscopic endoscope calibration sometimes is not available and also increases the complexity of the operating-room environment. Stereo from uncalibrated endoscopic cameras is an alternative to reconstruct the surgical field visualized by binocular endoscopes within the body. Uncalibrated rectification is usually performed on the basis of a number of matched feature points (semi-dense correspondence) between the left and the right images of stereo pairs. After uncalibrated rectification, the corresponding feature points can be used to determine the proper disparity range, which helps to improve the reconstruction accuracy and reduce the computational time of disparity map estimation. Therefore, the matching accuracy and robustness of feature point descriptors are important to surgical field 3-D reconstruction. This work compares four feature detectors: (1) scale invariant feature transform (SIFT), (2) speeded up robust features (SURF), (3) affine scale invariant feature transform (ASIFT), and (4) gauge speeded up robust features (GSURF) with applications to uncalibrated rectification and stable disparity range determination. We performed our experiments on surgical endoscopic video images that were collected during robotic prostatectomy. The experimental results demonstrate that ASIFT outperforms the other feature detectors in uncalibrated stereo rectification and also provides a stable disparity range for surgical scene reconstruction.
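A stable disparity range of the sort described can be derived from the sparse feature matches by taking a padded spread of their horizontal disparities after rectification. A sketch (the percentile cut-offs and margin are illustrative assumptions, not the paper's method):

```python
import numpy as np

def disparity_range(left_x, right_x, margin=8):
    """Derive a dense-stereo search range from sparse matched feature points:
    take a trimmed spread of horizontal disparities, padded by a safety margin
    (in pixels) to tolerate outlier matches and unmatched regions."""
    d = np.asarray(left_x, float) - np.asarray(right_x, float)
    lo, hi = np.percentile(d, [2, 98])
    return int(np.floor(lo - margin)), int(np.ceil(hi + margin))

# Invented x-coordinates of matched features in the left and right images.
lo, hi = disparity_range([120, 130, 145, 160], [100, 105, 118, 128])
```

Restricting the dense matcher to this interval is what yields the accuracy and runtime gains mentioned above.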
High-resolution Ceres LAMO atlas derived from Dawn FC images
NASA Astrophysics Data System (ADS)
Roatsch, T.; Kersten, E.; Matz, K. D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C.
2016-12-01
Introduction: NASA's Dawn spacecraft has been orbiting the dwarf planet Ceres since December 2015 in LAMO (Low Altitude Mapping Orbit), at an altitude of about 400 km, to characterize, for instance, the geology, topography, and shape of Ceres. One of the major goals of this mission phase is the global high-resolution mapping of Ceres. Data: The Dawn mission is equipped with a framing camera (FC). By the time of writing, the framing camera had taken about 27,500 clear-filter images in LAMO, with a resolution of about 30 m/pixel and different viewing angles and illumination conditions. Data Processing: The first step of the processing chain towards the cartographic products is to ortho-rectify the images to the proper scale and map projection type. This process requires detailed information about the Dawn orbit and attitude data and the topography of the target. A high-resolution shape model was provided by stereo processing of the HAMO (High Altitude Mapping Orbit) dataset; orbit and attitude data are available as reconstructed SPICE data. Ceres' HAMO shape model is used for the calculation of the ray intersection points, while the map projection itself was done onto a reference sphere of Ceres. The final step is the controlled mosaicking of all nadir images into a global mosaic of Ceres, the so-called basemap. Ceres map tiles: The Ceres atlas will be produced at a scale of 1:250,000 and will consist of 62 tiles that conform to the quadrangle scheme for Venus at 1:5,000,000. A map scale of 1:250,000 is a compromise between the very high resolution in LAMO and a proper map sheet size for the single tiles. Nomenclature: The Dawn team proposed to the International Astronomical Union (IAU) to use the names of gods and goddesses of agriculture and vegetation from world mythology as names for the craters, and to use names of agricultural festivals of the world for other geological features.
This proposal was accepted by the IAU, and the team proposed 92 names for geological features to the IAU based on the LAMO mosaic. These feature names will be applied to the map tiles.
NASA Astrophysics Data System (ADS)
Harrison, R. A.; Davies, J. A.; Barnes, D.; Byrne, J. P.; Perry, C. H.; Bothmer, V.; Eastwood, J. P.; Gallagher, P. T.; Kilpua, E. K. J.; Möstl, C.; Rodriguez, L.; Rouillard, A. P.; Odstrčil, D.
2018-05-01
We present a statistical analysis of coronal mass ejections (CMEs) imaged by the Heliospheric Imager (HI) instruments on board NASA's twin-spacecraft STEREO mission between April 2007 and August 2017 for STEREO-A and between April 2007 and September 2014 for STEREO-B. The analysis exploits a catalogue that was generated within the FP7 HELCATS project. Here, we focus on the observational characteristics of CMEs imaged in the heliosphere by the inner (HI-1) cameras, while following papers will present analyses of CME propagation through the entire HI fields of view. More specifically, in this paper we present distributions of the basic observational parameters - namely occurrence frequency, central position angle (PA) and PA span - derived from nearly 2000 detections of CMEs in the heliosphere by HI-1 on STEREO-A or STEREO-B from the minimum between Solar Cycles 23 and 24 to the maximum of Cycle 24; STEREO-A analysis includes a further 158 CME detections from the descending phase of Cycle 24, by which time communication with STEREO-B had been lost. We compare heliospheric CME characteristics with properties of CMEs observed at coronal altitudes, and with sunspot number. As expected, heliospheric CME rates correlate with sunspot number, and are not inconsistent with coronal rates once instrumental factors/differences in cataloguing philosophy are considered. As well as being more abundant, heliospheric CMEs, like their coronal counterparts, tend to be wider during solar maximum. Our results confirm previous coronagraph analyses suggesting that CME launch sites do not simply migrate to higher latitudes with increasing solar activity. At solar minimum, CMEs tend to be launched from equatorial latitudes, while at maximum, CMEs appear to be launched over a much wider latitude range; this has implications for understanding the CME/solar source association. 
Our analysis provides some supporting evidence for the systematic dragging of CMEs to lower latitude as they propagate outwards.
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Desjardins, M.; Shenk, W. E.
1979-01-01
Simultaneous Geosynchronous Operational Environmental Satellite (GOES) 1 km resolution visible image pairs can provide quantitative three dimensional measurements of clouds. These data have great potential for severe storms research and as a basic parameter measurement source for other areas of meteorology (e.g. climate). These stereo cloud height measurements are not subject to the errors and ambiguities caused by unknown cloud emissivity and temperature profiles that are associated with infrared techniques. This effort describes the display and measurement of stereo data using digital processing techniques.
NASA Astrophysics Data System (ADS)
Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.
2006-03-01
Early detection of structural damage to the optic nerve head (ONH) is critical in diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability. Even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours by computing accumulated disparities in the disc and cup regions from stereo fundus image pairs has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming camera geometry. High correlation among computer-generated and manually segmented cup to disc ratios in a longitudinal study involving 159 stereo fundus image pairs has already been demonstrated. However, clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective and quantitative method for detection of ONH structural damage for early detection of glaucoma.
Bell, James F.; Godber, A.; McNair, S.; Caplinger, M.A.; Maki, J.N.; Lemmon, M.T.; Van Beek, J.; Malin, M.C.; Wellington, D.; Kinch, K.M.; Madsen, M.B.; Hardgrove, C.; Ravine, M.A.; Jensen, E.; Harker, D.; Anderson, Ryan; Herkenhoff, Kenneth E.; Morris, R.V.; Cisneros, E.; Deen, R.G.
2017-01-01
The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted ~2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) “true color” images, multispectral images in nine additional bands spanning ~400–1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11 bit to 8 bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
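The quoted IFOV and FOV values follow directly from the detector pixel pitch and the lens focal length. A minimal sketch, assuming the KAI-2020's 7.4 µm pixel pitch (a detector-datasheet figure, not stated in the abstract):

```python
import math

def ifov_mrad(pixel_pitch_m, focal_length_m):
    """Instantaneous field of view of a single pixel, in milliradians."""
    return pixel_pitch_m / focal_length_m * 1e3

def fov_deg(n_pixels, pixel_pitch_m, focal_length_m):
    """Full field of view subtended by n_pixels, in degrees."""
    return math.degrees(2 * math.atan(n_pixels * pixel_pitch_m / (2 * focal_length_m)))

PITCH = 7.4e-6  # assumed KAI-2020 pixel pitch, metres
print(round(ifov_mrad(PITCH, 0.034), 2))      # M-34: 0.22 mrad
print(round(ifov_mrad(PITCH, 0.100), 3))      # M-100: 0.074 mrad
print(round(fov_deg(1648, PITCH, 0.034), 1))  # M-34 horizontal FOV: ~20.3 deg
```

With this single assumed pitch, both quoted IFOVs and the ~20° horizontal FOV are reproduced.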
Wide baseline stereo matching based on double topological relationship consistency
NASA Astrophysics Data System (ADS)
Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang
2009-07-01
Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo vision matching, based on a novel scheme called double topological relationship consistency (DCTR). The combined topological configuration includes the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only establishes a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, and it overcomes many problems of traditional methods that depend on powerful invariance to changes in scale, rotation, or illumination across large view changes and even occlusions. Experimental examples are shown in which the two cameras are located in very different orientations. The epipolar geometry is recovered using RANSAC, by far the most widely adopted method for this purpose. With this method, correspondences can be obtained with high precision in wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on the image pairs.
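The abstract notes that RANSAC is the standard way to recover epipolar geometry from putative matches. A generic RANSAC skeleton is sketched below, with a 2D line model standing in for the fundamental-matrix model; the `fit_line`/`line_error` pair is purely illustrative:

```python
import random

def ransac(data, fit, error, sample_size, threshold, iterations=500, seed=0):
    """Generic RANSAC: fit a model to random minimal samples and keep the
    model with the largest inlier set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        model = fit(rng.sample(data, sample_size))
        if model is None:
            continue
        inliers = [d for d in data if error(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    # A final refit on all inliers would go here; we return the minimal-sample model.
    return best_model, best_inliers

# Demo model: a normalized 2D line a*x + b*y + c = 0 through two points.
def fit_line(pts):
    (x1, y1), (x2, y2) = pts
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    n = (a * a + b * b) ** 0.5
    return (a / n, b / n, c / n) if n else None

def line_error(model, pt):
    a, b, c = model
    return abs(a * pt[0] + b * pt[1] + c)  # point-to-line distance

# 20 collinear "correct matches" plus 3 gross outliers.
pts = [(x, 2 * x + 1) for x in range(20)] + [(3, 40), (7, -5), (12, 90)]
model, inliers = ransac(pts, fit_line, line_error, 2, 0.5)
print(len(inliers))  # → 20: the outliers are discarded
```

Fundamental-matrix estimation plugs into the same loop with a 7- or 8-point solver as `fit` and the epipolar (e.g. Sampson) distance as `error`.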
Robust stereo matching with trinary cross color census and triple image-based refinements
NASA Astrophysics Data System (ADS)
Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr
2017-12-01
For future 3D TV broadcasting systems and navigation applications, accurate stereo matching is necessary to precisely estimate the depth map from two displaced cameras. In this paper, we first suggest a trinary cross color (TCC) census transform, which helps achieve an accurate disparity raw matching cost at low computational cost. A two-pass cost aggregation (TPCA) is formed to compute the aggregation cost, and the disparity map is then obtained by a range winner-take-all (RWTA) process and a white-hole filling procedure. To further enhance accuracy, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Image-based refinements for the mismatched and occluded pixels are then proposed to correct the classified errors. Finally, image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps with reasonable computation cost.
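The TCC census transform is a variant of the classic census transform. A plain binary census with Hamming-distance matching cost and winner-take-all disparity selection is sketched here as a simplified stand-in (not the TCC transform itself):

```python
def census(img, x, y, w=1):
    """Binary census signature of pixel (x, y) over a (2w+1)^2 window:
    one bit per neighbour, set when the neighbour is darker than the centre."""
    c = img[y][x]
    bits = 0
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            if dx == 0 and dy == 0:
                continue
            bits = (bits << 1) | (img[y + dy][x + dx] < c)
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

def disparity(left, right, x, y, max_d):
    """Winner-take-all: the disparity whose right-image census signature
    has the smallest Hamming cost against the left-image signature."""
    sig_l = census(left, x, y)
    costs = [hamming(sig_l, census(right, x - d, y)) for d in range(max_d + 1)]
    return costs.index(min(costs))

# Tiny synthetic pair: the right image is the left shifted 2 px leftwards,
# so the true disparity at interior pixels is 2.
left  = [[(x * 7 + y * 13) % 32 for x in range(12)] for y in range(6)]
right = [[left[y][min(x + 2, 11)] for x in range(12)] for y in range(6)]
print(disparity(left, right, 6, 2, 4))  # → 2
```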
Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M.
2010-01-01
This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance. PMID:22319323
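The stereo quantization error analyzed above follows the standard pinhole-stereo relations Z = fB/d and dZ ≈ Z²·Δd/(fB). A sketch with illustrative parameters (not the sensor's actual calibration):

```python
def depth(f_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f*B/d, with focal length f in pixels."""
    return f_px * baseline_m / disparity_px

def quantization_error(f_px, baseline_m, z_m, delta_d_px=1.0):
    """First-order depth uncertainty for a disparity error of delta_d pixels:
    dZ ~= Z^2 * delta_d / (f * B)."""
    return z_m ** 2 * delta_d_px / (f_px * baseline_m)

# Illustrative setup: 800 px focal length, 30 cm baseline.
f, B = 800.0, 0.30
for z in (5.0, 10.0, 20.0):
    print(z, round(quantization_error(f, B, z), 3))  # error grows with Z^2
```

The quadratic growth with range is exactly why pedestrian-detection requirements constrain the choice of baseline and focal length.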
Effects of blurring and vertical misalignment on visual fatigue of stereoscopic displays
NASA Astrophysics Data System (ADS)
Baek, Sangwook; Lee, Chulhee
2015-03-01
In this paper, we investigate two error issues in stereo images, which may produce visual fatigue. When two cameras are used to produce 3D video sequences, vertical misalignment can be a problem. Although this problem may not occur in professionally produced 3D programs, it is still a major issue in many low-cost 3D programs. Recently, efforts have been made to produce 3D video programs using smart phones or tablets, which may present the vertical alignment problem. Also, in 2D-3D conversion techniques, the simulated frame may have blur effects, which can also introduce visual fatigue in 3D programs. In this paper, to investigate the relationship between these two errors (vertical misalignment and blurring in one image), we performed a subjective test using simulated 3D video sequences that include stereo video sequences with various vertical misalignments and blurring in a stereo image. We present some analyses along with objective models to predict the degree of visual fatigue from vertical misalignment and blurring.
Ubiquitous Stereo Vision for Controlling Safety on Platforms in Railroad Station
NASA Astrophysics Data System (ADS)
Yoda, Ikushi; Hosotani, Daisuke; Sakaue, Katsuhiko
Dozens of people are killed every year when they fall off train platforms, making this an urgent issue to be addressed by the railroads, especially in major cities. This concern prompted the present work, now in progress, to develop a Ubiquitous Stereo Vision based system for safety management at the edge of rail station platforms. In this approach, a series of stereo cameras is installed in a row on the ceiling, pointed downward at the edge of the platform to monitor the disposition of people waiting for the train. The purpose of the system is to determine automatically and in real time whether anyone or anything is in the danger zone at the very edge of the platform, whether anyone has actually fallen off the platform, or whether there is any sign of these things happening. The system could be configured to automatically switch over to a surveillance monitor or automatically connect to an emergency brake system in the event of trouble.
NASA Astrophysics Data System (ADS)
Hosotani, Daisuke; Yoda, Ikushi; Hishiyama, Yoshiyuki; Sakaue, Katsuhiko
Many people are involved in accidents every year at railroad crossings, but there is no suitable sensor for detecting pedestrians. We are therefore developing a ubiquitous stereo vision based system for ensuring safety at railroad crossings. In this system, stereo cameras are installed at the corners and are pointed toward the center of the railroad crossing to monitor the passage of people. The system determines automatically and in real time whether anyone or anything is inside the railroad crossing, and whether anyone remains in the crossing. The system can be configured to automatically switch over to a surveillance monitor or automatically connect to an emergency brake system in the event of trouble. We have developed an original stereo vision device and installed a remote-controlled experimental system, running the human detection algorithm, at a commercial railroad crossing. We then stored and analyzed image and tracking data over two years to help standardize the system requirement specification.
Structured light optical microscopy for three-dimensional reconstruction of technical surfaces
NASA Astrophysics Data System (ADS)
Kettel, Johannes; Reinecke, Holger; Müller, Claas
2016-04-01
In microsystems technology, quality control of micro structured surfaces with different surface properties is playing an ever more important role. The process of quality control incorporates three-dimensional (3D) reconstruction of specular- and diffusive-reflecting technical surfaces. Due to the demand for high measurement accuracy and data acquisition rates, structured light optical microscopy has become a valuable solution to this problem, providing high vertical and lateral resolution. However, 3D reconstruction of specular-reflecting technical surfaces still remains a challenge for optical measurement principles. In this paper we present a measurement principle based on structured light optical microscopy which enables 3D reconstruction of specular- and diffusive-reflecting technical surfaces. It is realized using the two light paths of a stereo microscope equipped with different magnification levels. The right optical path of the stereo microscope is used to project structured light onto the object surface. The left optical path is used to capture the structured illuminated object surface with a camera. Structured light patterns are generated by a Digital Light Processing (DLP) device in combination with a high power Light Emitting Diode (LED). The patterns are realized as a matrix of discrete light spots that illuminate defined areas on the object surface. The introduced measurement principle is based on multiple, parallel-processed point measurements. Analysis of the measured Point Spread Function (PSF) by pattern recognition and model fitting algorithms enables the precise calculation of 3D coordinates. Using exemplary technical surfaces, we demonstrate the successful application of our measurement principle.
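The spot-localization step behind such point measurements can be illustrated with the simplest model-fitting approach, an intensity-weighted centroid that recovers a sub-pixel spot position (a sketch only, not the authors' PSF-fitting algorithm):

```python
def spot_centroid(window):
    """Intensity-weighted centroid of a spot image (list of rows);
    returns the sub-pixel (x, y) position in window coordinates."""
    total = sx = sy = 0.0
    for y, row in enumerate(window):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    if total == 0:
        raise ValueError("empty window")
    return sx / total, sy / total

# Symmetric synthetic spot centred on pixel (2, 1).
spot = [
    [0, 1, 2, 1, 0],
    [1, 4, 9, 4, 1],
    [0, 1, 2, 1, 0],
]
x, y = spot_centroid(spot)
print(x, y)  # → 2.0 1.0
```

Fitting a Gaussian PSF model instead of taking the raw centroid improves robustness to background and noise, at higher computational cost.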
2017-07-14
On July 14, 2015, NASA's New Horizons spacecraft made its historic flight through the Pluto system. This detailed, high-quality global mosaic of Pluto was assembled from nearly all of the highest-resolution images obtained by the Long-Range Reconnaissance Imager (LORRI) and the Multispectral Visible Imaging Camera (MVIC) on New Horizons. The mosaic is the most detailed and comprehensive global view yet of Pluto's surface using New Horizons data. It includes topography data of the hemisphere visible to New Horizons during the spacecraft's closest approach. The topography is derived from digital stereo-image mapping tools that measure the parallax -- or the difference in the apparent relative positions -- of features on the surface obtained at different viewing angles during the encounter. Scientists use these parallax displacements of high and low terrain to estimate landform heights. The global mosaic has been overlain with transparent, colorized topography data wherever on the surface stereo data is available. Terrain south of about 30°S was in darkness leading up to and during the flyby, so is shown in black. Examples of large-scale topographic features on Pluto include the vast expanse of very flat, low-elevation nitrogen ice plains of Sputnik Planitia ("P") -- note that all feature names in the Pluto system are informal -- and, on the eastern edge of the encounter hemisphere, the aligned, high-elevation ridges of Tartarus Dorsa ("T") that host the enigmatic bladed terrain, mountains, possible cryovolcanos, canyons, craters and more. https://photojournal.jpl.nasa.gov/catalog/PIA21861
WASS: an open-source stereo processing pipeline for sea waves 3D reconstruction
NASA Astrophysics Data System (ADS)
Bergamasco, Filippo; Benetazzo, Alvise; Torsello, Andrea; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro
2017-04-01
Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community. In fact, recent advances in both computer vision algorithms and CPU processing power now allow the study of spatio-temporal wave fields with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master in practice, so that the implementation of a 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the steps from stereo images to a 3D point cloud is a valuable addition for future research in this area. We present WASS, a completely open-source stereo processing pipeline for sea waves 3D reconstruction, available at http://www.dais.unive.it/wass/. Our tool completely automates the recovery of dense point clouds from stereo images by providing three main functionalities. First, WASS can automatically recover the extrinsic parameters of the stereo rig (up to scale) so that no delicate calibration has to be performed in the field. Second, WASS implements a fast 3D dense stereo reconstruction procedure so that an accurate 3D point cloud can be computed from each stereo pair. We rely on the well-consolidated OpenCV library both for the image stereo rectification and disparity map recovery. Lastly, a set of 2D and 3D filtering techniques, on both the disparity map and the produced point cloud, are implemented to remove the vast majority of erroneous points that naturally arise while analyzing the optically complex nature of the water surface (examples are sun glare, large white-capped areas, fog and water aerosol, etc.).
Developed to be as fast as possible, WASS can process roughly four 5 MPixel stereo frames per minute (on a consumer i7 CPU) to produce a sequence of outlier-free point clouds with more than 3 million points each. Finally, it comes with an easy-to-use interface and is designed to scale across multiple parallel CPUs.
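One of the steps WASS automates, mean sea-plane estimation, can be sketched as an ordinary least-squares plane fit to the reconstructed point cloud (a minimal stand-in for the pipeline's actual procedure):

```python
def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3D points, solved via
    the 3x3 normal equations with Gaussian elimination (partial pivoting)."""
    # Augmented normal matrix [A^T A | A^T z] for rows [x, y, 1].
    m = [[0.0] * 4 for _ in range(3)]
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                m[i][j] += row[i] * row[j]
            m[i][3] += row[i] * z
    # Forward elimination.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for j in range(col, 4):
                m[r][j] -= f * m[col][j]
    # Back substitution.
    coeffs = [0.0] * 3
    for i in (2, 1, 0):
        coeffs[i] = (m[i][3] - sum(m[i][j] * coeffs[j] for j in range(i + 1, 3))) / m[i][i]
    return tuple(coeffs)  # (a, b, c)

# Noise-free check: points sampled from z = 0.1x - 0.2y + 3.
pts = [(x, y, 0.1 * x - 0.2 * y + 3.0) for x in range(5) for y in range(5)]
a, b, c = fit_plane(pts)
print(round(a, 6), round(b, 6), round(c, 6))  # → 0.1 -0.2 3.0
```

On real sea-surface clouds the fit would be applied after outlier filtering, since glare and whitecap artifacts would otherwise bias the plane.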
Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai
2016-04-01
We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope (SLM). This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, the method of image distortion correction is proposed. The image data required for image distortion correction come from stereo images of a calibration sample. The geometric features of the image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images, and linear and polynomial fitting methods are applied to correct them. Second, the shape deformation features of the disparity distribution are discussed and the method of disparity distortion correction is proposed, with a polynomial fitting method applied to correct the disparity distortion. Third, a microscopic vision model is derived, consisting of two parts: the initial vision model and the residual compensation model. We derive the initial vision model from the analysis of the direct mapping relationship between object and image points; the residual compensation model is derived from the residual analysis of the initial vision model. The results show that, with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison with the traditional pinhole camera model shows that the two models have a similar reconstruction precision for X coordinates; however, the traditional pinhole camera model has a lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision.
Location and Geologic Setting for the Three U.S. Mars Landers
NASA Technical Reports Server (NTRS)
Parker, T. J.; Kirk, R. L.
1999-01-01
Super resolution processing of the horizon at both Viking landing sites has revealed "new" features we use for triangulation, similar to the approach used during the Mars Pathfinder Mission. We propose alternative landing site locations for both landers for which we believe the confidence is very high. Super resolution of VL-1 images also reveals some of the drift material at the site to consist of gravel-size deposits. Since our proposed location for VL-2 is NOT on the Mie ejecta blanket, the blocky surface around the lander may represent the meter-scale texture of "smooth plains" in the region. The Viking Lander panchromatic images typically offer more repeat coverage than does the IMP on Mars Pathfinder, due to the longer duration of those landed missions. Sub-pixel offsets, necessary for super resolution to work, appear to be attributable to thermal effects on the lander and settling of the lander over time. Due to the greater repeat coverage (particularly in the near and mid-fields) and all-panchromatic images, the gain in resolution from super resolution processing is better for Viking than it is for most IMP image sequences. This enhances the study of textural details near the lander and enables the identification of rock and surface textures at greater distances from the lander. Discernment of stereo in super resolution images is possible to great distances from the lander, but is limited by the non-rotating baseline between the two cameras and the lower height of the cameras above the ground compared to IMP. With super resolution, details of horizon features, such as blockiness and crater rim shapes, may be better correlated with Orbiter images. A number of horizon features - craters and ridges - were identified at VL-1 during the mission, and a few hills and subtle ridges were identified at VL-2. We have added a few "new" horizon features for triangulation at the VL-2 landing site in Utopia Planitia.
These features were used for independent triangulation with features visible in Viking Orbiter and MGS MOC images, though the actual location of VL-1 lies in a data dropout in the MOC image of the area. Additional information is contained in the original extended abstract.
The research of binocular vision ranging system based on LabVIEW
NASA Astrophysics Data System (ADS)
Li, Shikuan; Yang, Xu
2017-10-01
Based on the principle of binocular parallax ranging, a binocular vision ranging system is designed and built. The stereo matching algorithm is realized in LabVIEW software, and camera calibration and distance measurement are completed. The error analysis shows that the system is fast and effective and can be used in corresponding industrial applications.
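The binocular parallax principle underlying such a system reduces to Z = fB/d for focal length f (in pixels), baseline B, and disparity d. A sketch with illustrative numbers, not the system's actual calibration:

```python
def range_from_parallax(f_px, baseline_m, xl_px, xr_px):
    """Range and lateral offset from binocular parallax.

    xl_px, xr_px: image x-coordinates of the same point in the left and
    right rectified views. Z = f*B/(xl - xr), X = B*xl/(xl - xr).
    """
    d = xl_px - xr_px
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = f_px * baseline_m / d
    x = baseline_m * xl_px / d  # lateral offset from the left camera axis
    return x, z

# Point at X = 0.5 m, Z = 3 m seen by f = 1200 px cameras, B = 0.25 m:
# xl = f*X/Z = 200 px, xr = f*(X-B)/Z = 100 px.
x, z = range_from_parallax(1200.0, 0.25, 200.0, 100.0)
print(x, z)  # → 0.5 3.0
```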
Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm
NASA Astrophysics Data System (ADS)
Lahamy, H.; Lichti, D.
2011-09-01
Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its capability for hand motion tracking. The evaluation was assessed in terms of the position accuracy of the tracking trajectory in the x, y and z directions in camera space and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study has demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
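The simple version of the mean-shift algorithm evaluated here iterates a window toward the local density mode. A minimal flat-kernel 2D sketch (synthetic points, not the range-camera data):

```python
def mean_shift(points, start, radius, max_iter=50, tol=1e-6):
    """Flat-kernel mean shift: repeatedly move the window centre to the
    mean of the points within `radius`, until it converges on a mode."""
    cx, cy = start
    for _ in range(max_iter):
        near = [(x, y) for x, y in points
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2]
        if not near:
            break  # window drifted into empty space
        nx = sum(p[0] for p in near) / len(near)
        ny = sum(p[1] for p in near) / len(near)
        if (nx - cx) ** 2 + (ny - cy) ** 2 < tol ** 2:
            return nx, ny
        cx, cy = nx, ny
    return cx, cy

# Dense cluster around (5, 5) plus stragglers: the window drifts onto the
# cluster centre, which is what makes the method usable for tracking a hand blob.
cluster = [(5 + dx * 0.1, 5 + dy * 0.1) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
points = cluster + [(0, 0), (9, 1)]
cx, cy = mean_shift(points, start=(4, 4), radius=2.0)
print(round(cx, 3), round(cy, 3))  # → 5.0 5.0
```

In tracking, the previous frame's converged centre seeds the search in the next frame, which is why fast transverse motion (the hand leaving the window between frames) degrades accuracy.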
NASA Astrophysics Data System (ADS)
Shean, D. E.; Arendt, A. A.; Whorton, E.; Riedel, J. L.; O'Neel, S.; Fountain, A. G.; Joughin, I. R.
2016-12-01
We adapted the open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline an automated processing workflow for 0.5 m GSD DigitalGlobe WorldView-1/2/3 and GeoEye-1 along-track and cross-track stereo image data. Output DEM products are posted at 2, 8, and 32 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of 0.1-0.5 m for overlapping, co-registered DEMs (n=14,17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We have leveraged these resources to produce dense time series and regional mosaics for the Earth's ice sheets. We are now processing and analyzing all available 2008-2016 commercial stereo DEMs over glaciers and perennial snowfields in the contiguous US. We are using these records to study long-term, interannual, and seasonal volume change and glacier mass balance. This analysis will provide a new assessment of regional climate change, and will offer basin-scale analyses of snowpack evolution and snow/ice melt runoff for water resource applications.
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan
2017-06-01
The Chang'e-3 was the first lunar soft-landing probe of China, composed of the lander and the lunar rover. The Chang'e-3 successfully landed in the northwest of the Mare Imbrium on December 14, 2013. The lunar rover completed movement, imaging and geological survey tasks after landing. The lunar rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism and the inertial measurement unit (IMU). The Navcam system is composed of two fixed-focal-length cameras, and the mast mechanism is a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEM) of the surrounding region and plan the moving paths of the lunar rover. The stereo vision system must be calibrated before use. A control field could be built to calibrate the stereo vision system in the laboratory on Earth. However, the parameters of the stereo vision system would change after the launch, the orbital changes, the braking and the landing. Therefore, the stereo vision system should be self-calibrated on the moon. An integrated self-calibration method based on the bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. The stereo vision system can be self-calibrated with the proposed method under the unknown lunar environment, and all parameters can be estimated simultaneously. The experiment was conducted in a ground lunar simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method and the weighted least-squares method. The analysis showed that the accuracy of the proposed method was superior to those of the other methods.
Finally, the proposed method was used in practice to self-calibrate the stereo vision system of the Chang'e-3 lunar rover on the moon.
The compatibility of consumer DLP projectors with time-sequential stereoscopic 3D visualisation
NASA Astrophysics Data System (ADS)
Woods, Andrew J.; Rourke, Tegan
2007-02-01
A range of advertised "Stereo-Ready" DLP projectors are now available on the market which allow high-quality flicker-free stereoscopic 3D visualization using the time-sequential stereoscopic display method. The ability to use a single projector for stereoscopic viewing offers a range of advantages, including extremely good stereoscopic alignment and, in some cases, portability. It has also recently become known that some consumer DLP projectors can be used for time-sequential stereoscopic visualization; however, it was not well understood which projectors are compatible or incompatible, what display modes (frequency and resolution) are compatible, and what stereoscopic display quality attributes are important. We conducted a study to test a wide range of projectors for stereoscopic compatibility. This paper reports on the testing of 45 consumer DLP projectors of widely different specifications (brand, resolution, brightness, etc.). The projectors were tested for stereoscopic compatibility with various video formats (PAL, NTSC, 480P, 576P, and various VGA resolutions) and video input connections (composite, S-Video, component, and VGA). Fifteen projectors were found to work well at up to 85 Hz stereo in VGA mode. Twenty-three projectors would work at 60 Hz stereo in VGA mode.
Mapping snow depth from stereo satellite imagery
NASA Astrophysics Data System (ADS)
Gascoin, S.; Marti, R.; Berthier, E.; Houet, T.; de Pinel, M.; Laffly, D.
2016-12-01
To date, there is no definitive approach to map snow depth in mountainous areas from spaceborne sensors. Here, we examine the potential of very-high-resolution (VHR) optical stereo satellites for this purpose. Two triplets of 0.70 m resolution images were acquired by the Pléiades satellite over an open alpine catchment (14.5 km²) under snow-free and snow-covered conditions. The open-source software Ames Stereo Pipeline (ASP) was used to match the stereo pairs without ground control points to generate raw photogrammetric clouds and to convert them into high-resolution digital elevation models (DEMs) at 1, 2, and 4 m resolutions. The DEM differences (dDEMs) were computed after 3-D coregistration, including a correction of a -0.48 m vertical bias. The bias-corrected dDEM maps were compared to 451 snow-probe measurements. The results show a decimetric accuracy and precision in the Pléiades-derived snow depths. The median of the residuals is -0.16 m, with a standard deviation (SD) of 0.58 m at a pixel size of 2 m. We compared the 2 m Pléiades dDEM to a 2 m dDEM that was based on a winged unmanned aerial vehicle (UAV) photogrammetric survey performed on the same winter date over a portion of the catchment (3.1 km²). The UAV-derived snow depth map exhibits the same patterns as the Pléiades-derived snow map, with a median of -0.11 m and a SD of 0.62 m when compared to the snow-probe measurements. The Pléiades images benefit from a very broad radiometric range (12 bits), allowing a high correlation success rate over the snow-covered areas. This study demonstrates the value of VHR stereo satellite imagery for mapping snow depth in remote mountainous areas even when no field data are available. Based on this method we have initiated a multi-year survey of the peak snow depth in the Bassiès catchment.
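The dDEM snow-depth computation reduces to differencing co-registered DEMs, removing the vertical bias, and comparing against probe measurements. A toy sketch (invented elevations; only the -0.48 m bias figure comes from the study):

```python
import statistics

def snow_depth_map(winter_dem, summer_dem, bias=0.0):
    """Per-pixel snow depth: (winter - summer) elevation difference,
    minus a vertical co-registration bias estimated elsewhere."""
    return [[w - s - bias for w, s in zip(wr, sr)]
            for wr, sr in zip(winter_dem, summer_dem)]

def residual_stats(estimated, probes):
    """Median and standard deviation of (estimate - probe) residuals."""
    res = [e - p for e, p in zip(estimated, probes)]
    return statistics.median(res), statistics.stdev(res)

# Toy 2x2 DEMs with the study's -0.48 m vertical bias.
summer = [[1200.0, 1201.0], [1202.0, 1203.0]]
winter = [[1200.9, 1202.1], [1203.0, 1203.4]]
depths = snow_depth_map(winter, summer, bias=-0.48)
print([[round(d, 2) for d in row] for row in depths])  # ≈ [[1.38, 1.58], [1.48, 0.88]]

# Validation against (invented) snow-probe values at the same pixels.
est = [depths[0][0], depths[0][1], depths[1][0], depths[1][1]]
med, sd = residual_stats(est, [1.5, 1.5, 1.5, 1.0])
print(round(med, 2), round(sd, 3))
```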
Hippo in Super Resolution from Super Panorama
1998-07-03
This view of the "Hippo," 25 meters to the west of the lander, was produced by combining the "Super Panorama" frames from the IMP camera. Super resolution was applied to help to address questions about the texture of this rock and what it might tell us about its mode of origin. The composite color frames that make up this anaglyph were produced for both the right and left eye of the IMP. These composites consist of more than 15 frames per eye (because multiple sequences covered the same area), taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. These panchromatic frames were then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars. The anaglyph view was produced by combining the left with the right eye color composite frames by assigning the left eye composite view to the red color plane and the right eye composite view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses. http://photojournal.jpl.nasa.gov/catalog/PIA01421
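The co-adding procedure described above, enlarging each frame and then averaging, can be sketched as follows (nearest-neighbour enlargement standing in for the Photoshop workflow; true super resolution also relies on the sub-pixel offsets between the stacked frames):

```python
def enlarge(frame, k):
    """Nearest-neighbour upsampling of a 2D frame by an integer factor k."""
    return [[row[x // k] for x in range(len(row) * k)]
            for row in frame for _ in range(k)]

def coadd(frames, k):
    """Enlarge each frame, then average pixel-wise; with sub-pixel shifts
    between frames the stacked result is sharper than any single frame."""
    big = [enlarge(f, k) for f in frames]
    h, w = len(big[0]), len(big[0][0])
    return [[sum(f[y][x] for f in big) / len(big) for x in range(w)]
            for y in range(h)]

# Two tiny 2x2 frames, enlarged 2x and co-added into a 4x4 frame.
frames = [[[0, 10], [20, 30]], [[2, 12], [22, 32]]]
out = coadd(frames, 2)
print(out[0])  # → [1.0, 1.0, 11.0, 11.0]
```

The IMP workflow used a 5x enlargement (500%) and more than 15 frames per eye, but the principle is the same.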
Dark, Recurring Streaks on Walls of Garni Crater
2015-09-28
Dark narrow streaks, called "recurring slope lineae," emanate from the walls of Garni Crater on Mars in this view constructed from observations by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The dark streaks here are up to a few hundred yards (meters) long. They are hypothesized to be formed by the flow of briny liquid water on Mars. The image was produced by first creating a 3-D computer model (a digital terrain map) of the area based on stereo information from two HiRISE observations, and then draping an image over the land-shape model. The vertical dimension is exaggerated by a factor of 1.5 compared to the horizontal dimensions. The draped image is a red waveband (monochrome) product from HiRISE observation ESP_031059_1685, taken on March 12, 2013 at 11.5 degrees south latitude, 290.3 degrees east longitude. Other image products from this observation are at http://hirise.lpl.arizona.edu/ESP_031059_1685. http://photojournal.jpl.nasa.gov/catalog/PIA19917
Lunar and Planetary Science XXXVI, Part 10
NASA Technical Reports Server (NTRS)
2005-01-01
The Problem of Incomplete Mixing of Interstellar Components in the Solar Nebula: Very High Precision Isotopic Measurements with Isoprobes P and T. Finally: Presolar Graphite Grains Identified in Orgueil. Basaltic Ring Structures as an Analog for Ring Features in Athabasca Valles, Mars. Experimental Studies of the Water Sorption Properties of Mars-Relevant Porous Minerals and Sulfates. Silicon Isotope Ratio Variations in CAI Evaporation Residues Measured by Laser Ablation Multicollector ICPMS. Crater Count Chronology and Timing of Ridged Plains Emplacement at Schiaparelli Basin, Mars. Martian Valley Networks and Associated Fluvial Features as Seen by the Mars Express High Resolution Stereo Camera (HRSC). Fast-Turnoff Transient Electromagnetic (TEM) Field Study at the Mars Analog Site of Rio Tinto, Spain. Time Domain Electromagnetics for Mapping Mineralized and Deep Groundwater in Mars Analog Environments. Mineralogical and Seismological Models of the Lunar Mantle. Photometric Observations of Soils and Rocks at the Mars Exploration Rover Landing Sites. Thermal Infrared Spectral Deconvolution of Experimentally Shocked Basaltic Rocks Using Experimentally Shocked Plagioclase Endmembers.
2018-01-01
Advanced driver assistance systems (ADAS) have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail to identify critical obstacles. Accordingly, we set up two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor for advanced motorcycle safety applications. PMID:29351267
Photogrammetry of a Hypersonic Inflatable Aerodynamic Decelerator
NASA Technical Reports Server (NTRS)
Kushner, Laura Kathryn; Littell, Justin D.; Cassell, Alan M.
2013-01-01
In 2012, two large-scale models of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD) were tested in the National Full-Scale Aerodynamics Complex (NFAC) at NASA Ames Research Center. One of the objectives of this test was to measure model deflections under aerodynamic loading that approximated expected flight conditions. The measurements were acquired using stereo photogrammetry. Four pairs of stereo cameras were mounted inside the NFAC test section, each imaging a particular section of the HIAD. The views were then stitched together post-test to create a surface deformation profile. The data from the photogrammetry system will largely be used for comparison to, and refinement of, fluid-structure interaction models. This paper describes how a commercial photogrammetry system was adapted to make the measurements and presents some preliminary results.
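The core geometric step behind such stereo photogrammetric measurements is triangulating a surface point from a matched pair of image coordinates. A minimal sketch for an idealized rectified camera pair (pinhole model, parallel optical axes); the focal length and baseline values are illustrative, not those of the NFAC setup:

```python
# Recover a 3-D point from matched pixel coordinates in a rectified
# stereo pair. Depth follows from similar triangles: Z = f * B / d,
# where d is the horizontal disparity between the two views.

def triangulate(xl, yl, xr, f, baseline):
    """Return (X, Y, Z) in baseline units from matched pixel coordinates.

    xl, yl: pixel coordinates in the left image (relative to principal point)
    xr:     x coordinate of the same point in the right image
    f:      focal length in pixels; baseline: camera separation
    """
    disparity = xl - xr          # horizontal shift between the two views
    if disparity <= 0:
        raise ValueError("point must be in front of both cameras")
    Z = f * baseline / disparity # depth from similar triangles
    X = xl * Z / f               # back-project through the left camera
    Y = yl * Z / f
    return X, Y, Z

# A point imaged at x=100 px (left) and x=90 px (right), f=1000 px, B=0.5 m:
print(triangulate(100, 50, 90, 1000, 0.5))  # -> (5.0, 2.5, 50.0)
```

Repeating this for every matched feature yields the point cloud from which a surface deformation profile can be built.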
Stereo vision techniques for telescience
NASA Astrophysics Data System (ADS)
Hewett, S.
1990-02-01
The Botanic Experiment is one of the pilot experiments in the Telescience Test Bed program at the ESTEC research and technology center of the European Space Agency. The aim of the Telescience Test Bed is to develop the techniques required by an experimenter using a ground based work station for remote control, monitoring, and modification of an experiment operating on a space platform. The purpose of the Botanic Experiment is to examine the growth of seedlings under various illumination conditions with a video camera from a number of viewpoints throughout the duration of the experiment. This paper describes the Botanic Experiment and the points addressed in developing a stereo vision software package to extract quantitative information about the seedlings from the recorded video images.
Automatic Generation of High Quality DSM Based on IRS-P5 Cartosat-1 Stereo Data
NASA Astrophysics Data System (ADS)
d'Angelo, Pablo; Uttenthaler, Andreas; Carl, Sebastian; Barner, Frithjof; Reinartz, Peter
2010-12-01
IRS-P5 Cartosat-1 high resolution stereo satellite imagery is well suited for the creation of digital surface models (DSMs). A system for highly automated and operational DSM and orthoimage generation based on IRS-P5 Cartosat-1 imagery is presented, with an emphasis on automated processing and product quality. The proposed system processes IRS-P5 level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The described method uses an RPC correction based on DSM alignment instead of reference images with lower lateral accuracy, which results in improved geolocation of the DSMs and orthoimages. Following RPC correction, highly detailed DSMs with 5 m grid spacing are derived using Semiglobal Matching. The proposed method is part of an operational Cartosat-1 processor for the generation of high resolution DSMs. Evaluation of 18 scenes against independent ground truth measurements indicates a mean lateral error (CE90) of 6.7 meters and a mean vertical accuracy (LE90) of 5.1 meters.
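The CE90 and LE90 figures quoted are 90th-percentile error bounds computed from per-point comparisons against ground truth. A minimal nearest-rank sketch; the sample error values below are invented for illustration:

```python
# Compute a 90th-percentile error bound (the statistic behind CE90 for
# horizontal and LE90 for vertical accuracy) from per-point residuals.

def percentile90(errors):
    """Smallest sample value that bounds 90% of the absolute errors."""
    s = sorted(abs(e) for e in errors)
    idx = max(0, int(0.9 * len(s)) - 1)  # simple nearest-rank estimate
    return s[idx]

# Hypothetical per-checkpoint residuals in meters (not the paper's data):
horizontal_err = [1.2, 3.4, 5.0, 2.1, 6.7, 4.4, 3.0, 5.9, 2.8, 7.1]
vertical_err = [0.5, 1.9, 3.2, 4.8, 2.2, 1.1, 5.2, 2.6, 3.9, 0.8]
print(percentile90(horizontal_err))  # CE90-style bound
print(percentile90(vertical_err))    # LE90-style bound
```

In practice a radial (2-D) error is used for CE90 and the elevation difference for LE90, but the percentile computation is the same.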
SIMBIO-SYS for BepiColombo: status and issues.
NASA Astrophysics Data System (ADS)
Flamini, E.; Capaccioni, F.; Cremonese, G.; Palumbo, P.; Formaro, R.; Mugnuolo, R.; Debei, S.; Ficai Veltroni, I.; Dami, M.; Tommasi, L.; SIMBIO-SYS Team
The SIMBIO-SYS (Spectrometer and Imagers for MPO BepiColombo Integrated Observatory SYStem) is a complex instrument suite, part of the scientific payload of the Mercury Planetary Orbiter for the BepiColombo mission, the last of the cornerstone missions of the European Space Agency (ESA) Horizon+ science program. The BepiColombo mission is composed of two scientific satellites: the Mercury Magnetospheric Orbiter (MMO), realized by the Japanese space agency JAXA and devoted to the study of the planet's environment, and the Mercury Planetary Orbiter (MPO), realized by ESA and devoted to the detailed study of the Hermean surface and interior. The SIMBIO-SYS instrument will provide all of the science imaging capability of the BepiColombo MPO spacecraft. It consists of three channels: the STereo imaging Channel (STC), with a broad spectral band in the 400-950 nm range and medium spatial resolution (up to 50 m/px), which will provide a Digital Terrain Model of the entire surface of the planet with an accuracy better than 80 m; the High Resolution Imaging Channel (HRIC), with broad spectral bands in the 400-900 nm range and high spatial resolution (up to 5 m/px), which will provide high-resolution images of about 20% of the surface; and the Visible and near-Infrared Hyperspectral Imaging channel (VIHI), with high spectral resolution (up to 6 nm) in the 400-2000 nm range and spatial resolution up to 100 m/px, which will provide global coverage at 400 m/px with spectral information. SIMBIO-SYS will provide unprecedented high-resolution images, a Digital Terrain Model of the entire surface, and the surface composition over a wide spectral range, at resolutions and coverage higher than those of the MESSENGER mission, with full co-alignment of the three channels.
The main scientific objectives can be summarized as follows: definition of the impact flux in the inner Solar System, based on the impact crater population record; understanding of the accretional model of an end member of the Solar System, based on the type and distribution of mineral species; reconstruction of the surface geology and stratigraphic history, based on the combination of stereo and high-resolution imaging with compositional information from the spectrometer; relative surface ages from impact crater population density and distribution, based on global imaging including the high-resolution mode; surface degradation processes and global resurfacing, derived from the erosional state of impact craters and ejecta; identification of volcanic landforms and styles, using morphological and compositional information; crustal dynamics and mechanical properties of the lithosphere, based on the identification and classification of tectonic structures from visible images and detailed DTMs; surface composition and crustal differentiation, based on the identification and distribution of mineral species as seen by the NIR hyperspectral imager; soil maturity and alteration processes, based on the spectral slope derived from the hyperspectral imager and the colour capabilities of the stereo camera; determination of the moment of inertia of the planet, with the high-resolution imaging channel tracking pairs of surface landmarks observable near periapsis in support of the libration experiment; and surface-atmosphere interaction processes and the origin of the exosphere, since knowledge of the surface composition is crucial to unambiguously identify the source minerals for each of the constituents of Mercury's exosphere. The instrument was realized by Selex-ES under the contract and management of the Italian Space Agency (ASI), which signed an MoU with CNES for the development of the VIHI proximity electronics, the main electronics, and the final instrument calibration. All realization and calibration activities were carried out under the scientific supervision of the SIMBIO-SYS science team. SIMBIO-SYS was delivered to ESA in April 2015 for final integration on the BepiColombo MPO spacecraft.
Semi-autonomous wheelchair system using stereoscopic cameras.
Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T
2009-01-01
This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map, allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras serves the purposes of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment demonstrated the effectiveness of this assistive technology.
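The SAD correspondence search mentioned above can be sketched on a single scanline: for a pixel in the left row, slide a window along the right row and keep the offset (disparity) with the lowest sum of absolute differences. Real systems match 2-D windows over rectified image pairs; this 1-D toy version, with invented intensity values, only illustrates the cost function:

```python
# 1-D Sum of Absolute Differences (SAD) block matching: the disparity at
# a left-image column is the right-image offset whose window minimizes
# the SAD cost. Larger disparities correspond to closer objects.

def sad(a, b):
    """Sum of absolute differences between two equal-length windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_disparity(left_row, right_row, x, win=3, max_disp=5):
    """Disparity at column x of the left row that minimizes the SAD cost."""
    patch = left_row[x:x + win]
    costs = {}
    for d in range(0, max_disp + 1):
        if x - d < 0:            # window would fall off the right image
            break
        costs[d] = sad(patch, right_row[x - d:x - d + win])
    return min(costs, key=costs.get)

left = [10, 10, 80, 90, 70, 10, 10, 10]
right = [80, 90, 70, 10, 10, 10, 10, 10]  # scene shifted 2 px leftward
print(best_disparity(left, right, 2))  # -> 2
```

Feeding each disparity through the usual projection (depth proportional to focal length times baseline over disparity) then yields the 3D point map the abstract describes.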
3D Lunar Terrain Reconstruction from Apollo Images
NASA Technical Reports Server (NTRS)
Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Aleksandr V.
2009-01-01
Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.
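Plane fitting to local surface patches is a recurring step in such pipelines. As an illustration only (the paper's stochastic, Lucas-Kanade-style fit is more elaborate), here is an ordinary least-squares fit of z = a*x + b*y + c to a handful of 3-D points, solved via the 3x3 normal equations:

```python
# Least-squares plane fit z = a*x + b*y + c through (x, y, z) points,
# built from the 3x3 normal equations and solved by Gaussian elimination.

def fit_plane(points):
    """Return [a, b, c] for the least-squares plane z = a*x + b*y + c."""
    # Accumulate the entries of A^T A and A^T z.
    sxx = sxy = sx = syy = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1
        sxz += x * z; syz += y * z; sz += z
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [sxz, syz, sz]
    # Forward elimination to upper-triangular form.
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    # Back substitution.
    p = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        p[i] = (b[i] - sum(A[i][j] * p[j] for j in range(i + 1, 3))) / A[i][i]
    return p

# Points lying exactly on the plane z = 2x + y + 1:
pts = [(0, 0, 1.0), (1, 0, 3.0), (0, 1, 2.0), (1, 1, 4.0)]
print(fit_plane(pts))  # coefficients close to [2.0, 1.0, 1.0]
```

A robust variant would reweight or resample points to suppress mismatched pixels, which is the role the stochastic formulation plays in the pipeline.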
Selection of the InSight landing site
Golombek, M.; Kipp, D.; Warner, N.; Daubar, Ingrid J.; Fergason, Robin L.; Kirk, Randolph L.; Beyer, R.; Huertas, A.; Piqueux, Sylvain; Putzig, N. E.; Campbell, B. A.; Morgan, G. A.; Charalambous, C.; Pike, W. T.; Gwinner, K.; Calef, F.; Kass, D.; Mischna, M. A.; Ashley, J.; Bloom, C.; Wigton, N.; Hare, T.; Schwartz, C.; Gengl, H.; Redmond, L.; Trautman, M.; Sweeney, J.; Grima, C.; Smith, I. B.; Sklyanskiy, E.; Lisano, M.; Benardini, J.; Smrekar, S. E.; Lognonne, P.; Banerdt, W. B.
2017-01-01
The selection of the Discovery Program InSight landing site took over four years from initial identification of possible areas that met engineering constraints, to downselection via targeted data from orbiters (especially Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High-Resolution Imaging Science Experiment (HiRISE) images), to selection and certification via sophisticated entry, descent and landing (EDL) simulations. Constraints on elevation (≤ −2.5 km for sufficient atmosphere to slow the lander), latitude (initially 15°S–5°N and later 3°N–5°N for solar power and thermal management of the spacecraft), ellipse size (130 km by 27 km from ballistic entry and descent), and a load bearing surface without thick deposits of dust, severely limited acceptable areas to western Elysium Planitia. Within this area, 16 prospective ellipses were identified, which lie ∼600 km north of the Mars Science Laboratory (MSL) rover. Mapping of terrains in rapidly acquired CTX images identified especially benign smooth terrain and led to the downselection to four northern ellipses. Acquisition of nearly continuous HiRISE, additional Thermal Emission Imaging System (THEMIS), and High Resolution Stereo Camera (HRSC) images, along with radar data confirmed that ellipse E9 met all landing site constraints: with slopes <15° at 84 m and 2 m length scales for radar tracking and touchdown stability, low rock abundance (<10 %) to avoid impact and spacecraft tip over, instrument deployment constraints, which included identical slope and rock abundance constraints, a radar reflective and load bearing surface, and a fragmented regolith ∼5 m thick for full penetration of the heat flow probe. Unlike other Mars landers, science objectives did not directly influence landing site selection.