Science.gov

Sample records for 3d ladar imagery

  1. Visualization of 3D images from multiple texel images created from fused LADAR/digital imagery

    NASA Astrophysics Data System (ADS)

    Killpack, Cody C.; Budge, Scott E.

    2015-05-01

The ability to create 3D models using registered texel images (fused ladar and digital imagery) is an important topic in remote sensing. These models are generated automatically by registering multiple texel images into a single common reference frame. However, rendering a sequence of independently registered texel images often presents challenges. Although accurately registered, the model textures are often incorrectly overlapped and interwoven when standard rendering techniques are used. Consequently, corrections must be applied after all the primitives have been rendered, by determining the best texture for any viewable fragment in the model. Determining the best texture is difficult, as each texel image remains independent after registration. The depth data are not merged to form a single 3D mesh, which rules out generating a fused texture atlas. It is therefore necessary to determine which textures overlap and how best to combine them dynamically during the render process. The best texture for a particular pixel can be defined using 3D geometric criteria, in conjunction with a real-time, view-dependent ranking algorithm. As a result, overlapping texture fragments can be hidden, exposed, or blended according to their computed measure of reliability.
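The view-dependent ranking idea above can be illustrated with a minimal Python sketch. The score (viewing-direction alignment penalized by capture range) and its weights are assumptions for illustration, not the authors' actual criteria:

```python
import numpy as np

def rank_textures(frag_view_dirs, frag_ranges, cur_view_dir,
                  w_angle=1.0, w_range=0.5):
    """Pick the most reliable of several overlapping texture fragments.

    frag_view_dirs: capture view direction of each fragment (3-vectors)
    frag_ranges:    sensor-to-surface range at capture time
    cur_view_dir:   current rendering view direction
    """
    cur = cur_view_dir / np.linalg.norm(cur_view_dir)
    scores = []
    for v, r in zip(frag_view_dirs, frag_ranges):
        alignment = np.dot(v / np.linalg.norm(v), cur)    # 1.0 = same direction
        scores.append(w_angle * alignment - w_range * r)  # prefer close, aligned captures
    return int(np.argmax(scores))  # index of the "best" fragment for this pixel
```

In a renderer, a ranking of this kind would run per viewable fragment, deciding which overlapping texture to expose and which to hide or blend.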

  2. 3D ladar ATR based on recognition by parts

    NASA Astrophysics Data System (ADS)

    Sobel, Erik; Douglas, Joel; Ettinger, Gil

    2003-09-01

LADAR imaging is unique in its potential to accurately measure the 3D surface geometry of targets. We exploit this 3D geometry to perform automatic target recognition on targets in the domain of military and civilian ground vehicles. Here we present a robust, model-based 3D LADAR ATR system which efficiently searches the target hypothesis space by reasoning hierarchically from vehicle parts up to identification of a whole vehicle with a specific pose and articulation state. The LADAR data consist of one or more 3D point clouds generated by laser returns from ground vehicles viewed from multiple sensor locations. The key to this approach is an automated 3D registration process to precisely align and match multiple data views to model-based predictions of observed LADAR data. We accomplish this registration using robust 3D surface alignment techniques which we have also used successfully in 3D medical image analysis applications. The registration routine seeks to minimize a robust 3D surface distance metric to recover the best six-degree-of-freedom pose and fit. We process the observed LADAR data by first extracting salient parts, matching these parts to model-based predictions, and hierarchically constructing and testing increasingly detailed hypotheses about the identity of the observed target. This cycle of prediction, extraction, and matching efficiently partitions the target hypothesis space based on the distinctive anatomy of the target models and achieves effective recognition by progressing logically from a target's constituent parts up to its complete pose and articulation state.
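The robust 3D surface distance metric minimized during registration is not specified in detail in the abstract; a common robust choice is a trimmed mean of nearest-neighbor distances, sketched below in Python (the trimming fraction is an assumption):

```python
import numpy as np

def robust_surface_distance(pts_a, pts_b, trim=0.8):
    """Trimmed-mean nearest-neighbor distance from cloud A to cloud B.

    Discarding the largest (1 - trim) fraction of residuals makes the
    metric robust to outliers, occlusion, and clutter points.
    """
    # brute-force nearest neighbor for clarity (use a k-d tree at scale)
    d = np.sqrt(((pts_a[:, None, :] - pts_b[None, :, :]) ** 2).sum(-1)).min(axis=1)
    k = max(1, int(trim * len(d)))
    return float(np.sort(d)[:k].mean())
```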

  3. Fusion of multisensor passive and active 3D imagery

    NASA Astrophysics Data System (ADS)

    Fay, David A.; Verly, Jacques G.; Braun, Michael I.; Frost, Carl E.; Racamato, Joseph P.; Waxman, Allen M.

    2001-08-01

    We have extended our previous capabilities for fusion of multiple passive imaging sensors to now include 3D imagery obtained from a prototype flash ladar. Real-time fusion of low-light visible + uncooled LWIR + 3D LADAR, and SWIR + LWIR + 3D LADAR is demonstrated. Fused visualization is achieved by opponent-color neural networks for passive image fusion, which is then textured upon segmented object surfaces derived from the 3D data. An interactive viewer, coded in Java3D, is used to examine the 3D fused scene in stereo. Interactive designation, learning, recognition and search for targets, based on fused passive + 3D signatures, is achieved using Fuzzy ARTMAP neural networks with a Java-coded GUI. A client-server web-based architecture enables remote users to interact with fused 3D imagery via a wireless palmtop computer.

  4. Threat object identification performance for LADAR imagery: comparison of 2-dimensional versus 3-dimensional imagery

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Matthew A.; Driggers, Ronald G.; Redman, Brian; Krapels, Keith A.

    2006-05-01

This research was conducted to determine the change in human observer range performance when LADAR imagery is presented in stereo 3D rather than 2D. It compares the ability of observers to correctly identify twelve common threatening and non-threatening single-handed objects (e.g., a pistol versus a cell phone). Images were collected with the Army Research Lab/Office of Naval Research (ARL/ONR) Short Wave Infrared (SWIR) Imaging LADAR. A perception experiment, utilizing both military and civilian observers, presented subjects with images of varying angular resolution. The results of this experiment were used to create identification performance curves for the 2D and 3D imagery, which show probability of identification as a function of range. Analysis of the results indicates no evidence of a statistically significant difference in performance between 2D and 3D imagery.
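Identification-versus-range curves of this kind are conventionally summarized with an empirical logistic form (the "target transfer probability function" used in sensor performance modeling). A Python sketch follows; the exponent form and coefficient values are assumptions taken from common practice, not from this paper:

```python
def prob_id(n, n50, e_base=1.51, e_slope=0.24):
    """Probability of identification given n resolvable cycles on target.

    n50 is the cycle criterion at which P = 0.5; as range grows, n shrinks
    and P(ID) falls. Coefficients are illustrative, not from this study.
    """
    e = e_base + e_slope * (n / n50)
    x = (n / n50) ** e
    return x / (1.0 + x)
```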

  5. Spectral ladar: towards active 3D multispectral imaging

    NASA Astrophysics Data System (ADS)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images, which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions at distances of up to hundreds of meters. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands, with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects, which is not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed in the context of autonomous vehicle navigation and target recognition.
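Because each band is independently range resolved, returns can be regrouped by range bin so that an obscurant and the surface behind it each receive their own spectrum. A simplified Python sketch (the bin width and data layout are assumptions for illustration):

```python
import numpy as np

def build_spectrum(returns_per_band, bin_width=0.5):
    """Group per-band (range, intensity) returns into per-range spectra.

    returns_per_band: list over spectral bands, each a list of
    (range_m, intensity) detections from multiple-return processing.
    Returns {range_m: spectrum vector} with one entry per occupied bin,
    so partially obscured objects keep separate spectra.
    """
    n_bands = len(returns_per_band)
    spectra = {}
    for band, returns in enumerate(returns_per_band):
        for rng, inten in returns:
            key = round(rng / bin_width)           # quantize range to a bin
            spectra.setdefault(key, np.zeros(n_bands))
            spectra[key][band] = inten
    return {k * bin_width: v for k, v in spectra.items()}
```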

  6. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  7. The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system

    NASA Astrophysics Data System (ADS)

    Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard

    2010-04-01

The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point cloud) in GPS-denied regions from a sequence of co-boresighted visible and 3D LADAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army, and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for spatial and geometric alignment of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range, and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point cloud). In this paper, we describe the development and application of 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range, and intensity estimates of imagery collected during urban terrain mapping using a co-boresighted, commercially available digital video camera with a 640×480-pixel focal plane and a 3D FLASH LADAR. Various representations of the reconstructed point clouds for the drive-through data are also presented.
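A minimal ICP iteration, nearest-neighbor correspondences followed by a closed-form (SVD/Kabsch) rigid-transform solve, can be sketched in Python; this is the textbook variant, not the specific variation used in the paper:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form (Kabsch/SVD) least-squares rotation R and translation t
    mapping src onto dst, given one-to-one correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Basic ICP: alternate nearest-neighbor matching and rigid alignment."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbors for clarity (use a k-d tree at scale)
        nn = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        R, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ R.T + t
    return cur
```

Each iteration re-matches points and re-solves the pose, so the alignment tightens as long as the initial misalignment is small enough for mostly correct matches.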

  8. Characterization measurements of ASC FLASH 3D ladar

    NASA Astrophysics Data System (ADS)

    Larsson, Håkan; Gustafsson, Frank; Johnson, Bruce; Richmond, Richard; Armstrong, Ernest

    2009-09-01

As part of the project agreement between the Swedish Defence Research Agency (FOI) and the United States Air Force Research Laboratory (AFRL), a joint field trial was performed in Sweden during two weeks in January 2009. The main purpose of this trial was to characterize AFRL's latest version of the ASC (Advanced Scientific Concepts [1]) FLASH 3D LADAR sensor. The measurements were performed mainly in FOI's optical hall, whose 100 m indoor range offers measurements under controlled conditions, minimizing effects such as atmospheric turbulence. Data were also acquired outdoors in both forest and urban scenarios, using vehicles and humans as targets, with the purpose of acquiring data from more dynamic platforms to assist in further algorithm development. This paper shows examples of the acquired data and presents initial results.

  9. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

The Space Dynamics Laboratory (SDL), working with the Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small-SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms, and queuing. The small-SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m³, and <350 W. The system is modeled using LadarSIM, a MATLAB® and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We present the concept design and modeled performance predictions.

  10. ROIC for gated 3D imaging LADAR receiver

    NASA Astrophysics Data System (ADS)

    Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

    2013-09-01

Time-of-flight laser range finding, deep-space communications, and scanning video imaging are three applications requiring very low-noise optical receivers to detect fast, weak optical signals. The HgCdTe electron-initiated avalanche photodiode (e-APD) in linear multiplication mode is the detector of choice thanks to its high quantum efficiency, high gain at low bias, high bandwidth, and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 µm pitch was designed for a gated 3D-LADAR optical receiver. The ROIC operates at 77 K and comprises the unit cell circuit, column-level circuit, timing control, bias circuit, and output driver. The unit cell circuit is a key component, consisting of a preamplifier, correlated double sampling (CDS), a bias circuit, and a timing control module. The preamplifier uses a capacitive-feedback transimpedance amplifier (CTIA) structure with two capacitors that provide switchable capacitance for passive/active dual-mode imaging. The core of the column-level circuit is a precision multiply-by-two stage implemented with switched-capacitor circuitry, whose operating characteristics make it well suited to ROIC signal processing. The output driver is a simple unity-gain buffer; because the signal is amplified in the column-level circuit, the buffer uses a rail-to-rail amplifier. In active imaging mode, the integration time is 80 ns; for integrated currents from 200 nA to 4 µA, the circuit shows nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns; for integrated currents from 1 nA to 20 nA, the nonlinearity is also less than 1%.
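The CTIA front end integrates photocurrent onto its feedback capacitor, and nonlinearity is typically judged against a best-fit line over the stated current range. A small Python sketch of the ideal relation and a plausible metric (the exact metric used in the paper is not given, so this is an assumption):

```python
import numpy as np

def ctia_output(i_in, t_int, c_fb):
    """Ideal CTIA transfer: V_out = I * t_int / C_fb (charge on feedback cap)."""
    return i_in * t_int / c_fb

def nonlinearity_pct(currents, volts):
    """Max deviation from the best-fit line, as % of the full-scale swing."""
    currents = np.asarray(currents, dtype=float)
    volts = np.asarray(volts, dtype=float)
    a, b = np.polyfit(currents, volts, 1)         # best-fit straight line
    resid = volts - (a * currents + b)
    return 100.0 * np.max(np.abs(resid)) / (volts.max() - volts.min())
```

A measured active-mode transfer curve (80 ns integration, 200 nA to 4 µA) would be checked against the <1% figure with `nonlinearity_pct`.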

  11. MBE based HgCdTe APDs and 3D LADAR sensors

    NASA Astrophysics Data System (ADS)

    Jack, Michael; Asbrock, Jim; Bailey, Steven; Baley, Diane; Chapman, George; Crawford, Gina; Drafahl, Betsy; Herrin, Eileen; Kvaas, Robert; McKeag, William; Randall, Valerie; De Lyon, Terry; Hunter, Andy; Jensen, John; Roberts, Tom; Trotta, Patrick; Cook, T. Dean

    2007-04-01

Raytheon is developing HgCdTe APD arrays and sensor chip assemblies (SCAs) for scanning and staring LADAR systems. The nonlinear characteristics of APDs operating in moderate-gain mode place severe requirements on layer thickness and doping uniformity as well as defect density. MBE-based HgCdTe APD arrays, engineered for high performance, meet these stringent requirements of low defect density, excellent uniformity, and reproducibility. In situ controls for alloy composition and substrate temperature have been implemented at HRL, LLC and Raytheon Vision Systems and enable consistent run-to-run results. The novel epitaxial design, based on a separate absorption-multiplication (SAM) architecture, enables the unique advantages of HgCdTe to be realized: tunable wavelength, low noise, high fill factor, low crosstalk, and ambient operation. Focal planes have been built by integrating MBE detector arrays processed in a 2 x 128 format with a 2 x 128 scanning ROIC. The ROIC reports both range and intensity and can detect multiple laser returns, with each pixel autonomously reporting its return. FPAs show exceptionally good bias uniformity (<1%) at an average gain of 10. A recent breakthrough in device design has resulted in APDs operating at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW), and GHz bandwidth. 3D LADAR sensors utilizing these FPAs have been integrated and demonstrated at both Raytheon Missile Systems and the Naval Air Warfare Center Weapons Division at China Lake. Excellent spatial and range resolution has been achieved, with 3D imagery demonstrated at both short and long range. Ongoing development, under an Air Force-sponsored MANTECH program, of high-performance HgCdTe MBE APDs grown on large silicon wafers promises significant FPA cost reduction, both by increasing the number of arrays on a given wafer and by enabling automated processing.

  12. Ultra-Compact, High-Resolution LADAR System for 3D Imaging

    NASA Technical Reports Server (NTRS)

    Xu, Jing; Gutierrez, Roman

    2009-01-01

An eye-safe LADAR system weighs under 500 grams and has a range resolution of 1 mm at 10 m. The laser uses a tiny, adjustable microelectromechanical system (MEMS) mirror, developed at SiWave, to sweep the laser frequency. The laser device itself is small (70x50x13 mm). The LADAR builds on mature fiber-optic telecommunication technologies throughout the system, making the design an efficient performer. Its small size and light weight make the system useful for commercial and industrial applications including surface damage inspection, range measurement, and 3D imaging.

  13. Use of laser radar imagery in optical pattern recognition: the Optical Processor Enhanced Ladar (OPEL) Program

    NASA Astrophysics Data System (ADS)

    Goldstein, Dennis H.; Mills, Stuart A.; Dydyk, Robert B.

    1998-03-01

    The Optical Processor Enhanced Ladar (OPEL) program is designed to evaluate the capabilities of a seeker obtained by integrating two state-of-the-art technologies, laser radar, or ladar, and optical correlation. The program is a thirty-two month effort to build, optimize, and test a breadboard seeker system (the OPEL System) that incorporates these two promising technologies. Laser radars produce both range and intensity image information. Use of this information in an optical correlator is described. A correlator with binary phase input and ternary amplitude and phase filter capability is assumed. Laser radar imagery was collected on five targets over 360 degrees of azimuth from 3 elevation angles. This imagery was then processed to provide training sets in preparation for filter construction. This paper reviews the ladar and optical correlator technologies used, outlines the OPEL program, and describes the OPEL system.

  14. Multi-static networked 3D ladar for surveillance and access control

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ogirala, S. S. R.; Hu, B.; Le, Han Q.

    2007-04-01

A theoretical design and simulation of a 3D ladar system concept for surveillance, intrusion detection, and access control is described. It is a non-conventional system architecture that consists of: i) a multi-static configuration with an arbitrarily scalable number of transmitters (Tx's) and receivers (Rx's) that form an optical wireless code-division-multiple-access (CDMA) network, and ii) a flexible system architecture with modular plug-and-play components that can be deployed at any facility with arbitrary topology. Affordability is a driving consideration, and a key feature for low cost is the asymmetric use of many inexpensive Rx's in conjunction with fewer Tx's, which are generally more expensive. The Rx's are spatially distributed close to the surveyed area for large coverage, and are capable of receiving signals from multiple Tx's with moderate laser power. The system produces sensing information that scales as N×M, where N and M are the numbers of Tx's and Rx's, as opposed to the linear ~N scaling of a non-networked system. Also, for target positioning, besides laser pointing direction and time-of-flight, the algorithm includes multiple-point-of-view image fusion and triangulation for enhanced accuracy, which is not applicable to non-networked monostatic ladars. Simulation and scaled model experiments on some aspects of this concept are discussed.
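In a multi-static geometry, each Tx-Rx time-of-flight measurement constrains the target to an ellipsoid with the transmitter and receiver as foci, and several pairs can be fused by nonlinear least squares. A Gauss-Newton sketch in Python (the geometry and noise-free measurements are assumptions, not the paper's algorithm):

```python
import numpy as np

def locate(measurements, x0, iters=50):
    """Gauss-Newton solve for a target position from bistatic range sums.

    measurements: list of (tx_pos, rx_pos, range_sum) triples, where
    range_sum = |x - tx| + |x - rx| puts x on an ellipsoid with foci tx, rx.
    x0: initial position guess (3-vector).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        res, J = [], []
        for tx, rx, rsum in measurements:
            dt, dr = x - tx, x - rx
            nt, nr = np.linalg.norm(dt), np.linalg.norm(dr)
            res.append(nt + nr - rsum)          # ellipsoid residual
            J.append(dt / nt + dr / nr)         # gradient of the residual
        dx, *_ = np.linalg.lstsq(np.array(J), -np.array(res), rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:          # converged
            break
    return x
```

With N transmitters and M receivers, up to N×M such residuals are available per target, which is the information-scaling advantage the abstract describes.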

  15. Processing 3D flash LADAR point-clouds in real-time for flight applications

    NASA Astrophysics Data System (ADS)

    Craig, R.; Gravseth, I.; Earhart, R. P.; Bladt, J.; Barnhill, S.; Ruppert, L.; Centamore, C.

    2007-04-01

Ball Aerospace & Technologies Corp. has demonstrated real-time processing of 3D imaging LADAR point-cloud data to produce the industry's first time-of-flight (TOF) 3D video capability. This capability is uniquely suited to the rigorous demands of space and airborne flight applications and holds great promise in the area of autonomous navigation. It will provide long-range, three-dimensional video information to autonomous flight software or pilots for immediate use in rendezvous and docking, proximity operations, landing, surface vision systems, and automatic target recognition and tracking. This is enabled by our new generation of FPGA-based "pixel-tube" processors and coprocessors and their associated algorithms, which have led to a number of advancements in high-speed wavefront processing along with additional advances in dynamic camera control and space laser designs based on Ball's CALIPSO LIDAR. This evolution in LADAR is made possible by moving the mechanical complexity required for a scanning system into the electronics, where production, integration, testing, and life-cycle costs can be significantly reduced. This technique requires a state-of-the-art TOF read-out integrated circuit (ROIC) attached to a sensor array to collect high-resolution temporal data, which is then processed through FPGAs. The number of calculations required to process the data is greatly reduced by the fact that all points are captured at the same time and are thus correlated. This correlation allows extremely efficient FPGA processing. This capability has been demonstrated in prototype form at both Marshall Space Flight Center and Langley Research Center on targets that represent docking and landing scenarios. This report outlines many aspects of this work as well as aspects of our recent testing at Marshall's Flight Robotics Laboratory.
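The per-pixel conversion from a simultaneously captured TOF frame to a point cloud is simple precisely because every pixel shares one timestamp. A Python sketch under an assumed pinhole camera model (intrinsics are illustrative):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_to_points(tof, fx, fy, cx, cy):
    """Convert a flash-ladar TOF frame (seconds, 2D array) to 3D points.

    fx, fy, cx, cy: assumed pinhole intrinsics (focal lengths, principal point).
    Returns an (H, W, 3) array of points in the camera frame.
    """
    rng = 0.5 * C * tof                       # round-trip time -> one-way range
    v, u = np.indices(tof.shape)
    x = (u - cx) / fx                         # unit-focal ray components
    y = (v - cy) / fy
    norm = np.sqrt(x * x + y * y + 1.0)       # ray length at unit focal depth
    return np.stack([x, y, np.ones_like(x)], axis=-1) * (rng / norm)[..., None]
```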

  16. 3D imaging LADAR with linear array devices: laser, detector and ROIC

    NASA Astrophysics Data System (ADS)

    Kameyama, Shumpei; Imaki, Masaharu; Tamagawa, Yasuhisa; Akino, Yosuke; Hirai, Akihito; Ishimura, Eitaro; Hirano, Yoshihito

    2009-07-01

This paper introduces the recent development of 3D imaging LADAR (LAser Detection And Ranging) at Mitsubishi Electric Corporation. The system consists of in-house linear-array key devices: the laser, the detector, and the ROIC (Read-Out Integrated Circuit). The laser transmitter is a high-power, compact planar-waveguide array laser at a wavelength of 1.5 micron. The detector array consists of low-excess-noise avalanche photodiodes (APDs) using an InAlAs multiplication layer. The analog ROIC array, fabricated in a SiGe BiCMOS process, includes the transimpedance amplifiers (TIAs), peak intensity detectors, time-of-flight (TOF) detectors, and multiplexers for read-out. This device achieves good detection of small signals by optimizing the peak intensity detection circuit. By combining these devices with a one-dimensional fast scanner, real-time 3D range images can be obtained. After explaining the key devices, some 3D imaging results are demonstrated using the single-element key devices. Imaging using the developed array devices is planned for the near future.
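Per-pixel peak-intensity and TOF detection of the kind this ROIC performs in analog hardware can be mimicked digitally; the parabolic sub-sample refinement below is an assumption added for illustration, not part of the described circuit:

```python
import numpy as np

def detect_return(waveform, t0, dt, threshold):
    """Peak-intensity and TOF detection on one pixel's sampled waveform.

    waveform: sampled return intensities; t0: time of first sample;
    dt: sample spacing. Returns (tof, peak) or None if below threshold.
    """
    idx = int(np.argmax(waveform))
    if waveform[idx] < threshold:
        return None                                   # no detectable return
    if 0 < idx < len(waveform) - 1:
        # parabolic interpolation around the peak for sub-sample timing
        y0, y1, y2 = waveform[idx - 1:idx + 2]
        denom = y0 - 2 * y1 + y2
        frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    tof = t0 + (idx + frac) * dt
    return tof, float(waveform[idx])
```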

  17. Maritime target identification in flash-ladar imagery

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter; Hammer, Marcus

    2012-05-01

    The paper presents new techniques and processing results for automatic segmentation, shape classification, generic pose estimation, and model-based identification of naval vessels in laser radar imagery. The special characteristics of focal plane array laser radar systems such as multiple reflections and intensity-dependent range measurements are incorporated into the algorithms. The proposed 3D model matching technique is probabilistic, based on the range error distribution, correspondence errors, the detection probability of potentially visible model points and false alarm errors. The match algorithm is robust against incomplete and inaccurate models, each model having been generated semi-automatically from a single range image. A classification accuracy of about 96% was attained, using a maritime database with over 8000 flash laser radar images of 146 ships at various ranges and orientations together with a model library of 46 vessels. Applications include military maritime reconnaissance, coastal surveillance, harbor security and anti-piracy operations.

  18. Verification of a 3-D terrain mapping LADAR on various materials in different environments

    NASA Astrophysics Data System (ADS)

    Edwards, Lulu; Brown, E. Ray; Jersey, Sarah R.

    2010-01-01

    A field validation of a laser detection and ranging (LADAR) system was conducted by the U.S. Army Engineer Research and Development Center (ERDC), Vicksburg, Mississippi. The LADAR system, a commercial-off-the-shelf (COTS) LADAR system custom-modified by Autonomous Solutions, Inc. (ASI), was tested for accuracy in measuring terrain geometry. A verification method was developed to compare the LADAR dataset to a ground-truth dataset that consisted of total station measurements. Three control points were measured and used to align the two datasets. The influence of slopes, surface materials, light, fog, and dust were investigated. The study revealed that slopes only affected measurements when the terrain was obscured from the LADAR system, and ambient light conditions did not significantly affect the LADAR measurements. The accuracy of the LADAR system, which was equipped with fog correction, was adversely affected by particles suspended in air, such as fog or dust. Also, in some cases the material type had an effect on the accuracy of the LADAR measurements.

  19. Human and tree classification based on a model using 3D ladar in a GPS-denied environment

    NASA Astrophysics Data System (ADS)

    Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2013-05-01

This study describes a method to classify humans and trees by extracting their geometric and statistical features from data obtained with a 3D LADAR. In a wooded, GPS-denied environment it is difficult to identify the location of unmanned ground vehicles, and it is also difficult to properly recognize the environment in which these vehicles move. In this study, using the point cloud data obtained via 3D LADAR, a method to extract the features of humans, trees, and other objects within an environment was implemented and verified through the processes of segmentation, feature extraction, and classification. First, for the segmentation, the radially bounded nearest neighbor method was applied. Second, for the feature extraction, each segmented object was divided into three parts, and their geometric and statistical features were extracted. A human was divided into the head, trunk, and legs; a tree into the top, middle, and bottom. The geometric features were the variance of the x-y data about the center of each part of an object and the distances between the central points of the parts, found using K-means clustering. The statistical features were the variances of the individual parts. In this study, three, six, and six features were extracted, respectively, for a total of 15 features. Finally, after training on the extracted data with an artificial neural network, new data were classified. This study shows the results of an experiment that applied the proposed algorithm with a vehicle equipped with 3D LADAR in a thickly forested area, a GPS-denied environment. A total of 5,158 segments were obtained, and the classification rates for humans and trees were 82.9% and 87.4%, respectively.
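The three-part split and per-part variance features can be sketched in Python; the exact features here (height-slab partition, x-y spread, inter-part center distances) are a plausible reading of the description, not the authors' exact definitions:

```python
import numpy as np

def part_features(points, n_parts=3):
    """Split a segment into vertical slabs (e.g. head/trunk/legs for a human,
    top/middle/bottom for a tree) and extract simple per-part features."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max() + 1e-9, n_parts + 1)
    feats, centers = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        part = points[(z >= lo) & (z < hi)]
        if len(part) == 0:
            feats.extend([0.0, 0.0])
            centers.append(np.zeros(2))
            continue
        centers.append(part[:, :2].mean(axis=0))
        feats.append(float(part[:, :2].var()))   # geometric: x-y spread about center
        feats.append(float(part[:, 2].var()))    # statistical: height variance
    for a, b in zip(centers[:-1], centers[1:]):  # distances between part centers
        feats.append(float(np.linalg.norm(a - b)))
    return np.array(feats)
```

A feature vector of this kind, computed per segment, would then feed the neural-network classifier stage.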

  20. Flattop beam illumination for 3D imaging ladar with simple optical devices in the wide distance range

    NASA Astrophysics Data System (ADS)

    Tsuji, Hidenobu; Nakano, Takayuki; Matsumoto, Yoshihiro; Kameyama, Shumpei

    2016-04-01

We have developed an illumination optical system for 3D imaging ladar (laser detection and ranging) that forms a flattop beam shape by transforming a Gaussian beam over a wide distance range. The illumination is achieved by beam division and recombination using a prism and a negative-powered lens. The optimum condition for the transformation by the optical system is derived. It is confirmed that the flattop distribution can be formed over a wide range of propagation distances, from 1 to 1000 m. The experimental result with the prototype is in good agreement with the calculation.
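The effect of dividing a Gaussian beam and recombining laterally shifted copies can be checked numerically. The two-Gaussian sum below, with the offset chosen so the on-axis curvature vanishes, is a simplified 1-D illustration, not the paper's prism-and-lens design:

```python
import numpy as np

def flat_ratio(x, profile, half_width):
    """Min/max ratio over the central region: 1.0 = perfectly flat."""
    core = profile[np.abs(x) <= half_width]
    return float(core.min() / core.max())

x = np.linspace(-3.0, 3.0, 2001)
gauss = np.exp(-2.0 * x ** 2)          # original Gaussian irradiance profile

# Recombine two laterally shifted copies; for this beam width, an offset of
# 0.5 makes the second derivative of the sum vanish at x = 0 (flattop condition).
offset = 0.5
flattop = np.exp(-2.0 * (x - offset) ** 2) + np.exp(-2.0 * (x + offset) ** 2)
```

Over the central half-width, `flat_ratio` comes out markedly closer to 1 for the recombined profile than for the single Gaussian.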

  21. Long-range imaging ladar flight test

    NASA Astrophysics Data System (ADS)

    Brandt, James; Steiner, Todd D.; Mandeville, William J.; Dinndorf, Kenneth M.; Krasutsky, Nick J.; Minor, John L.

    1995-06-01

Wright Laboratory and Loral Vought Systems (LVS) have been involved for the last nine years in the research and development of high-power diode-pumped solid-state lasers for medium- to long-range laser radar (LADAR) seekers for tactical air-to-ground munitions. LVS has led three key LADAR programs at Wright Lab: the Submunition Guidance Program (Subguide), the Low Cost Anti-Armor Submunition Program (LOCAAS), and the Diode Laser and Detector Array Development Program (3-D). This paper discusses recent advances through the 3-D program that provide the opportunity to obtain three-dimensional laser radar imagery in captive flight at a range of 5 km.

  22. Automatic Reconstruction of Spacecraft 3D Shape from Imagery

    NASA Astrophysics Data System (ADS)

    Poelman, C.; Radtke, R.; Voorhees, H.

    We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
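Silhouette carving keeps a voxel only if it projects inside the object silhouette in every frame. A minimal Python sketch with 3x4 projection matrices (the camera model here is an assumption for illustration, not the system's actual calibration):

```python
import numpy as np

def carve(voxels, cameras, silhouettes):
    """Keep only voxels whose projection falls inside every silhouette.

    voxels:      (N, 3) candidate voxel centers
    cameras:     list of 3x4 projection matrices
    silhouettes: list of boolean images (True = object)
    """
    keep = np.ones(len(voxels), dtype=bool)
    for P, sil in zip(cameras, silhouettes):
        h = voxels @ P[:, :3].T + P[:, 3]              # homogeneous projection
        u = np.round(h[:, 0] / h[:, 2]).astype(int)    # pixel column
        v = np.round(h[:, 1] / h[:, 2]).astype(int)    # pixel row
        inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        ok = np.zeros(len(voxels), dtype=bool)
        ok[inside] = sil[v[inside], u[inside]]
        keep &= ok                                     # consistency with this view
    return voxels[keep]
```

The recovered motion solution supplies the projection matrices; carving then densifies the sparse shape-from-motion point cloud into a solid model.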

  3. Improvements in the Visualization of Stereoscopic 3D Imagery

    NASA Astrophysics Data System (ADS)

    Gurrieri, Luis E.

    2015-09-01

    A pleasant visualization of stereoscopic imagery must take into account factors that may produce eye strain and fatigue. Fortunately, our binocular vision system has embedded mechanisms to perceive depth for extended periods of time without producing eye fatigue; however, stereoscopic imagery may still induce visual discomfort in certain displaying scenarios. An important source of eye fatigue originates in the conflict between vergence eye movements and focusing mechanisms. Today's eye-tracking technology makes it possible to know the viewer's gaze direction; hence, 3D imagery can be dynamically corrected based on this information. In this paper, I introduce a method to improve the visualization of stereoscopic imagery on planar displays by emulating the vergence and accommodation mechanisms of binocular human vision. Unlike other methods that improve visual comfort by introducing depth distortions into the stereoscopic visual media, this technique aims to produce a gentler and more natural binocular viewing experience without distorting the original depth of the scene.

  4. ATA algorithm suite for co-boresighted pmmw and ladar imagery

    NASA Astrophysics Data System (ADS)

    Stevens, Mark R.; Snorrason, Magnus; Ablavsky, Vitaly; Amphay, Sengvieng A.

    2001-08-01

    The need for air-to-ground missiles with day/night, adverse weather and pinpoint accuracy Autonomous Target Acquisition (ATA) seekers is essential for today's modern warfare scenarios. Passive millimeter wave (PMMW) sensors have the ability to see through clouds; in fact they tend to show metallic objects in high contrast regardless of weather conditions. However, their resolution is very low when compared with other ATA sensors such as laser radar (LADAR). We present an ATA algorithm suite that combines the superior target detection potential of PMMW with the high-quality segmentation and recognition abilities of LADAR. Preliminary detection and segmentation results are presented for a set of image pairs of military vehicles that were collected for this project using an 89 GHz, 18 inch aperture PMMW sensor from TRW and a 1.06 μm high-resolution LADAR.

  5. Anti-ship missile tracking with a chirped amplitude modulation ladar

    NASA Astrophysics Data System (ADS)

    Redman, Brian C.; Stann, Barry L.; Ruff, William C.; Giza, Mark M.; Aliberti, Keith; Lawler, William B.

    2004-09-01

    Shipboard infrared search and track (IRST) systems can detect sea-skimming anti-ship missiles at long ranges. Since IRST systems cannot measure range and velocity, they have difficulty distinguishing missiles from slowly moving false targets and clutter. ARL is developing a ladar based on its patented chirped amplitude modulation (AM) technique to provide unambiguous range and velocity measurements of targets handed over to it by the IRST. Using the ladar's range and velocity data, false alarms and clutter objects will be distinguished from valid targets. If the target is valid, its angular location, range, and velocity will be used to update the target track until remediation has been effected. By using an array receiver, ARL's ladar can also provide 3D imagery of potential threats in support of force protection. The ladar development program will be accomplished in two phases. In Phase I, currently in progress, ARL is designing and building a breadboard ladar test system for proof-of-principle static platform field tests. In Phase II, ARL will build a brassboard ladar test system that will meet operational goals in shipboard testing against realistic targets. The principles of operation for the chirped AM ladar for range and velocity measurements, the ladar performance model, and the top-level design for the Phase I breadboard are presented in this paper.
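
    The unambiguous range and velocity measurement rests on the same arithmetic as a triangular-chirp FMCW radar: the up-chirp and down-chirp beat frequencies split into a range-induced and a Doppler-induced component. The sketch below uses the textbook relations; the sign conventions and the choice of effective wavelength (for a chirped-AM system, that of the RF modulation rather than the optical carrier) are assumptions for illustration, not ARL's published design.

```python
C = 299_792_458.0  # speed of light, m/s

def range_and_velocity(f_up, f_down, bandwidth, sweep_time, wavelength):
    """Recover range and line-of-sight velocity from the beat frequencies
    of an up/down triangular chirp. Textbook FMCW relations; signs assume
    an approaching target raises the down-chirp beat frequency."""
    f_range = 0.5 * (f_up + f_down)      # range-induced component
    f_doppler = 0.5 * (f_down - f_up)    # Doppler-induced component
    rng = f_range * C * sweep_time / (2.0 * bandwidth)
    vel = f_doppler * wavelength / 2.0
    return rng, vel
```

    With range and velocity in hand, slow clutter can be rejected even when it falls inside the IRST's angular track gate.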

  6. Advances in HgCdTe APDs and LADAR Receivers

    NASA Technical Reports Server (NTRS)

    Bailey, Steven; McKeag, William; Wang, Jinxue; Jack, Michael; Amzajerdian, Farzin

    2010-01-01

    Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high-performance detectors with gain, i.e., APDs, with very low noise readout integrated circuits. Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns with both time and intensity reported, enabling full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs operating at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW), and GHz bandwidths; these devices have demonstrated linear-mode photon counting. SCAs utilizing these high-performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In this presentation we will review progress in high-resolution scanning, staring, and ultra-high-sensitivity photon-counting LADAR sensors.

  7. Imaging through obscurants with a heterodyne detection-based ladar system

    NASA Astrophysics Data System (ADS)

    Reibel, Randy R.; Roos, Peter A.; Kaylor, Brant M.; Berg, Trenton J.; Curry, James R.

    2014-06-01

    Bridger Photonics has been researching and developing a ladar system based on heterodyne detection for imaging through brownout and other DVEs. An FMCW ladar system provides several advantages over direct-detect pulsed time-of-flight systems, including: 1) higher average powers; 2) single-photon sensitivity while remaining tolerant to strong return signals; 3) Doppler sensitivity for clutter removal; and 4) a more flexible system for sensing during various stages of flight. In this paper, we provide a review of our sensor, discuss lessons learned during various DVE tests, and show our latest 3D imagery.

  8. Anti-ship missile tracking with a chirped AM ladar - Update: design, model predictions, and experimental results

    NASA Astrophysics Data System (ADS)

    Redman, Brian; Ruff, William; Stann, Barry; Giza, Mark; Lawler, William; Dammann, John; Potter, William

    2005-05-01

    Shipboard infrared search and track (IRST) systems can detect sea-skimming, anti-ship missiles at long ranges. Since IRST systems cannot measure range and line-of-sight (LOS) velocity, they have difficulty distinguishing missiles from false targets and clutter. In a joint Army-Navy program, the Army Research Laboratory (ARL) is developing a ladar based on the chirped amplitude modulation (AM) technique to provide range and velocity measurements of potential targets handed over by the distributed aperture system - IRST (DAS-IRST) being developed by the Naval Research Laboratory (NRL) and sponsored by the Office of Naval Research (ONR). Using the ladar's range and velocity data, false alarms and clutter will be eliminated, and valid missile targets' tracks will be updated. By using an array receiver, ARL's ladar will also provide 3D imagery of potential threats for force protection/situational awareness. The concept of operation, the Phase I breadboard ladar design and performance model results, and the Phase I breadboard ladar development program were presented in paper 5413-16 at last year's symposium. This paper will present updated design and performance model results, as well as recent laboratory and field test results for the Phase I breadboard ladar. Implications of the Phase I program results on the design, development, and testing of the Phase II brassboard ladar will also be discussed.

  9. Characterization of scannerless ladar

    NASA Astrophysics Data System (ADS)

    Monson, Todd C.; Grantham, Jeffrey W.; Childress, Steve W.; Sackos, John T.; Nellums, Robert O.; Lebien, Steve M.

    1999-05-01

    Scannerless laser radar (LADAR) is the next revolutionary step in laser radar technology. It has the potential to dramatically increase the image frame rate over raster-scanned systems while eliminating mechanical moving parts. The system presented here uses a negative lens to diverge the light from a pulsed laser to floodlight-illuminate a target. Return light is collected by a commercial camera lens, an image intensifier tube applies a modulated gain, and a relay lens focuses the resulting image onto a commercial CCD camera. To produce range data, a minimum of three snapshots is required while modulating the gain of the image intensifier tube's microchannel plate (MCP) at a MHz rate. Since November 1997 the scannerless LADAR designed by Sandia National Laboratories has undergone extensive testing. It has been taken on numerous field tests and has imaged calibrated panels at distances up to 1 km on an outdoor range. Images have been taken at ranges over a kilometer and can be acquired at much longer ranges with modified range-gate settings. Sample imagery and potential applications are presented here. The accuracy of range imagery produced by this scannerless LADAR has been evaluated, and the range resolution was found to be approximately 15 cm. Its sensitivity was also quantified and found to be substantially better than that of raster-scanned direct-detection LADAR systems. Additionally, the effect of the number of snapshots, and the phase spacing between them, on the quality of the range data has been evaluated. Overall, the impressive results produced by scannerless LADAR are ideal for autonomous munitions guidance and various other applications.
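
    The three-snapshot range recovery works like classic phase-stepped AMCW estimation: each snapshot samples the correlation of the return with the MCP gain at a different phase offset, and the modulation phase (hence range) falls out of a discrete Fourier sum. A hedged sketch of that arithmetic, assuming equal phase steps of 2π/N (not Sandia's actual implementation):

```python
import math

def range_from_snapshots(samples, f_mod):
    """Estimate range from N >= 3 intensity snapshots taken with the
    receiver gain modulated at f_mod, with equal phase steps of 2*pi/N.
    Assumes samples[k] = A + B*cos(phi + 2*pi*k/N). The result wraps at
    the ambiguity interval c/(2*f_mod) (about 15 m at 10 MHz)."""
    n = len(samples)
    s = sum(v * math.sin(2 * math.pi * k / n) for k, v in enumerate(samples))
    c_ = sum(v * math.cos(2 * math.pi * k / n) for k, v in enumerate(samples))
    phi = math.atan2(-s, c_) % (2 * math.pi)  # recovered modulation phase
    return 299_792_458.0 * phi / (4 * math.pi * f_mod)
```

    Using more than three snapshots overdetermines the three unknowns (offset, amplitude, phase), which is why the abstract's study of snapshot count and phase spacing matters for range quality.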

  10. High Accuracy 3D Processing of Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Gruen, A.; Zhang, L.; Kocaman, S.

    2007-01-01

    Automatic DSM/DTM generation reproduces not only general features but also detailed features of the terrain relief, with height accuracy of around 1 pixel in cooperative terrain: RMSE values of 1.3-1.5 m (1.0-2.0 pixels) for IKONOS and 2.9-4.6 m (0.5-1.0 pixels) for SPOT5 HRS. For 3D city modeling, the manual and semi-automatic feature extraction capability of SAT-PP provides a good basis. The tools of SAT-PP allowed stereo-measurement of points on the roofs in order to generate a 3D city model with CCM. The results show that building models with main roof structures can be successfully extracted from HRSI. As expected, more details are visible with QuickBird.

  11. Impact of Building Heights on 3D Urban Density Estimation from Spaceborne Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, Feifei; Gong, Jianya; Wang, Le; Wu, Huayi; Yang, Jiansi

    2016-06-01

    In urban planning and design applications, visualization of built-up areas in three dimensions (3D) is critical for understanding building density, but the accurate building heights required for 3D density calculation are not always available. To solve this problem, spaceborne stereo imagery is often used to estimate building heights; however, the estimated building heights may include errors. These errors vary between local areas within a study area and are related to the heights of the buildings themselves, distorting 3D density estimation. The impact of building height accuracy on 3D density estimation must therefore be determined both across and within a study area. In our research, accurate planar information from city authorities is used as reference data during 3D density estimation, to avoid the errors inherent to planar information extracted from remotely sensed imagery. Our experimental results show that underestimation of building heights is correlated with underestimation of the Floor Area Ratio (FAR). In local areas, experimental results show that land-use blocks with low FAR values often have small errors, due to small height errors for the low buildings in those blocks, while blocks with high FAR values often have large errors, due to large height errors for the high buildings in those blocks. Our study reveals that the accuracy of 3D density estimated from spaceborne stereo imagery is correlated with the heights of buildings in a scene; building heights must therefore be considered when spaceborne stereo imagery is used to estimate 3D density, in order to improve precision.
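
    The FAR dependence on building heights can be made concrete with a toy computation. The 3 m storey height and the per-building inputs below are hypothetical, purely to show how a height underestimate propagates into an FAR underestimate:

```python
def floor_area_ratio(buildings, block_area, storey_height=3.0):
    """FAR for a land-use block: total gross floor area over block area.
    Floors are inferred from the (possibly erroneous) estimated building
    height via an assumed storey height, so height errors propagate
    directly into the FAR estimate.

    buildings: iterable of (footprint_area_m2, height_m) pairs.
    """
    total_floor_area = 0.0
    for footprint_area, height in buildings:
        floors = max(1, round(height / storey_height))
        total_floor_area += footprint_area * floors
    return total_floor_area / block_area
```

    Underestimating a 30 m building by 10% costs it a whole inferred floor, lowering the block's FAR, which mirrors the correlation the paper reports.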

  12. High Quality 3D data capture from UAV imagery

    NASA Astrophysics Data System (ADS)

    Haala, Norbert; Cramer, Michael; Rothermel, Mathias

    2014-05-01

    The flexible use of unmanned airborne systems is especially beneficial when aiming at data capture for geodetic-photogrammetric applications within areas of limited extent. This can include tasks like topographical mapping in the context of land management and consolidation, or natural hazard mapping for the documentation of landslide areas. Our presentation discusses the suitability of UAV systems for such tasks based on a pilot project for the Landesamt für Geoinformation und Landentwicklung Baden-Württemberg (LGL BW). This study evaluated the efficiency and accuracy of photogrammetric image collection by UAV systems for the demands of national mapping authorities. For this purpose the use of different UAV platforms and cameras for the generation of photogrammetric standard products like ortho images and digital surface models was evaluated. However, the main focus of the presentation is the investigation of the quality potential of UAV-based 3D data capture at high resolution and accuracy. This is evaluated by way of example through the documentation of a small (700 m × 350 m) landslide area from a UAV flight. For this purpose the UAV images were used to generate 3D point clouds at a resolution of 5-8 cm, which corresponds to the ground sampling distance (GSD) of the original images. This was realized by dense, pixel-wise matching algorithms available both in off-the-shelf and research software tools. Suitable results can especially be derived if large redundancy is available from highly overlapping image blocks, and UAV images can easily be collected at high overlap due to the platform's low cruising speed. Thus, our investigations clearly demonstrated the feasibility of relatively simple UAV platforms and cameras for 3D point determination close to the sub-pixel level.

  13. Automatic building detection and 3D shape recovery from single monocular electro-optic imagery

    NASA Astrophysics Data System (ADS)

    Lavigne, Daniel A.; Saeedi, Parvaneh; Dlugan, Andrew; Goldstein, Norman; Zwick, Harold

    2007-04-01

    The extraction of 3D building geometric information from high-resolution electro-optical imagery is becoming a key element in numerous geospatial applications. Indeed, producing 3D urban models is a requirement for a variety of applications such as spatial analysis of urban design, military simulation, and site monitoring of a particular geographic location. However, almost all operational approaches developed over the years for 3D building reconstruction are semiautomated ones, in which a skilled human operator is involved in the 3D geometry modeling of building instances, resulting in a time-consuming process. Furthermore, such approaches usually require stereo image pairs, image sequences, or laser scanning of a specific geographic location to extract the 3D models from the imagery. Finally, with current techniques, the 3D geometric modeling phase may yield 3D building models with a low accuracy level. This paper describes the Automatic Building Detection (ABD) system and embedded algorithms currently under development. The ABD system provides a framework for the automatic detection of buildings and the recovery of 3D geometric models from single monocular electro-optic imagery. The system is designed to cope with multi-sensor imagery exhibiting arbitrary viewpoint variations, clutter, and occlusion. Preliminary results on monocular airborne and spaceborne images are provided. Accuracy assessment of detected buildings and extracted 3D building models from single airborne and spaceborne monocular imagery of real scenes is also addressed. The embedded algorithms are evaluated for their robustness in dealing with relatively dense and complicated urban environments.

  14. The Maintenance Of 3-D Scene Databases Using The Analytical Imagery Matching System (AIMS)

    NASA Astrophysics Data System (ADS)

    Hovey, Stanford T.

    1987-06-01

    The increased demand for multi-resolution displays of simulated scene data for aircraft training or mission planning has led to a need for digital databases of 3-dimensional topography and geographically positioned objects. This data needs to be at varying resolutions or levels of detail, as well as positionally accurate, to satisfy both close-up and long-distance scene views. The generation and maintenance processes for this type of digital database require that relative and absolute spatial positions of geographic and cultural features be carefully controlled in order for the scenes to be representative and useful for simulation applications. Autometric, Incorporated has designed a modular Analytical Image Matching System (AIMS) which allows digital 3-D terrain feature data to be derived from cartographic and imagery sources by a combination of automatic and man-machine techniques. This system provides a means for superimposing 3-D feature information over imagery for updating. It also allows for real-time operator interaction between a monoscopic digital imagery display, a digital map display, a stereoscopic digital imagery display, and automatically detected feature changes, for transferring 3-D data from one coordinate system's frame of reference to another when updating the scene simulation database. It is an advanced, state-of-the-art means for implementing a modular, 3-D scene database maintenance capability, where original digital or converted-to-digital analog source imagery is used as a basic input to perform accurate updating.

  15. Super-resolution for flash LADAR data

    NASA Astrophysics Data System (ADS)

    Hu, Shuowen; Young, S. Susan; Hong, Tsai; Reynolds, Joseph P.; Krapels, Keith; Miller, Brian; Thomas, Jim; Nguyen, Oanh

    2009-05-01

    Flash laser detection and ranging (LADAR) systems are increasingly used in robotics applications for autonomous navigation and obstacle avoidance. Their compact size, high frame rate, wide field of view, and low cost are key advantages over traditional scanning LADAR devices. However, these benefits are achieved at the cost of spatial resolution. Super-resolution enhancement can be applied to improve the resolution of flash LADAR devices, making them ideal for small robotics applications. Previous work by Rosenbush et al. applied the super-resolution algorithm of Vandewalle et al. to flash LADAR data and observed quantitative improvement in image quality in terms of the number of edges detected. This study uses the super-resolution algorithm of Young et al. to enhance the resolution of range data acquired with a SwissRanger SR-3000 flash LADAR camera. To improve the accuracy of sub-pixel shift estimation, a wavelet preprocessing stage was developed and applied to flash LADAR imagery. The authors used the triangle orientation discrimination (TOD) methodology for a subjective evaluation of the performance improvement (measured in terms of probability of target discrimination and subject response times) achieved with super-resolution. Super-resolution of flash LADAR imagery resulted in superior probabilities of target discrimination at all investigated ranges while reducing subject response times.
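
    The basic idea of super-resolving multiple shifted low-resolution frames can be shown with a minimal shift-and-add sketch. Known integer shifts on the high-resolution grid are assumed here for illustration; real pipelines such as the Young et al. algorithm used in the paper additionally estimate sub-pixel shifts (the role of the wavelet preprocessing stage) and deblur:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Accumulate low-res frames with known sub-pixel shifts (given in
    high-res pixel units) onto a factor-times-finer grid and average.
    frames: list of (h, w) arrays; shifts: list of (dy, dx) ints."""
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(hi)
    for f, (dy, dx) in zip(frames, shifts):
        hi[dy::factor, dx::factor] += f    # interleave frame onto fine grid
        cnt[dy::factor, dx::factor] += 1
    cnt[cnt == 0] = 1                      # avoid division by zero in gaps
    return hi / cnt
```

    With a complete set of phase-shifted frames, the fine grid is fully populated and the high-resolution image is recovered exactly; with fewer frames, gaps must be interpolated.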

  16. On Fundamental Evaluation Using UAV Imagery and 3D Modeling Software

    NASA Astrophysics Data System (ADS)

    Nakano, K.; Suzuki, H.; Tamino, T.; Chikatsu, H.

    2016-06-01

    Unmanned aerial vehicles (UAVs), which have been widely used in recent years, can acquire high-resolution images with resolutions in millimeters; such images cannot be acquired with manned aircraft. Moreover, it has become possible to obtain a realistic 3D surface reconstruction using high-overlap images and 3D modeling software such as ContextCapture, Pix4Dmapper, and PhotoScan, based on computer vision techniques such as structure from motion and multi-view stereo. 3D modeling software has many applications; however, most of them do not seem to apply appropriate accuracy control in accordance with the knowledge of photogrammetry and/or computer vision. Therefore, we performed flight tests in a test field using a UAV equipped with a gimbal stabilizer and a consumer-grade digital camera. Our UAV is a hexacopter that can fly according to waypoints for autonomous flight and can record flight logs. We acquired images from different altitudes such as 10 m, 20 m, and 30 m. We obtained 3D reconstruction results of orthoimages, point clouds, and textured TIN models for accuracy evaluation in several cases with different image-scale conditions using 3D modeling software. Moreover, the accuracy aspect was evaluated for different units of input images: course unit and flight unit. This paper describes the fundamental accuracy evaluation for 3D modeling using UAV imagery and 3D modeling software from the viewpoint of close-range photogrammetry.
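
    The image-scale conditions compared in the study are governed by the ground sample distance. A one-line sketch of the nadir-camera relation; the sensor parameters in the example are assumed, not taken from the paper:

```python
def ground_sample_distance(pixel_pitch_m, focal_length_m, altitude_m):
    """Nominal GSD for a nadir-pointing camera: the ground footprint of
    one pixel, by similar triangles (pitch / focal length = GSD / altitude)."""
    return pixel_pitch_m * altitude_m / focal_length_m
```

    For instance, a hypothetical 4 µm pixel pitch and 16 mm lens flown at 20 m gives a 5 mm GSD, consistent with the millimeter-level resolutions the abstract mentions for low-altitude UAV flights.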

  17. Advances in ladar components and subsystems at Raytheon

    NASA Astrophysics Data System (ADS)

    Jack, Michael; Chapman, George; Edwards, John; Mc Keag, William; Veeder, Tricia; Wehner, Justin; Roberts, Tom; Robinson, Tom; Neisz, James; Andressen, Cliff; Rinker, Robert; Hall, Donald N. B.; Jacobson, Shane M.; Amzajerdian, Farzin; Cook, T. Dean

    2012-06-01

    Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high-performance detectors with gain, i.e., APDs, with very low noise Readout Integrated Circuits (ROICs). Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns with both time and intensity reported, enabling full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs operating at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW), and GHz bandwidths; these devices have demonstrated linear-mode photon counting. SCAs utilizing these high-performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In the following we review progress in real-time 3D LADAR imaging receiver products in three areas: (1) a scanning 256 × 4 configuration for the Multi-Mode Sensor Seeker (MMSS) program; (2) a staring 256 × 256 configuration for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) lunar landing mission; and (3) photon-counting SCAs, which have demonstrated a dramatic reduction in dark count rate due to improved design, operation, and processing.

  18. Advances in LADAR Components and Subsystems at Raytheon

    NASA Technical Reports Server (NTRS)

    Jack, Michael; Chapman, George; Edwards, John; McKeag, William; Veeder, Tricia; Wehner, Justin; Roberts, Tom; Robinson, Tom; Neisz, James; Andressen, Cliff; Rinker, Robert; Hall, Donald N. B.; Jacobson, Shane M.; Amzajerdian, Farzin; Cook, T. Dean

    2012-01-01

    Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high-performance detectors with gain, i.e., APDs, with very low noise Readout Integrated Circuits (ROICs). Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns with both time and intensity reported, enabling full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs operating at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW), and GHz bandwidths; these devices have demonstrated linear-mode photon counting. SCAs utilizing these high-performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In the following we review progress in real-time 3D LADAR imaging receiver products in three areas: (1) a scanning 256 x 4 configuration for the Multi-Mode Sensor Seeker (MMSS) program; (2) a staring 256 x 256 configuration for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) lunar landing mission; and (3) photon-counting SCAs, which have demonstrated a dramatic reduction in dark count rate due to improved design, operation, and processing.

  19. Influence of GSD for 3D City Modeling and Visualization from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Alam, Zafare; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    The Ministry of Municipal and Rural Affairs (MOMRA) aims to establish the solid infrastructure required for 3D city modelling and for decision making to set a mark in urban development. MOMRA is responsible for large-scale mapping at the 1:1,000, 1:2,500, 1:10,000, and 1:20,000 scales, with 10 cm, 20 cm, and 40 cm GSD and aerial triangulation data. 3D city models are increasingly used for the presentation, exploration, and evaluation of urban and architectural designs. The visualization capabilities and animation support of upcoming 3D geo-information technologies empower architects, urban planners, and authorities to visualize and analyze urban and architectural designs in the context of the existing situation. To make use of this possibility, a 3D city model first has to be created, for which MOMRA uses aerial triangulation data and aerial imagery. The main concern for 3D city modelling in the Kingdom of Saudi Arabia arises from uneven surfaces and undulations. Real-time 3D visualization and interactive exploration support planning processes by providing multiple stakeholders such as decision makers, architects, urban planners, authorities, citizens, or investors with a three-dimensional model. Apart from advanced visualization, these 3D city models can be helpful for dealing with natural hazards and provide various possibilities for handling exotic conditions through better and more advanced viewing infrastructure. Riyadh on one side is 5700m above sea level while on the other hand Abha city is at 2300m; this uneven terrain represents a drastic change of surface in the Kingdom, for which 3D city models provide valuable solutions with all possible opportunities.
    In this research paper, the influence of different-GSD (Ground Sample Distance) aerial imagery with aerial triangulation is used for 3D visualization in different regions of the Kingdom, to check which scale is more sophisticated for obtaining better results and is cost manageable, with GSD (7.5cm, 10cm, 20cm and 40cm

  20. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. Several existing tools, such as VisualSFM and the open-source project OpenSfM, assist in this process; however, it is well known that pristine imagery is usually required to create meaningful 3D data from the imagery. In military applications, such as the use of unmanned aerial vehicles (UAVs) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.

  1. Extracting Semantically Annotated 3D Building Models with Textures from Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Meissner, H.; Dahlke, D.; Poznanska, A.

    2015-03-01

    This paper proposes a method for the reconstruction of city buildings with automatically derived textures that can be directly used for façade element classification. Oblique and nadir aerial imagery recorded by a multi-head camera system is transformed into dense 3D point clouds and evaluated statistically in order to extract the hull of the structures. For the resulting wall, roof, and ground surfaces, high-resolution polygonal texture patches are calculated and compactly arranged in a texture atlas without resampling. The façade textures are subsequently analyzed by a commercial software package to detect possible windows, whose contours are projected into the original oriented source images and sparsely ray-cast to obtain their 3D world coordinates. With the windows reintegrated into the previously extracted hull, the final building models are stored as semantically annotated CityGML "LOD-2.5" objects.

  2. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the low-expense, high-efficiency image collection process and the rich 3D and texture information presented in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has a promising market for future commercial usage such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation, and scale differences exist between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix presents the parameters describing the required translation, rotation, and scale. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough result, which contains a larger offset compared to that of the test data because of the lower quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. 
Using the method introduced, point clouds extracted from different image groups could be combined with each other to build a
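
    The translation, rotation, and scale recovery described above can be illustrated with Umeyama's closed-form similarity alignment for matched 3D point pairs. This is a generic sketch, not the thesis's own pipeline (which works from extracted keypoints and an iterative refinement stage):

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale s, rotation R, translation t mapping matched
    3D points src onto dst (dst ~= s * R @ src + t), via Umeyama's method."""
    n = len(src)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / n                       # cross-covariance of the pairs
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.array([1.0, 1.0, d])               # guard against reflections
    R = (U * D) @ Vt                          # U @ diag(D) @ Vt
    var_s = (sc ** 2).sum() / n               # source variance
    s = (S * D).sum() / var_s
    t = mu_d - s * (R @ mu_s)
    return s, R, t
```

    Applied to keypoint correspondences between two point clouds, the recovered (s, R, t) brings them into a common frame; a subsequent fine registration (e.g. ICP-style optimization) can then refine the alignment.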

  3. Flexible simulation strategy for modeling 3D cultural objects based on multisource remotely sensed imagery

    NASA Astrophysics Data System (ADS)

    Guienko, Guennadi; Levin, Eugene

    2003-01-01

    New ideas and solutions never come alone. Although automated feature extraction is not sufficiently mature to move from the realm of scientific investigation into the category of production technology, a new goal has arisen: 3D simulation of real-world objects extracted from images. This task, which evolved from feature extraction and is not an easy task itself, becomes even more complex, multi-leveled, and often uncertain and fuzzy when one exploits time-sequenced, multi-source remotely sensed visual data. The basic components of the process are familiar image processing tasks: fusion of various types of imagery, automatic recognition of objects, removing those objects from the source images, and replacing them in the images with realistic simulated "twin" object renderings. This paper discusses how to aggregate the most appropriate approach to each task into one technological process in order to develop a Manipulator for Visual Simulation of 3D objects (ManVIS) that is independent of imagery, format, and media. The technology could be made general by combining a number of competent special-purpose algorithms under appropriate contextual, geometric, spatial, and temporal constraints derived from a priori knowledge. This could be achieved by planning the simulation in an Open Structure Simulation Strategy Manager (O3SM), a distinct component of ManVIS that builds the simulation strategy before actual image manipulation begins.

  4. Real-time scene and signature generation for ladar and imaging sensors

    NASA Astrophysics Data System (ADS)

    Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios

    2014-05-01

    This paper describes the development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing the accuracy of its thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger-mode LADAR, in addition to the already existing functionality for linear-mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulations for missiles with multi-mode seekers.
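
The 3D heat diffusion solver itself is VIRSuite's own; purely to illustrate the underlying numerics, a single explicit finite-difference step of the heat equation on a voxel grid might look like the sketch below (the grid, boundary handling and stability limit are assumptions, not details from the paper):

```python
import numpy as np

def heat_step(T, alpha, dx, dt):
    """One explicit finite-difference step of 3D heat diffusion
    dT/dt = alpha * laplacian(T), with fixed (Dirichlet) boundary slabs.
    Stable when dt <= dx**2 / (6 * alpha)."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0)
         + np.roll(T, 1, 1) + np.roll(T, -1, 1)
         + np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6 * T) / dx**2
    T_new = T + alpha * dt * lap
    # re-pin the six boundary slabs (Dirichlet condition)
    T_new[0], T_new[-1] = T[0], T[-1]
    T_new[:, 0], T_new[:, -1] = T[:, 0], T[:, -1]
    T_new[:, :, 0], T_new[:, :, -1] = T[:, :, 0], T[:, :, -1]
    return T_new
```

A real signature solver adds material properties, surface fluxes (solar load, convection, radiation) and implicit time stepping, but the diffusion kernel is the same idea.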

  5. Experiments with Uas Imagery for Automatic Modeling of Power Line 3d Geometry

    NASA Astrophysics Data System (ADS)

    Jóźków, G.; Vander Jagt, B.; Toth, C.

    2015-08-01

    The ideal mapping technology for transmission line inspection is airborne LiDAR executed from helicopter platforms, which allows for full 3D geometry extraction in a highly automated manner. Large scale aerial images can also be used for this purpose; however, automation is possible only for finding transmission line positions (2D geometry), and the sag needs to be estimated manually. For longer lines, these techniques are less expensive than ground surveys, yet they are still expensive. UAS technology has the potential to reduce these costs, especially if using inexpensive platforms with consumer grade cameras. This study investigates the potential of using high resolution UAS imagery for automatic modeling of transmission line 3D geometry. The key point of this experiment was to apply dense matching algorithms to appropriately acquired UAS images so that points were created on the wires as well. This allowed the 3D geometry of transmission lines to be modeled similarly to LiDAR-acquired point clouds. Results showed that transmission line modeling is possible with high internal accuracy in both the horizontal and vertical directions, even when wires were represented by a partial (sparse) point cloud.
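
The sag estimation that the wire points enable can be sketched as a least-squares fit of a parabola, the usual approximation of a catenary for moderate sag. This is an illustrative reconstruction, not the authors' exact algorithm, and it assumes the wire points have already been projected into the span's vertical plane:

```python
import numpy as np

def fit_sag_parabola(x, z):
    """Fit z = a*x**2 + b*x + c (parabolic catenary approximation) to wire
    points in the vertical plane; return coefficients and the sag, i.e. the
    maximum vertical distance between the chord and the fitted curve."""
    a, b, c = np.polyfit(x, z, 2)
    x0, x1 = x.min(), x.max()
    z0 = a * x0**2 + b * x0 + c          # fitted endpoint heights
    z1 = a * x1**2 + b * x1 + c
    xs = np.linspace(x0, x1, 201)
    chord = z0 + (z1 - z0) * (xs - x0) / (x1 - x0)
    curve = a * xs**2 + b * xs + c
    sag = np.max(chord - curve)
    return (a, b, c), sag
```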

  6. Accuracy evaluation of segmentation for high resolution imagery and 3D laser point cloud data

    NASA Astrophysics Data System (ADS)

    Ni, Nina; Chen, Ninghua; Chen, Jianyu

    2014-09-01

    High resolution satellite imagery and 3D laser point cloud data provide precise geometry, rich spectral information and clear feature texture. The segmentation of high resolution remote sensing images and 3D laser point clouds is the basis of object-oriented remote sensing image analysis, since the segmentation results directly influence the accuracy of subsequent analysis and discrimination. There is still no common segmentation theory to support these algorithms, so when facing a specific problem the applicability of a segmentation method should be determined through segmentation accuracy assessment, which then allows an optimal segmentation to be chosen. To date, the most common ways to evaluate the effectiveness of a segmentation method are subjective evaluation and supervised evaluation. To provide a more objective evaluation, we carried out the following work. We analysed and compared previously proposed image segmentation accuracy evaluation methods: area-based metrics, location-based metrics and combined metrics. 3D point cloud data, gathered by a Riegl VZ-1000, was transformed into two dimensions. Object-oriented segmentation results for aquaculture farm, building and farmland polygons were used as test objects to evaluate segmentation accuracy.
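
As one concrete instance of the area-based family of metrics discussed, a per-object intersection-over-union between a segmentation and reference polygons (rasterized to label masks) can be computed as below; the best-overlap matching rule is an assumption for illustration, not the paper's exact metric:

```python
import numpy as np

def area_based_accuracy(seg, ref):
    """Area-based segmentation metric: for each reference object (label > 0),
    the intersection-over-union with its best-overlapping segment."""
    scores = {}
    for r in np.unique(ref):
        if r == 0:                      # 0 = background
            continue
        r_mask = ref == r
        best = 0.0
        for s in np.unique(seg[r_mask]):
            if s == 0:
                continue
            s_mask = seg == s
            inter = np.logical_and(r_mask, s_mask).sum()
            union = np.logical_or(r_mask, s_mask).sum()
            best = max(best, inter / union)
        scores[r] = best
    return scores
```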

  7. Quality Analysis on 3d Building Models Reconstructed from Uav Imagery

    NASA Astrophysics Data System (ADS)

    Jarzabek-Rychard, M.; Karpina, M.

    2016-06-01

    Recent developments in UAV technology and structure from motion techniques have led to UAVs becoming standard platforms for 3D data collection. Because of their flexibility and ability to reach inaccessible urban parts, drones appear to be an optimal solution for urban applications. Building reconstruction from data collected with UAVs has important potential to reduce labour cost for fast updates of already reconstructed 3D cities. However, especially for updating existing scenes derived from different sensors (e.g. airborne laser scanning), a proper quality assessment is necessary. The objective of this paper is thus to evaluate the potential of UAV imagery as an information source for automatic 3D building modeling at LOD2. The investigation is conducted threefold: (1) comparing the generated SfM point cloud to ALS data; (2) computing internal consistency measures of the reconstruction process; (3) analysing the deviation of check points identified on building roofs and measured with a tacheometer. In order to gain deep insight into the modeling performance, various quality indicators are computed and analysed. The assessment performed against the ground truth shows that the building models acquired with UAV photogrammetry have an accuracy of better than 18 cm for the planimetric position and about 15 cm for the height component.
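
The check-point analysis in step (3) reduces to accuracy statistics of this kind; splitting the RMSE into planimetric and height components (as the 18 cm / 15 cm figures suggest) can be sketched as:

```python
import numpy as np

def checkpoint_rmse(model_pts, survey_pts):
    """Planimetric (XY) and height (Z) RMSE between modelled points and
    surveyed check points, given as Nx3 arrays in the same CRS."""
    d = model_pts - survey_pts
    rmse_xy = np.sqrt(np.mean(d[:, 0]**2 + d[:, 1]**2))
    rmse_z = np.sqrt(np.mean(d[:, 2]**2))
    return rmse_xy, rmse_z
```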

  8. Spectral ladar as a UGV navigation sensor

    NASA Astrophysics Data System (ADS)

    Powers, Michael A.; Davis, Christopher C.

    2011-06-01

    We demonstrate new results using our Spectral LADAR prototype, which highlight the benefits of this sensor for Unmanned Ground Vehicle (UGV) navigation applications. This sensor is an augmentation of conventional LADAR and uses a polychromatic source to obtain range-resolved 3D spectral point clouds. These point cloud images can be used to identify objects based on combined spatial and spectral features in three dimensions and at long standoff range. The Spectral LADAR transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Backscatter from distant targets is dispersed into 25 spectral bands, where each spectral band is independently range resolved with multiple return pulse recognition. Our new results show that Spectral LADAR can spectrally differentiate hazardous terrain (mud) from favorable driving surfaces (dry ground). This is a critical capability, since in UGV contexts mud is potentially hazardous, requires modified vehicle dynamics, and is difficult to identify based on 3D spatial signatures. Additionally, we demonstrate the benefits of range resolved spectral imaging, where highly cluttered 3D images of scenes (e.g. containing camouflage, foliage) are spectrally unmixed by range separation and segmented accordingly. Spectral LADAR can achieve this unambiguously and without the need for stereo correspondence, sub-pixel detection algorithms, or multi-sensor registration and data fusion.
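
A simple way to picture the spectral classification step (e.g. mud vs. dry ground) is nearest-signature labeling of each range-resolved return; the spectral-angle rule below is a common remote-sensing choice, not necessarily the authors' method, and the band count and signatures are placeholders:

```python
import numpy as np

def classify_spectra(points, signatures):
    """Label each spectral return with the nearest reference signature by
    spectral angle (insensitive to overall reflectance scaling).
    points: (N, B) spectra; signatures: dict name -> (B,) spectrum."""
    names = list(signatures)
    refs = np.array([signatures[n] for n in names], float)
    p = points / np.linalg.norm(points, axis=1, keepdims=True)
    r = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    ang = np.arccos(np.clip(p @ r.T, -1.0, 1.0))    # (N, num_signatures)
    return [names[i] for i in ang.argmin(axis=1)]
```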

  9. Automatic registration of optical imagery with 3d lidar data using local combined mutual information

    NASA Astrophysics Data System (ADS)

    Parmehr, E. G.; Fraser, C. S.; Zhang, C.; Leach, J.

    2013-10-01

    Automatic registration of multi-sensor data is a basic step in data fusion for photogrammetric and remote sensing applications. The effectiveness of intensity-based methods such as Mutual Information (MI) for automated registration of multi-sensor images has been previously reported for medical and remote sensing applications. In this paper, a new multivariable MI approach is presented that exploits the complementary information of inherently registered LiDAR DSM and intensity data to improve the robustness of registering optical imagery to a LiDAR point cloud. LiDAR DSM and intensity information are utilised in measuring the similarity of LiDAR and optical imagery via the Combined MI. An effective histogramming technique is adopted to facilitate estimation of the 3D probability density function (pdf). In addition, a local similarity measure is introduced to reduce the complexity and computational cost of optimisation in higher dimensions. The reliability of registration is thereby improved through the use of redundant observations of similarity. The performance of the proposed method for registration of satellite and aerial images with LiDAR data in urban and rural areas is experimentally evaluated and the results are discussed.
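
The core similarity measure can be illustrated with plain two-variable MI computed from a joint histogram; the paper's Combined MI extends this idea to a three-variable pdf over optical intensity, LiDAR DSM and LiDAR intensity:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two co-registered images, estimated from their joint
    histogram: sum over p(x,y) * log(p(x,y) / (p(x) p(y)))."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                           # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

In a registration loop, MI is evaluated over candidate transforms of the optical image and the transform maximizing it is kept.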

  10. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful for knowing the environment structure, performing efficient navigation or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth into door candidate detection. The presented approach is tested on real data sets, showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723

  11. 3D Modeling of Building Indoor Spaces and Closed Doors from Imagery and Point Clouds

    PubMed Central

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful for knowing the environment structure, performing efficient navigation or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth into door candidate detection. The presented approach is tested on real data sets, showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723

  12. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    PubMed

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices, thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation. PMID:26347642
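
The ERD quantity underlying these results is conventionally the percentage band-power change during imagery relative to a rest reference; a minimal sketch (single channel, FFT band power, an illustrative reference/task interval layout, all assumptions rather than the study's exact processing) is:

```python
import numpy as np

def erd_percent(signal, fs, ref_sec, band=(10.0, 12.0)):
    """Event-related (de)synchronization: percent power change in `band`
    during the task interval relative to the preceding reference interval.
    Negative values indicate desynchronization (a power decrease)."""
    n_ref = int(ref_sec * fs)

    def bandpower(x):
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        psd = np.abs(np.fft.rfft(x))**2 / len(x)
        sel = (freqs >= band[0]) & (freqs <= band[1])
        return psd[sel].sum()

    p_ref = bandpower(signal[:n_ref])
    p_task = bandpower(signal[n_ref:])
    return 100.0 * (p_task - p_ref) / p_ref
```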

  13. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery

    PubMed Central

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10–12 Hz) over the sensorimotor cortices, thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant’s MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation. PMID:26347642

  14. Multiaspect high-resolution ladar data collection

    NASA Astrophysics Data System (ADS)

    Trussel, C. Ward; Barr, Dallas N.; Schilling, Bradley W.; Templeton, Glen C.; Mizerka, Lawrence J.; Warner, Chris; Hummel, Robert; Hauge, Robert O.

    2003-08-01

    The Jigsaw program, sponsored by the Defense Advanced Research Projects Agency (DARPA), will demonstrate a multi-observation concept to identify obscured combat vehicles that cannot be discerned from a single aspect angle. Three-dimensional (3-D) laser radar (ladar) images of a nearly hidden target are collected from several observation points. Image pieces of the target taken from all the data sets are then assembled to obtain a more complete image that will allow identification by a human observer. In this effort a test-bed ladar, constructed by the Night Vision and Electronic Sensors Directorate (NVESD), is used to provide three-dimensional (3-D) images in which the voxels have dimensions of the order of centimeters on each side. Ultimately a UAV-borne Jigsaw sensor will fly by a suspect location while collecting the multiple images. This paper describes a simulated flight in which 800 images were taken of two targets obscured by foliage. The vehicle-mounted laser radar used for the collection was moved in 0.076 meter steps along a 61 meter path. Survey data were collected for the sensor and target locations, as well as for several unobscured fiducial markers near the targets, to aid in image reconstruction. As part of a separate DARPA contractual effort, target returns were extracted from individual images and assembled to form a final 3-D view of the vehicles for human identification. These results are reported separately. The laser radar employs a diode-pumped, passively Q-switched, Nd:YAG micro-chip laser. The transmitted 1.06 micron radiation was produced in six micro-joule pulses that occurred at a rate of 3 kHz and had a duration of 1.2 nanoseconds at the output of the detector electronics. An InGaAs avalanche photodiode/amplifier with a bandwidth of 0.5 GHz was used as the receiver and the signal was digitized at a rate of 2 GS/s. Details of the laser radar and sample imagery are discussed and presented.

  15. Resolution limits in imaging LADAR systems

    NASA Astrophysics Data System (ADS)

    Khoury, Jed; Woods, Charles L.; Lorenzo, Joseph P.; Kierstead, John; Pyburn, Dana; Sengupta, S. K.

    2004-04-01

    In this paper, we introduce a new design concept for laser radar systems that combines both phase comparison and time-of-flight methods. We show from signal-to-noise ratio considerations that there is a fundamental limit to the overall resolution in 3-D imaging range laser radar (LADAR). We introduce a new metric, volume of resolution (VOR), and we show from quantum noise considerations that there is a maximum resolution volume that can be achieved for a given set of system parameters. Consequently, there is a direct tradeoff between range resolution and spatial resolution: in a LADAR system, range resolution may be maximized at the expense of spatial image resolution and vice versa. We introduce resolution efficiency, ηr, as a new figure of merit for LADAR, which describes system resolution under the constraints of a specific design compared to the optimal resolution performance derived from quantum noise considerations. We analyze how the resolution efficiency can be utilized to improve the resolution performance of a LADAR system. Our analysis can be extended to all LADAR systems, whether flash imaging or scanning laser systems.
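
The tradeoff can be made concrete with textbook expressions: time-of-flight range resolution ΔR = cτ/2 and diffraction-limited cross-range resolution δx ≈ λR/D, whose product gives a resolution-cell volume in the spirit of the paper's VOR metric (the exact VOR definition used here is an assumption for illustration):

```python
C = 3.0e8  # speed of light, m/s

def range_resolution(pulse_width_s):
    """Time-of-flight range resolution: dR = c * tau / 2."""
    return C * pulse_width_s / 2.0

def cross_range_resolution(wavelength_m, range_m, aperture_m):
    """Diffraction-limited cross-range resolution: dx ~ lambda * R / D."""
    return wavelength_m * range_m / aperture_m

def resolution_volume(pulse_width_s, wavelength_m, range_m, aperture_m):
    """Volume of one resolution cell: dR * dx * dy (dx = dy assumed)."""
    dx = cross_range_resolution(wavelength_m, range_m, aperture_m)
    return range_resolution(pulse_width_s) * dx * dx
```

For example, a 1.2 ns pulse gives ΔR = 0.18 m, so shortening the pulse shrinks the cell in range while the aperture and wavelength govern its cross-range extent.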

  16. MEMS-scanned ladar sensor for small ground robots

    NASA Astrophysics Data System (ADS)

    Stann, Barry L.; Dammann, John F.; Giza, Mark M.; Jian, Pey-Schuan; Lawler, William B.; Nguyen, Hung M.; Sadler, Laurel C.

    2010-04-01

    The Army Research Laboratory (ARL) is researching a short-range ladar imager for small unmanned ground vehicles for navigation, obstacle/collision avoidance, and target detection and identification. To date, commercial ladars for this application have been flawed by one or more factors including low pixelization, insufficient range or range resolution, image artifacts, no daylight operation, large size, high power consumption, and high cost. In the prior year we conceived a scanned ladar design based on a newly developed but commercial MEMS mirror and a pulsed Erbium fiber laser. We initiated construction and performed in-lab tests that validated the basic ladar architecture. This year we improved the transmitter and receiver modules and successfully tested a new low-cost and compact Erbium laser candidate. We further developed the existing software to allow adjustment of operating parameters on-the-fly and display of the imaged data in real time. For our most significant achievement we mounted the ladar on an iRobot PackBot and wrote software to integrate PackBot and ladar control signals and ladar imagery on the PackBot's computer network. We recently remotely drove the PackBot over an in-lab obstacle course while displaying the ladar data in real time over a wireless link. The ladar has a 5-6 Hz frame rate, an image size of 256 (h) × 128 (v) pixels, a 60° × 30° field of regard, a 20 m range, eyesafe operation, and 40 cm range resolution (with provisions for super-resolution or accuracy). This paper will describe the ladar design and update progress in its development and performance.

  17. New developments in HgCdTe APDs and LADAR receivers

    NASA Astrophysics Data System (ADS)

    McKeag, William; Veeder, Tricia; Wang, Jinxue; Jack, Michael; Roberts, Tom; Robinson, Tom; Neisz, James; Andressen, Cliff; Rinker, Robert; Cook, T. Dean; Amzajerdian, Farzin

    2011-06-01

    Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high performance detectors with gain, i.e., APDs, with very low noise Readout Integrated Circuits (ROICs). Unique aspects of these designs include independent (non-gated) acquisition of pulse returns, and multiple pulse returns with both time and intensity reported to enable full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs operating at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW) and GHz bandwidths, and has demonstrated linear-mode photon counting. SCAs utilizing these high performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In the following we review progress in real-time 3D LADAR imaging receiver products in two areas: (1) a scanning 256 × 4 configuration for the Multi-Mode Sensor Seeker (MMSS) program and (2) a staring 256 × 256 configuration for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) lunar landing mission.

  18. A novel window based method for approximating the Hausdorff in 3D range imagery.

    SciTech Connect

    Koch, Mark William

    2004-10-01

    Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
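
The quantity being approximated, the Hausdorff fraction, has a direct brute-force definition: the fraction of model points whose nearest scene point lies within a distance threshold. The paper's windowed method approximates the following O(N·M) baseline:

```python
import numpy as np

def hausdorff_fraction(model, scene, tau):
    """Fraction of model points whose nearest scene point is within tau.
    Brute force: full pairwise distance matrix, O(N*M) time and space."""
    d = np.linalg.norm(model[:, None, :] - scene[None, :, :], axis=2)
    return float((d.min(axis=1) <= tau).mean())
```

A fraction near 1 under obscuration-tolerant thresholds indicates a plausible match even when part of the model is missing from the scene.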

  19. Meteoroid and debris special investigation group; status of 3-D crater analysis from binocular imagery

    NASA Technical Reports Server (NTRS)

    Sapp, Clyde A.; See, Thomas H.; Zolensky, Michael E.

    1992-01-01

    During the 3-month deintegration of the LDEF, the M&D SIG generated approximately 5000 digital color stereo image pairs of impact-related features from all space-exposed surfaces. Currently, these images are being processed at JSC to yield more accurate feature information. Work is underway to determine the minimum number of data points necessary to parametrically define impact crater morphologies, in order to minimize the man-hour-intensive task of tie point selection. Initial attempts at deriving accurate crater depth and diameter measurements from binocular imagery were based on the assumption that the crater geometries were best defined by paraboloids. We made no assumptions regarding the crater depth/diameter ratios but instead allowed each crater to define its own coefficients by performing a least-squares fit based on user-selected tiepoints. Initial test cases resulted in larger errors than desired, so it was decided to test our basic assumption that the crater geometries could be parametrically defined as paraboloids. The method for testing this assumption was to carefully slice test craters (experimentally produced in an appropriate aluminum alloy) vertically through the center, resulting in a readily visible cross-section of the crater geometry. Initially, five separate craters were cross-sectioned in this fashion. A digital image of each cross-section was then created, and the 2-D crater geometry was hand-digitized to create a table of XY positions for each crater. A 2nd-order (parabolic) polynomial was fitted to the data using a least-squares approach. The differences between the fitted equation and the actual data were fairly significant, and easily large enough to account for the errors found in the 3-D fits. The differences between the curve fit and the actual data were consistent between the craters. This consistency suggested that the differences were due to the fact that a parabola did not sufficiently define the generic crater geometry
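
The least-squares test described, fitting a parabola to a digitized cross-section and inspecting the residual, can be sketched as:

```python
import numpy as np

def parabola_fit_residual(x, z):
    """Least-squares fit of z = a*x**2 + b*x + c to a digitized crater
    cross-section; returns the coefficients and the RMS residual, which is
    the kind of misfit used to test the paraboloid assumption."""
    coeffs = np.polyfit(x, z, 2)
    rms = np.sqrt(np.mean((np.polyval(coeffs, x) - z) ** 2))
    return coeffs, rms
```

A truly parabolic profile yields a near-zero residual, while a flat-bottomed or bowl-shaped crater leaves a systematic residual, exactly the discrepancy the cross-sectioning experiment revealed.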

  20. 3D Building Modeling and Reconstruction using Photometric Satellite and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Izadi, Mohammad

    In this thesis, the problem of three dimensional (3D) reconstruction of building models using photometric satellite and aerial images is investigated. Two systems are presented: 1) 3D building reconstruction using a nadir single-view image, and 2) 3D building reconstruction using slant multiple-view aerial images. The first system detects building rooftops in orthogonal aerial/satellite images using a hierarchical segmentation algorithm and a shadow verification approach. The heights of detected buildings are then estimated using a fuzzy rule-based method, which measures the height of a building by comparing its predicted shadow region with the actual shadow evidence in the image. This system finally generates a KML (Keyhole Markup Language) file as output, which contains 3D models of the detected buildings. The second system uses the geolocation information of a scene containing a building of interest and loads all slant-view images that contain this scene from an input image dataset. These images are then searched automatically to choose image pairs with different views of the scene (north, east, south and west) based on the geolocation and auxiliary data accompanying the input data (metadata describing the acquisition parameters at capture time). The camera parameters corresponding to these images are refined using a novel point matching algorithm. Next, the system independently reconstructs 3D flat surfaces that are visible in each view using an iterative algorithm. The 3D surfaces generated for all views are combined, and redundant surfaces are removed to create a complete set of 3D surfaces. Finally, the combined 3D surfaces are connected together to generate a more complete 3D model. In the experimental results, both systems are evaluated quantitatively and qualitatively, and different aspects of the two systems, including accuracy, stability, and execution time, are discussed.
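
The shadow-based height estimation in the first system rests on simple trigonometry: in a nadir image over flat ground, a building of height h casts a shadow of length L = h / tan(sun elevation). The inverse computation is sketched below (the fuzzy rule-based shadow matching itself is not reproduced here):

```python
import math

def height_from_shadow(shadow_len_m, sun_elev_deg):
    """Building height from measured shadow length in a nadir image:
    h = L * tan(sun elevation). Assumes flat ground and a vertical wall."""
    return shadow_len_m * math.tan(math.radians(sun_elev_deg))
```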

  1. 3D exploitation of large urban photo archives

    NASA Astrophysics Data System (ADS)

    Cho, Peter; Snavely, Noah; Anderson, Ross

    2010-04-01

    Recent work in computer vision has demonstrated the potential to automatically recover camera and scene geometry from large collections of uncooperatively-collected photos. At the same time, aerial ladar and Geographic Information System (GIS) data are becoming more readily accessible. In this paper, we present a system for fusing these data sources in order to transfer 3D and GIS information into outdoor urban imagery. Applying this system to 1000+ pictures shot of the lower Manhattan skyline and the Statue of Liberty, we present two proof-of-concept examples of geometry-based photo enhancement which are difficult to perform via conventional image processing: feature annotation and image-based querying. In these examples, high-level knowledge projects from 3D world-space into georegistered 2D image planes and/or propagates between different photos. Such automatic capabilities lay the groundwork for future real-time labeling of imagery shot in complex city environments by mobile smart phones.
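
The projection of high-level knowledge "from 3D world-space into georegistered 2D image planes" is, at its core, a pinhole camera projection; a minimal sketch, assuming a calibrated camera (intrinsics K, rotation R, translation t) recovered by the reconstruction step:

```python
import numpy as np

def project_points(K, R, t, world_pts):
    """Project Nx3 world points into pixel coordinates with a pinhole
    camera: x ~ K [R | t] X. Returns Nx2 pixels and a mask for points
    in front of the camera."""
    cam = R @ world_pts.T + t[:, None]      # 3xN points in camera frame
    in_front = cam[2] > 0
    px = K @ cam                            # homogeneous pixel coordinates
    px = (px[:2] / px[2]).T
    return px, in_front
```

Annotations attached to a georegistered 3D point can then be drawn at its projected pixel in every photo whose camera sees it.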

  2. Uncertainty preserving patch-based online modeling for 3D model acquisition and integration from passive motion imagery

    NASA Astrophysics Data System (ADS)

    Tang, Hao; Chang, Peng; Molina, Edgardo; Zhu, Zhigang

    2012-06-01

    In both military and civilian applications, abundant data from diverse sources captured on airborne platforms are often available for a region attracting interest. Since the data often include motion imagery streams collected from multiple platforms flying at different altitudes, with sensors of different fields of view (FOVs), resolutions, frame rates and spectral bands, it is imperative that a cohesive site model encompassing all the information can be quickly built and presented to the analysts. In this paper, we propose to develop an Uncertainty Preserving Patch-based Online Modeling System (UPPOMS) leading towards the automatic creation and updating of a cohesive, geo-registered, uncertainty-preserving, efficient 3D site terrain model from passive imagery with varying fields of view and phenomenologies. The proposed UPPOMS has the following technical thrusts that differentiate our approach from others: (1) An uncertainty-preserving, patch-based 3D model is generated, which enables the integration of images captured with a mixture of NFOV and WFOV and/or visible and infrared motion imagery sensors. (2) Patch-based stereo matching and multi-view 3D integration are utilized, which are suitable for scenes with many low-texture regions, particularly in mid-wave infrared images. (3) In contrast to conventional volumetric algorithms, whose computational and storage costs grow exponentially with the amount of input data and the scale of the scene, the proposed UPPOMS system employs an online algorithmic pipeline and scales well to large amounts of input data. Experimental results and discussions of future work are provided.

  3. Extracting and analyzing micro-Doppler from ladar signatures

    NASA Astrophysics Data System (ADS)

    Tahmoush, Dave

    2015-05-01

    Ladar and other 3D imaging modalities have the capability of creating 3D micro-Doppler to analyze the micro-motions of human subjects. An additional capability to the recognition of micro-motion is the recognition of the moving part, such as the hand or arm. Combined with measured RCS values of the body, ladar imaging can be used to ground-truth the more sensitive radar micro-Doppler measurements and associate the moving part of the subject with the measured Doppler and RCS from the radar system. The 3D ladar signatures can also be used to classify activities and actions on their own, achieving an 86% accuracy using a micro-Doppler based classification strategy.
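
Extracting micro-Doppler from a 3D ladar track can be pictured as differentiating the range history of a tracked body part across frames; the sketch below (finite differences, plus an assumed wavelength for the equivalent radar Doppler shift) is illustrative, not the author's pipeline:

```python
import numpy as np

def micro_doppler(ranges_m, frame_rate_hz, wavelength_m=1.55e-6):
    """Micro-motion of a tracked part from its per-frame range history:
    radial velocity v = dR/dt by finite differences, and the equivalent
    Doppler shift f = 2*v/lambda a coherent sensor would measure."""
    v = np.diff(ranges_m) * frame_rate_hz
    return v, 2.0 * v / wavelength_m
```

Repeating this per limb (hand, arm, torso) yields the part-labeled velocity traces that can ground-truth radar micro-Doppler.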

  4. Lift-Off: Using Reference Imagery and Freehand Sketching to Create 3D Models in VR.

    PubMed

    Jackson, Bret; Keefe, Daniel F

    2016-04-01

    Three-dimensional modeling has long been regarded as an ideal application for virtual reality (VR), but current VR-based 3D modeling tools suffer from two problems that limit creativity and applicability: (1) the lack of control for freehand modeling, and (2) the difficulty of starting from scratch. To address these challenges, we present Lift-Off, an immersive 3D interface for creating complex models with a controlled, handcrafted style. Artists start outside of VR with 2D sketches, which are then imported and positioned in VR. Then, using a VR interface built on top of image processing algorithms, 2D curves within the sketches are selected interactively and "lifted" into space to create a 3D scaffolding for the model. Finally, artists sweep surfaces along these curves to create 3D models. Evaluations are presented for both long-term users and for novices who each created a 3D sailboat model from the same starting sketch. Qualitative results are positive, with the visual style of the resulting models of animals and other organic subjects as well as architectural models matching what is possible with traditional fine art media. In addition, quantitative data from logging features built into the software are used to characterize typical tool use and suggest areas for further refinement of the interface. PMID:26780801

  5. Dubai 3d Textured Mesh Using High Quality Resolution Vertical/oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Tayeb Madani, Adib; Ziad Ahmad, Abdullateef; Christoph, Lueken; Hammadi, Zamzam; Manal Abdullah Sabeal

    2016-06-01

    Providing high-quality 3D data at reasonable cost has always been essential, as it supplies the core data and foundation for an information-based decision-making tool for urban environments, one capable of giving decision makers, stakeholders, professionals, and public users 3D views and 3D analysis tools of spatial information that enable real-world views. Such a tool helps improve users' orientation and increases their efficiency in tasks related to city planning, inspection, infrastructure, roads, and cadastre management. In this paper, the capability of multi-view Vexcel UltraCam Osprey camera images is examined to provide a 3D model of building façades using an efficient image-based modeling workflow adopted by commercial software. The main steps of this work are specification, point cloud generation, and 3D modeling. After improving the initial values of the interior and exterior orientation parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the images to generate a point cloud. A mesh model is then calculated from the points and refined to obtain an accurate model of the buildings. Finally, a texture is assigned to the mesh to create a realistic 3D model. Based on visual assessment, the resulting model provides sufficient LoD2 detail of the buildings. The objective of this paper is neither to compare nor to promote one specific technique over another, nor to promote one sensor-based system over other systems or mechanisms presented in existing or previous papers. The idea is to share experience.

  6. Comprehensive high-speed simulation software for ladar systems

    NASA Astrophysics Data System (ADS)

    Kim, Seongjoon; Hwang, Seran; Son, Minsoo; Lee, Impyeong

    2011-11-01

    Simulation of LADAR systems is particularly important for verifying a system design through performance assessment. Although many researchers have attempted to develop various kinds of LADAR simulators, most have limitations that prevent their practical use in the general design of diverse types of LADAR system. We therefore developed high-speed simulation software applicable to different types of LADAR system. In summary, we analyzed previous studies on LADAR simulation and, building on that work, performed sensor modeling in various aspects. For high-speed operation, we incorporate time-efficient incremental coherent ray-tracing algorithms, 3D spatial database systems for efficient spatial queries, and CUDA-based parallel computing. The simulator is composed of three main modules: geometry, radiometry, and visualization. In our experiments, the simulation software successfully generated simulated data from pre-defined system parameters. The simulation results were validated by comparison with real LADAR data, and the intermediate results are promising. We believe the developed simulator can be widely useful in various fields.
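    For a flavor of what the geometry and radiometry modules compute per pulse, a toy time-of-flight plus simplified range-equation echo for a diffuse target might look like the sketch below. The parameter values and the exact radiometric form are illustrative assumptions, not taken from the paper:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def simulate_return(range_m, p_tx=1.0, reflectance=0.3,
                    atm_atten=1e-4, rx_area=1e-3):
    """Toy per-pulse LADAR computation: round-trip time of flight plus a
    simplified range-equation echo power for a Lambertian target that
    fills the beam. All radiometric parameters are illustrative.
    """
    tof = 2.0 * range_m / C                       # round-trip time, s
    # 1/R^2 spreading plus two-way atmospheric attenuation exp(-2*gamma*R)
    p_rx = (p_tx * reflectance * rx_area / (math.pi * range_m ** 2)
            * math.exp(-2.0 * atm_atten * range_m))
    return tof, p_rx
```

A full simulator would replace the single range with per-ray scene intersections from the spatial database and accumulate returns per detector pixel; the sketch only shows the innermost arithmetic.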

  7. Assimilation of high resolution satellite imagery into the 3D-CMCC forest ecosystem model

    NASA Astrophysics Data System (ADS)

    Natali, S.; Collalti, A.; Candini, A.; Della Vecchia, A.; Valentini, R.

    2012-04-01

    The use of satellite observations for accurate monitoring of the terrestrial biosphere dates back to the very early stages of remote sensing applications. The possibility of observing the ground surface at different wavelengths and in different observation modes (namely active and passive) has given the scientific community an invaluable tool for observing wide areas at resolutions down to the single tree. On the other hand, the continuous development of forest ecosystem models has made it possible to simulate complex ("natural") forest scenarios to evaluate forest status, growth, and future dynamics. Both remote sensing and model-based forest assessment methods have advantages and disadvantages that could be overcome by adopting an integrated approach. In the framework of the European Space Agency project KLAUS, high-resolution optical satellite data have been integrated/assimilated into a forest ecosystem model (named 3D-CMCC) specifically developed for multi-species, multi-age forests. 3D-CMCC can simulate forest areas with different forest layers and trees of different ages at the same point. Moreover, the model can simulate management activities on the forest, thus evaluating the carbon stock evolution that follows a specific management scheme. The model has been modified to include satellite data at 10 m resolution, permitting the use of directly measured information and adding the real phenological cycle of each simulated point to the model. Satellite images were collected by the JAXA ALOS AVNIR-2 sensor. The integration scheme identifies a spatial domain in which each pixel is characterised by a forest structure (species, ages, soil parameters), meteo-climatological parameters, and a satellite-estimated Leaf Area Index. The resulting software package (3D-CMCC-SAT) is built around 3D-CMCC: 2D / 3D input datasets are processed iterating on each point of the

  8. Single-photon sensitive Geiger-mode LADAR cameras

    NASA Astrophysics Data System (ADS)

    Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; McDonald, Paul; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison

    2012-10-01

    Three-dimensional (3D) imaging with short-wavelength infrared (SWIR) Laser Detection and Ranging (LADAR) systems has been successfully demonstrated on various platforms and quickly adopted in many military and civilian applications. To minimize LADAR system size, weight, and power (SWaP), it is highly desirable to maximize camera sensitivity. Recently Spectrolab demonstrated a compact 32x32 LADAR camera with single-photon-level sensitivity at 1064 nm. This camera has many special features, such as non-uniform bias correction, a range gate width variable from 2 to 6 microseconds, windowing for smaller arrays, and short-pixel protection. Boeing integrated this camera with a 1.06 μm pulsed laser on various platforms and demonstrated 3D imaging. The features and recent test results of the 32x128 camera under development are also introduced.

  9. 3-D Raman Imagery and Atomic Force Microscopy of Ancient Microscopic Fossils

    NASA Astrophysics Data System (ADS)

    Schopf, J.

    2003-12-01

    Investigations of the Precambrian (~540- to ~3,500-Ma-old) fossil record depend critically on identification of authentic microbial fossils. Combined with standard paleontologic studies (e.g., of paleoecologic setting, population structure, cellular morphology, preservational variants), two techniques recently introduced to such studies -- Raman imagery and atomic force microscopy -- can help meet this need. Laser-Raman imagery is a non-intrusive, non-destructive technique that can be used to demonstrate a micron-scale one-to-one correlation between optically discernible morphology and the organic (kerogenous) composition of individual microbial fossils(1,2), a prime indicator of biogenicity. Such analyses can be used to characterize the molecular-structural makeup of organic-walled microscopic fossils both in acid-resistant residues and in petrographic thin sections, whether the fossils analyzed are exposed at the upper surface of, or are embedded within (to depths >65 microns), the section studied. By providing means to map chemically, in three dimensions, whole fossils or parts of such fossils(3), Raman imagery can also show the presence of cell lumina, interior cellular cavities, another prime indicator of biogenicity. Atomic force microscopy (AFM) has been used to visualize the nanometer-scale structure of the kerogenous components of single Precambrian microscopic fossils(4). Capable of analyzing minute fragments of ancient organic matter exposed at the upper surface of thin sections (or of kerogen particles deposited on flat surfaces), such analyses hold promise not only for discriminating between biotic and abiotic micro-objects but for elucidation of the domain size -- and, thus, the degree of graphitization -- of the graphene subunits of the carbonaceous matter analyzed. These techniques -- both new to paleobiology -- can provide useful insight into the biogenicity and geochemical maturity of ancient organic matter.
References: (1) Kudryavtsev, A.B. et

  10. 3D target tracking in infrared imagery by SIFT-based distance histograms

    NASA Astrophysics Data System (ADS)

    Yan, Ruicheng; Cao, Zhiguo

    2011-11-01

    The SIFT tracking algorithm is an excellent point-based tracker with high performance and accuracy, owing to its robustness against rotation, scale change, and occlusion. However, when tracking a large 3D target in complicated real scenarios in a forward-looking infrared (FLIR) image sequence taken from an airborne moving platform, a tracked point located on a vertical surface usually drifts away from its correct position. In this paper, we propose a novel algorithm for 3D target tracking in FLIR image sequences. Our approach uses SIFT keypoints detected in consecutive frames for point correspondence. The candidate position of the tracked point is first estimated by computing the affine transformation from local corresponding SIFT keypoints. The correct position is then located via an optimization step: Euclidean distances between a candidate point and the SIFT keypoints nearby are computed and formed into a SIFT-based distance histogram. This distance histogram defines the cost of associating each candidate point with the correct tracked point, using a constraint based on the topology of each candidate point with respect to its surrounding SIFT keypoints. Minimizing this cost is formulated as a combinatorial optimization problem. Experiments demonstrate that the proposed algorithm efficiently improves tracking performance and accuracy.
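    The distance-histogram cost described in this abstract can be sketched in a few lines, assuming a simple normalized histogram and an L1 comparison against the histogram carried over from the previous frame; the paper's binning and cost details may differ:

```python
import numpy as np

def distance_histogram(point, keypoints, bins=8, r_max=100.0):
    """Normalized histogram of Euclidean distances from `point` to the
    keypoints within radius r_max -- a simplified stand-in for the
    paper's SIFT-based distance histogram, which encodes the topology of
    a candidate point relative to its surrounding SIFT keypoints."""
    d = np.linalg.norm(np.asarray(keypoints) - np.asarray(point), axis=1)
    d = d[d <= r_max]                       # keep only nearby keypoints
    hist, _ = np.histogram(d, bins=bins, range=(0.0, r_max))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def association_cost(candidate, keypoints, reference_hist, **kw):
    """L1 distance between the candidate's histogram and the reference
    histogram from the previous frame; the tracked position is the
    candidate minimizing this cost."""
    h = distance_histogram(candidate, keypoints, **kw)
    return float(np.abs(h - reference_hist).sum())
```

A candidate sitting at the same position relative to the keypoint constellation as in the previous frame reproduces the reference histogram and so gets zero cost, while a drifted candidate does not.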

  11. The Maradi fault zone: 3-D imagery of a classic wrench fault in Oman

    SciTech Connect

    Neuhaus, D. )

    1993-09-01

    The Maradi fault zone extends for almost 350 km in a north-northwest-south-southeast direction from the Oman Mountain foothills into the Arabian Sea, thereby dissecting two prolific hydrocarbon provinces, the Ghaba and Fahud salt basins. During its major Late Cretaceous period of movement, the Maradi fault zone acted as a left-lateral wrench fault. An early exploration campaign based on two-dimensional seismic targeted at fractured Cretaceous carbonates had mixed success and resulted in the discovery of one producing oil field. The structural complexity, rapidly varying carbonate facies, and uncertain fracture distribution prevented further drilling activity. In 1990 a three-dimensional (3-D) seismic survey covering some 500 km² was acquired over the transpressional northern part of the Maradi fault zone. The good data quality and the focusing power of 3-D has enabled stunning insight into the complex structural style of a "textbook" wrench fault, even at deeper levels and below reverse faults hitherto unexplored. Subtle thickness changes within the carbonate reservoir and the unconformably overlying shale seal provided the tool for the identification of possible shoals and depocenters. Horizon attribute maps revealed in detail the various structural components of the wrench assemblage and highlighted areas of increased small-scale faulting/fracturing. The results of four recent exploration wells will be demonstrated and their impact on the interpretation discussed.

  12. Stereo 3-D Imagery Uses for Definition of Geologic Structures and Geomorphic Features (Anaglyph colored glasses employed)

    NASA Astrophysics Data System (ADS)

    Hicks, B. G.; Fuente, J. D.

    2008-12-01

    Recently completed projects incorporating TopoMorpher* digital images as adjuncts to commonly employed tools have emphasized the distinct advantage gained with stereo 3-D digital imagery. Manipulating scale, relief (four types of digital shading), sun angle, viewing direction, scene tilt, etc. -- to produce differing views of the same terrain -- aids in identifying, tracing, and interpreting ground surface anomalies. *TopoMorpher is a digital software product of Eighteen Software (18 software.com). The advantage of stereo 3-D views combined with digital removal of the vegetation that blocks interpretation (commonly called 'bare earth/naked' views) cannot be over-emphasized. The TopoMorpher program creates scenes transferable to disk for printing at any size, and includes computer-projector support that allows large displays and easy discussion for groups. The examples include (1) fault systems for targeting water well locations in bedrock and (2) delineation of debris slide and avalanche terrain. Combining geologic mapping and spring locations with stereo 3-D TopoMorpher tracing of fault lineaments has allowed targeting of water well drilling sites. Selection of geophysical study areas for well siting has been simplified. Stereo 3-D TopoMorpher has a specific "relief/terrain setting" to define potential failure sites by producing detailed colored slope maps keyed to field-data-derived parameters. Posters display individual project images and large-scale overviews for identifying unusual major terrain features. Images at scales using 10 and 30 meter digital data as well as lidar (< 1 meter) will be shown.

  13. Initial Results of 3D Topographic Mapping Using Lunar Reconnaissance Orbiter Camera (LROC) Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Li, R.; Oberst, J.; McEwen, A. S.; Archinal, B. A.; Beyer, R. A.; Thomas, P. C.; Chen, Y.; Hwangbo, J.; Lawver, J. D.; Scholten, F.; Mattson, S. S.; Howington-Kraus, A. E.; Robinson, M. S.

    2009-12-01

    The Lunar Reconnaissance Orbiter (LRO), launched June 18, 2009, carries the Lunar Reconnaissance Orbiter Camera (LROC) as one of seven remote sensing instruments on board. The camera system is equipped with a Wide Angle Camera (WAC) and two Narrow Angle Cameras (NAC) for systematic lunar surface mapping and detailed site characterization for potential landing site selection and resource identification. The LROC WAC is a pushframe camera with five 14-line by 704-sample framelets for visible light bands and two 16-line by 512-sample (summed 4x to 4 by 128) UV bands. The WAC can also acquire monochrome images with a 14-line by 1024-sample format. At the nominal 50-km orbit, the ground scale is 75 m/pixel for the visible bands and 383 m/pixel for the UV bands. Overlapping WAC images from adjacent orbits can be used to map topography at a scale of a few hundred meters. The two panchromatic NAC cameras are pushbroom imaging sensors, each with a Cassegrain telescope of 700-mm focal length. The two NAC cameras are aligned with a small overlap in the cross-track direction so that they cover a 5-km swath with a combined field-of-view (FOV) of 5.6°. At an altitude of 50 km, the NAC provides panchromatic images from its 5,000-pixel linear CCD at a ground scale of 0.5 m/pixel. Calibration of the cameras was performed using precision collimator measurements to determine the camera principal points and radial lens distortion. The orientation of the two NAC cameras is estimated by a boresight calibration using double and triple overlapping NAC images of the lunar surface. The resulting calibration is incorporated into a photogrammetric bundle adjustment (BA), which models the LROC camera imaging geometry, in order to refine the exterior orientation (EO) parameters initially retrieved from the SPICE kernels. Consequently, the improved EO parameters can significantly enhance the quality of topographic products derived from LROC NAC imagery. In addition, an analysis of the spacecraft
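    As a sanity check on the NAC figures quoted above, the pinhole-camera ground-sample-distance relation GSD = H·p/f reproduces the 0.5 m/pixel ground scale if one assumes a 7 µm detector pitch; the pitch is inferred here, not stated in the abstract:

```python
def ground_sample_distance(altitude_m, focal_length_m, pixel_pitch_m):
    """Pinhole-camera ground sample distance: GSD = H * p / f."""
    return altitude_m * pixel_pitch_m / focal_length_m

# 700 mm focal length at the 50 km orbit, with an assumed 7 micrometre
# detector pitch, gives the abstract's 0.5 m/pixel ground scale.
gsd = ground_sample_distance(50_000.0, 0.700, 7e-6)   # 0.5 m/pixel
# A 5,000-pixel line then spans ~2.5 km per NAC, consistent with the
# combined 5 km swath of the two overlapping cameras.
```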

  14. Very fast road database verification using textured 3D city models obtained from airborne imagery

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Ziems, Marcel; Rottensteiner, Franz; Pohl, Melanie

    2014-10-01

    Road databases are known to be an important part of any geodata infrastructure, e.g. as the basis for urban planning or emergency services. Updating road databases for crisis events must be performed quickly and with the highest possible degree of automation. We present a semi-automatic algorithm for road verification using textured 3D city models, starting from aerial or even UAV-images. This algorithm contains two processes, which exchange input and output, but basically run independently from each other. These processes are textured urban terrain reconstruction and road verification. The first process contains a dense photogrammetric reconstruction of 3D geometry of the scene using depth maps. The second process is our core procedure, since it contains various methods for road verification. Each method represents a unique road model and a specific strategy, and thus is able to deal with a specific type of roads. Each method is designed to provide two probability distributions, where the first describes the state of a road object (correct, incorrect), and the second describes the state of its underlying road model (applicable, not applicable). Based on the Dempster-Shafer Theory, both distributions are mapped to a single distribution that refers to three states: correct, incorrect, and unknown. With respect to the interaction of both processes, the normalized elevation map and the digital orthophoto generated during 3D reconstruction are the necessary input - together with initial road database entries - for the road verification process. If the entries of the database are too obsolete or not available at all, sensor data evaluation enables classification of the road pixels of the elevation map followed by road map extraction by means of vectorization and filtering of the geometrically and topologically inconsistent objects. 
Depending on the time issue and availability of a geo-database for buildings, the urban terrain reconstruction procedure has semantic models
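    The mapping from the two per-method distributions (road state and model applicability) to the three states correct/incorrect/unknown can be sketched as a discounting step in the spirit of Dempster-Shafer theory. This is one plausible reading of the combination, not necessarily the paper's exact rule:

```python
def fuse_road_evidence(p_correct, p_applicable):
    """Map a road-state distribution (correct vs. incorrect) and a
    model-applicability distribution (applicable vs. not) to a single
    belief assignment over {correct, incorrect, unknown}.

    Follows Shafer's discounting idea: evidence about the road state
    only counts to the degree the underlying road model applies, and the
    remaining mass goes to 'unknown'. Illustrative sketch only.
    """
    return {
        "correct": p_applicable * p_correct,
        "incorrect": p_applicable * (1.0 - p_correct),
        "unknown": 1.0 - p_applicable,
    }

# A method confident the road is correct (0.9) but whose road model only
# partially applies (0.8) leaves 20% of its mass as 'unknown'.
masses = fuse_road_evidence(0.9, 0.8)
```

Under this reading, a verification method whose road model does not apply at all contributes pure 'unknown' mass, so it never overrules a method whose model fits the observed road type.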

  15. Inlining 3d Reconstruction, Multi-Source Texture Mapping and Semantic Analysis Using Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Poznanska, A. M.

    2016-06-01

    This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that are simultaneously used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements such as windows, which are reintegrated into the original 3D models. Tests on real-world data of Heligoland/Germany comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows, and good visual quality when rendered with GPU-based perspective correction. As part of the process, building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information, the detected roof, wall, and ground surfaces are intersected and limited in their extension to form a closed 3D building hull. For texture mapping, the hull polygons are projected into each possible input bitmap to find suitable color sources with respect to coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients that are applied when the visible image parts for each building polygon are copied into a compact texture atlas, without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches. Following multi-resolution segmentation and classification based on brightness and contrast differences, potential window objects are evaluated against geometric constraints and

  16. Two Eyes, 3D Early Results: Stereoscopic vs 2D Representations of Highly Spatial Scientific Imagery

    NASA Astrophysics Data System (ADS)

    Price, Aaron

    2013-06-01

    "Two Eyes, 3D" is a 3-year NSF-funded research project studying the educational impact of using stereoscopic representations in informal settings. The first study conducted as part of the project tested children aged 5-12 on their ability to perceive spatial elements of slides of scientific objects shown to them in either stereoscopic or 2D format. Children were also tested for prior spatial ability. Early results suggest that stereoscopy does not have a major impact on perceiving the spatial elements of an image, but it does have a more significant impact on how the children apply that knowledge when presented with a common-sense situation. The project is run by the AAVSO, and this study was conducted at the Boston Museum of Science.

  17. Learning structured models for segmentation of 2-D and 3-D imagery.

    PubMed

    Lucchi, Aurelien; Marquez-Neila, Pablo; Becker, Carlos; Li, Yunpeng; Smith, Kevin; Knott, Graham; Fua, Pascal

    2015-05-01

    Efficient and accurate segmentation of cellular structures in microscopic data is an essential task in medical imaging. Many state-of-the-art approaches to image segmentation use structured models whose parameters must be carefully chosen for optimal performance. A popular choice is to learn them using a large-margin framework and more specifically structured support vector machines (SSVM). Although SSVMs are appealing, they suffer from certain limitations. First, they are restricted in practice to linear kernels because the more powerful nonlinear kernels cause the learning to become prohibitively expensive. Second, they require iteratively finding the most violated constraints, which is often intractable for the loopy graphical models used in image segmentation. This requires approximation that can lead to reduced quality of learning. In this paper, we propose three novel techniques to overcome these limitations. We first introduce a method to "kernelize" the features so that a linear SSVM framework can leverage the power of nonlinear kernels without incurring much additional computational cost. Moreover, we employ a working set of constraints to increase the reliability of approximate subgradient methods and introduce a new way to select a suitable step size at each iteration. We demonstrate the strength of our approach on both 2-D and 3-D electron microscopic (EM) image data and show consistent performance improvement over state-of-the-art approaches. PMID:25438309
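    The "kernelize the features" idea can be illustrated with a generic explicit feature map: random Fourier features (Rahimi and Recht) approximate an RBF kernel so that a linear model trained on the mapped features behaves like a nonlinear kernel machine. The paper's own kernelization scheme differs in detail; this sketch only shows the generic principle:

```python
import numpy as np

def random_fourier_features(X, n_features=64, gamma=1.0, seed=0):
    """Explicit feature map z(x) such that z(x).z(y) approximates the
    RBF kernel exp(-gamma * ||x - y||^2). A linear SSVM on z(X) then
    enjoys nonlinear-kernel behavior at linear-model cost."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Sample frequencies from the kernel's Fourier transform N(0, 2*gamma*I)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

With enough random features, the Gram matrix of the mapped data converges to the exact RBF Gram matrix, which is the property a linear large-margin learner exploits.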

  18. Automatic Detection, Segmentation and Classification of Retinal Horizontal Neurons in Large-scale 3D Confocal Imagery

    SciTech Connect

    Karakaya, Mahmut; Kerekes, Ryan A; Gleason, Shaun Scott; Martins, Rodrigo; Dyer, Michael

    2011-01-01

    Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.

  19. Quantification of gully volume using very high resolution DSM generated through 3D reconstruction from airborne and field digital imagery

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; Zarco-Tejada, Pablo; Laredo, Mario; Gómez, Jose Alfonso

    2013-04-01

    Major advances have been made recently in automatic 3D photo-reconstruction techniques using uncalibrated, non-metric cameras (James and Robson, 2012). However, their application to soil conservation studies and landscape feature identification is only beginning. The aim of this work is to compare the performance of a remote sensing technique using a digital camera mounted on an airborne platform with 3D photo-reconstruction, a method already validated for gully erosion assessment (Castillo et al., 2012). A field survey was conducted in November 2012 in a 250 m-long gully located in field crops on a Vertisol in Cordoba (Spain). The airborne campaign used a 4000x3000 digital camera installed onboard an aircraft flying at 300 m above ground level to acquire 6 cm resolution imagery. A total of 990 images were acquired over the area, ensuring a large overlap in the across- and along-track directions of the aircraft. An ortho-mosaic and digital surface model (DSM) were obtained through automatic aerial triangulation and camera calibration methods. For the field-level photo-reconstruction technique, the gully was divided into several reaches to allow appropriate reconstruction (about 150 pictures taken per reach), and the resulting point clouds were finally merged into a single mesh. A centimetric-accuracy GPS provided a benchmark dataset for the gully perimeter and distinguishable reference points, allowing assessment of the measurement errors of the airborne technique and georeferencing of the photo-reconstruction 3D model. The uncertainty in defining the gully limits was explicitly addressed by comparing several criteria derived from the 3D models (slope and second derivative) with the outer perimeter obtained by the GPS operator, who visually identified the change in slope at the top of the gully walls. In this study we discuss the magnitude of planimetric and altimetric errors and the differences observed between the
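    Once a measured DSM and a pre-erosion reference surface are georeferenced to a common grid, the gully-volume estimate itself reduces to a cell-wise integral of the depth. A minimal sketch, where the reference surface is a hypothetical input (the paper interpolates it from the gully perimeter):

```python
import numpy as np

def gully_volume(dsm, reference, cell_size):
    """Eroded volume as the integral of (reference - DSM) over cells
    where the measured surface lies below the pre-erosion reference.
    cell_size is the ground resolution in metres (e.g. 0.06 m for the
    6 cm airborne imagery in the abstract)."""
    depth = np.clip(reference - dsm, 0.0, None)   # deposition ignored
    return float(depth.sum() * cell_size ** 2)
```

Comparing this integral computed from the airborne DSM against the one from the photo-reconstruction mesh (rasterized to the same grid) gives the volumetric error of the airborne technique.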

  20. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high-resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D building models, a key element of city structures for numerous urban mapping applications. In this paper, a novel model-based approach is proposed for automatic recognition of building roof models, such as flat, gable, hip, and pyramid-hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial ortho-photos. The main steps of this approach are building segmentation, feature extraction and learning, and finally building roof labeling in a supervised, pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various building types over an urban area. In this framework, the height information provides invariant geometric features that let the convolutional neural network localize the boundary of each individual roof. A CNN is a feed-forward neural network built on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, learning time can be decreased significantly using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach in detecting and extracting building roof patterns automatically, considering the complementary nature of height and RGB information.
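    A minimal numpy forward pass illustrates the ingredients the abstract names (convolution over a height patch, subsampling, and a classification layer over the four roof classes). All shapes, kernels, and weights here are illustrative stand-ins for the paper's pre-trained deep CNN:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation, as in CNN layers)."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def roof_cnn_forward(patch, kernel, weights):
    """One conv + ReLU + 2x2 max-pool + dense softmax over the four roof
    classes (flat, gable, hip, pyramid hip). `patch` stands in for a
    normalized height (nDSM) patch of a segmented building."""
    a = np.maximum(conv2d(patch, kernel), 0.0)            # conv + ReLU
    h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2       # crop to even
    pooled = a[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
    logits = weights @ pooled.ravel()                     # dense layer
    e = np.exp(logits - logits.max())
    return e / e.sum()                                     # softmax
```

A real system would stack several such layers and train the kernels and weights; the sketch only makes the data flow from height patch to class probabilities concrete.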

  1. 3D Case Studies of Monitoring Dynamic Structural Tests using Long Exposure Imagery

    NASA Astrophysics Data System (ADS)

    McCarthy, D. M. J.; Chandler, J. H.; Palmeri, A.

    2014-06-01

    Structural health monitoring uses non-destructive testing programmes to detect long-term degradation phenomena in civil engineering structures. Structural testing may also be carried out to assess a structure's integrity following a potentially damaging event. Such investigations are increasingly carried out with vibration techniques, in which the structural response to artificial or natural excitations is recorded and analysed from a number of monitoring locations. Photogrammetry is of particular interest here since a very high number of monitoring locations can be measured using just a few images. To achieve the imaging frequency necessary to capture the vibration, it has previously been necessary to reduce the image resolution at the cost of spatial measurement accuracy; even specialist sensors are limited by a compromise between sensor resolution and imaging frequency. To alleviate this compromise, a different approach has been developed and is described in this paper. Instead of using high-speed imaging to capture the instantaneous position at each epoch, long-exposure images are used, in which the localised image of the object becomes blurred. The approach has been extended to create 3D displacement vectors for each target point via multiple camera locations, which allows the simultaneous detection of transverse and torsional mode shapes. The proposed approach is frequency invariant, allowing monitoring of higher modal frequencies irrespective of the sampling frequency. Since there is no requirement for imaging frequency, a higher image resolution is possible for the most accurate spatial measurement. The results of a small-scale laboratory test using off-the-shelf consumer cameras are demonstrated. A larger experiment also demonstrates the scalability of the approach.
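    The frequency-invariance claim is easy to check numerically: for sinusoidal motion, the long-exposure blur streak spans the peak-to-peak displacement, so half the streak length recovers the amplitude regardless of the vibration frequency. A minimal sketch, in which sampled positions stand in for the measured pixel extent of the blurred target:

```python
import numpy as np

def streak_amplitude(positions):
    """Vibration amplitude from a long-exposure blur streak: the streak
    spans peak-to-peak motion, so amplitude = half the streak length.
    Works for any vibration frequency, provided the exposure covers at
    least one full period -- the frequency invariance the paper exploits."""
    return (np.max(positions) - np.min(positions)) / 2.0

# The estimate is the same whatever the modal frequency.
t = np.linspace(0.0, 1.0, 100_000)        # 1 s "exposure"
for freq in (5.0, 50.0, 500.0):
    pos = 3.2 * np.sin(2.0 * np.pi * freq * t)   # amplitude 3.2 px
    assert abs(streak_amplitude(pos) - 3.2) < 1e-2
```

The paper's actual measurement recovers this extent (and the intensity profile within it) from the blurred image rather than from known positions, but the invariance argument is the same.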

  2. Differential Synthetic Aperture Ladar

    SciTech Connect

    Stappaerts, E A; Scharlemann, E

    2005-02-07

    We report a differential synthetic aperture ladar (DSAL) concept that relaxes platform and laser requirements compared to conventional SAL. Line-of-sight translation/vibration constraints are reduced by several orders of magnitude, while laser frequency stability is typically relaxed by an order of magnitude. The technique is most advantageous for shorter laser wavelengths, ultraviolet to mid-infrared. Analytical and modeling results, including the effects of speckle and atmospheric turbulence, are presented. Synthetic aperture ladars are of growing interest, and several theoretical and experimental papers have been published on the subject. Compared to RF synthetic aperture radar (SAR), platform/ladar motion and transmitter bandwidth constraints are especially demanding at optical wavelengths. For mid-IR and shorter wavelengths, deviations from a linear trajectory along the synthetic aperture length have to be sub-micron, or their magnitude must be measured to that precision for compensation. The laser coherence time has to be at least the synthetic aperture transit time, or the transmitter phase has to be recorded and a correction applied on detection.

  3. Flight test results of ladar brownout look-through capability

    NASA Astrophysics Data System (ADS)

    Stelmash, Stephen; Münsterer, Thomas; Kramper, Patrick; Samuelis, Christian; Bühler, Daniel; Wegner, Matthias; Sheth, Sagar

    2015-06-01

    The paper discusses recent results of flight tests performed with the Airbus Defence and Space ladar system at Yuma Proving Grounds. The ladar under test was the SferiSense® system, which is in operational use as an in-flight obstacle warning and avoidance system on the NH90 transport helicopter. Only minor modifications were made to the sensor firmware to optimize its performance in brownout. A new filtering algorithm, designed to segment dust artefacts out of the collected 3D data in real time, was also employed. The results proved that this ladar sensor is capable of detecting obstacles through brownout dust clouds extending up to 300 meters in depth from the landing helicopter.
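A minimal sketch of a dust look-through filter in the spirit of the one described above (this is a generic multi-echo heuristic, not the Airbus algorithm; the 300 m figure is taken from the reported dust-cloud depth):

```python
def filter_dust_returns(echoes_m, dust_extent_m=300.0):
    """Keep the farthest echo beyond an assumed dust region for one beam.

    echoes_m: ranges (m) of all echoes received for a single laser beam.
    Returns the most likely hard-target range, or None if every echo
    falls inside the dust region.
    """
    solid = [r for r in echoes_m if r > dust_extent_m]
    return max(solid) if solid else None
```

The design choice mirrors a common multi-echo ladar heuristic: dust produces weak near returns, while a hard obstacle behind the cloud produces the last echo on the beam.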

  4. Mapping tropical biodiversity using spectroscopic imagery : characterization of structural and chemical diversity with 3-D radiative transfer modeling

    NASA Astrophysics Data System (ADS)

    Feret, J. B.; Gastellu-Etchegorry, J. P.; Lefèvre-Fonollosa, M. J.; Proisy, C.; Asner, G. P.

    2014-12-01

    The accelerating loss of biodiversity is a major environmental trend. Tropical ecosystems are particularly threatened by climate change, invasive species, farming and natural resource exploitation. Recent advances in remote sensing of biodiversity have confirmed the potential of high-spatial-resolution spectroscopic imagery for species identification and biodiversity mapping. Such information bridges the scale gap between small-scale, highly detailed field studies and large-scale, low-resolution satellite observations. In order to produce fine-scale resolution maps of canopy alpha-diversity and beta-diversity of the Peruvian Amazonian forest, we designed, applied and validated a method based on the spectral variation hypothesis using CAO AToMS (Carnegie Airborne Observatory Airborne Taxonomic Mapping System) images acquired from 2011 to 2013. There is a need to understand, on a quantitative basis, the physical processes leading to this spectral variability. This spectral variability mainly depends on canopy chemistry, structure, and sensor characteristics. 3D radiative transfer modeling provides a powerful framework for studying the relative influence of each of these factors in dense and complex canopies. We simulated series of spectroscopic images with the 3D radiative transfer model DART, with variability gradients in terms of leaf chemistry, individual tree structure, and spatial and spectral resolution, and applied methods for biodiversity mapping. This sensitivity study allowed us to determine the relative influence of these factors on the radiometric signal acquired by different types of sensors. Such a study is particularly important to define the domain of validity of our approach, to refine requirements for the instrumental specifications, and to help prepare the hyperspectral space missions to be launched in the 2015-2025 time frame (EnMAP, PRISMA, HISUI, SHALOM, HYSPIRI, HYPXIM). Simulations in preparation include topographic variations in order to estimate the robustness
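The spectral variation hypothesis invoked above can be illustrated with a toy alpha-diversity score (our own minimal formulation, not the authors' method): pixels in a neighbourhood are scored by their mean spectral distance to the local centroid, so spectrally heterogeneous (species-rich) canopies score higher:

```python
# Toy spectral alpha-diversity: mean Euclidean distance of neighbourhood
# pixels to their spectral centroid. A uniform canopy scores 0; mixed
# canopies score higher.
def spectral_alpha_diversity(pixels):
    """pixels: list of reflectance spectra (equal-length lists of floats)."""
    n, bands = len(pixels), len(pixels[0])
    centroid = [sum(p[b] for p in pixels) / n for b in range(bands)]
    dist = lambda p: sum((p[b] - centroid[b]) ** 2 for b in range(bands)) ** 0.5
    return sum(dist(p) for p in pixels) / n
```

In practice such scores are computed in a reduced (e.g. PCA) spectral space over a moving window, but the core idea is this distance-to-centroid measure.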

  5. Combining Public Domain and Professional Panoramic Imagery for the Accurate and Dense 3d Reconstruction of the Destroyed Bel Temple in Palmyra

    NASA Astrophysics Data System (ADS)

    Wahbeh, W.; Nebiker, S.; Fangi, G.

    2016-06-01

    This paper exploits the potential of dense multi-image 3D reconstruction of destroyed cultural heritage monuments by either using public domain touristic imagery only or by combining the public domain imagery with professional panoramic imagery. The focus of our work is placed on the reconstruction of the temple of Bel, one of the Syrian heritage monuments, which was destroyed in September 2015 by the so-called "Islamic State". The great temple of Bel is considered one of the most important religious buildings of the 1st century AD in the East, with a unique design. The investigations and the reconstruction were carried out using two types of imagery. The first are freely available generic touristic photos collected from the web. The second are panoramic images captured in 2010 for documenting those monuments. In the paper we present a 3D reconstruction workflow for both types of imagery using state-of-the-art dense image matching software, addressing the non-trivial challenges of combining uncalibrated public domain imagery with panoramic images with very wide baselines. We subsequently investigate the aspects of accuracy and completeness obtainable from the public domain touristic images alone and from their combination with spherical panoramas. We furthermore discuss the challenges of co-registering the weakly connected 3D point cloud fragments resulting from the limited coverage of the touristic photos. We then describe an approach using spherical photogrammetry as a virtual topographic survey, allowing the co-registration of a detailed and accurate single 3D model of the temple interior and exterior.

  6. Brassboard development of a MEMS-scanned ladar sensor for small ground robots

    NASA Astrophysics Data System (ADS)

    Stann, Barry L.; Dammann, John F.; Enke, Joseph A.; Jian, Pey-Schuan; Giza, Mark M.; Lawler, William B.; Powers, Michael A.

    2011-06-01

    The Army Research Laboratory (ARL) is researching a short-range ladar imager for navigation, obstacle/collision avoidance, and target detection/identification on small unmanned ground vehicles (UGVs). To date, commercial UGV ladars have been flawed by one or more factors including low pixelization, insufficient range or range resolution, image artifacts, no daylight operation, large size, high power consumption, and high cost. ARL built a breadboard ladar based on a newly developed but commercially available micro-electro-mechanical system (MEMS) mirror coupled to a low-cost pulsed erbium fiber laser transmitter that largely addresses these problems. Last year we integrated the ladar and associated control software on an iRobot PackBot and distributed the ladar imagery data via the PackBot's computer network. The un-tethered PackBot was driven through an indoor obstacle course while displaying the ladar data in real time on a remote laptop computer over a wireless link. We later conducted additional driving experiments in cluttered outdoor environments. This year ARL partnered with General Dynamics Robotics Systems to start construction of a brassboard ladar design. This paper discusses refinements and a rebuild of the various subsystems, including the transmitter and receiver module, the data acquisition and data processing board, and software, that will lead to a more compact, lower cost, and better performing ladar. The current ladar breadboard has a 5-6 Hz frame rate, an image size of 256 (h) × 128 (v) pixels, a 60° × 30° field of regard, 20 m range, eyesafe operation, and 40 cm range resolution (with provisions for super-resolution or accuracy).

  7. Study on key techniques for synthetic aperture ladar system

    NASA Astrophysics Data System (ADS)

    Cao, Changqing; Zeng, Xiaodong; Feng, Zhejun; Zhang, Wenrui; Su, Lei

    2008-03-01

    The spatial resolution of a conventional imaging LADAR system is constrained by the diffraction limit of the telescope aperture. The purpose of this work is to investigate Synthetic Aperture Imaging LADAR (SAIL), which employs aperture synthesis with coherent laser radar to overcome the diffraction limit and achieve fine-resolution, long-range, two-dimensional imaging with modest aperture diameters. Because of these advantages, LADAR based on synthetic aperture theory is becoming a research hotspot and is approaching practical use. Synthetic Aperture LADAR (SAL) technology addresses the critical need for reliable, long-range battlefield awareness. An image that takes radar tens of seconds to produce can be produced in a few thousandths of a second at optical frequencies. While radar waves respond to macroscopic features such as corners, edges, and facets, laser waves interact with microscopic surface characteristics, which results in imagery that appears more familiar and is more easily interpreted. SAL could provide high-resolution optical/infrared imaging. In the present paper we have tried to answer three questions: (1) how the samples are collected over the large "synthetic" aperture; (2) the differences between SAR and SAL; and (3) the key techniques for a SAL system. The principle and progress of SAL are introduced and a typical SAL system is described. Beam stabilization, chirped lasers, and heterodyne detection, which are among the most challenging aspects of SAL, are discussed in detail.
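The diffraction-limit argument can be made concrete with a back-of-the-envelope comparison (the numbers below are illustrative assumptions, not from the paper): a real aperture resolves roughly λR/D, while a synthetic aperture of length L resolves roughly λR/(2L).

```python
# Cross-range resolution: diffraction-limited real aperture vs. synthetic
# aperture. All quantities in metres.
def real_aperture_res(wavelength, rng, diameter):
    """~ lambda * R / D for a filled aperture of the given diameter."""
    return wavelength * rng / diameter

def synthetic_aperture_res(wavelength, rng, synth_len):
    """~ lambda * R / (2 L) for a synthetic aperture of length L."""
    return wavelength * rng / (2.0 * synth_len)

# e.g. a 1.5-um laser at 100 km range: a 10-cm telescope gives ~1.5 m,
# while a 1-m synthetic aperture gives ~7.5 cm.
```

The same formulas show why SAL is so much faster than SAR at a given resolution: the required synthetic aperture length scales with wavelength, which is four to five orders of magnitude smaller at optical frequencies.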

  8. Research on key technologies of LADAR echo signal simulator

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Shi, Rui; Ye, Jiansen; Wang, Xin; Li, Zhuo

    2015-10-01

    The LADAR echo signal simulator is one of the most significant components of hardware-in-the-loop (HWIL) simulation systems for LADAR; it is designed to simulate the LADAR return signal under laboratory conditions. The device provides the laser echo signal of target and background for imaging LADAR systems so that their performance can be tested. Several key technologies are investigated in this paper. Firstly, a 3D model of a typical target is built and transformed into target echo signal data based on the ranging equation and the target's reflection characteristics. Then, the system model and the time series model of the LADAR echo signal simulator are established. Influential factors which induce fixed delay error and random delay error in the simulated return signals are analyzed. In the simulation system, the signal propagation delay of the circuits and the response time of the pulsed lasers belong to the fixed delay error. The counting error of the digital delay generator, the jitter of the system clock and the desynchronization between the trigger signal and the clock signal contribute to the random delay error. Furthermore, these system insertion delays are analyzed quantitatively, and the noise data are obtained. The target echo signals are obtained by superimposing the noise data on the pure target echo signal. In order to overcome these disadvantageous factors, a method of adjusting the timing diagram of the simulation system is proposed. Finally, the simulated echo signals are processed using a detection algorithm to complete the 3D model reconstruction of the object. The simulation results reveal that the range resolution can be better than 8 cm.
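The delay model described above can be sketched as follows (a hedged illustration; the fixed-delay and jitter magnitudes are assumptions, not the paper's measured values). Note that an 8 cm range resolution corresponds to a two-way timing resolution of roughly 2 × 0.08 / c ≈ 0.53 ns:

```python
import random

C = 299_792_458.0  # speed of light, m/s

def echo_delay_s(range_m, fixed_delay_s=25e-9, jitter_rms_s=0.2e-9):
    """Two-way time of flight plus a fixed insertion delay and random jitter.

    The fixed term models circuit propagation and laser response; the
    Gaussian term models clock jitter and delay-generator counting error.
    """
    return 2.0 * range_m / C + fixed_delay_s + random.gauss(0.0, jitter_rms_s)

def range_from_delay_m(delay_s, fixed_delay_s=25e-9):
    """Invert the fixed part of the delay; residual jitter appears as range noise."""
    return (delay_s - fixed_delay_s) * C / 2.0
```

Calibrating out the fixed delay (as the timing-diagram adjustment above does) leaves only the random jitter, which must stay well under 0.5 ns for the quoted 8 cm resolution.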

  9. Airborne ladar man-in-the-loop operations in tactical environments

    NASA Astrophysics Data System (ADS)

    Grobmyer, Joseph E., Jr.; Lum, Tommy; Morris, Robert E.; Hard, Sarah J.; Pratt, H. L.; Florence, Tom; Peddycoart, Ed

    2004-09-01

    The U.S. Army Research, Development and Engineering Command (RDECOM) is developing approaches and processes that will exploit the characteristics of current and future Laser Radar (LADAR) sensor systems for critical man-in-the-loop tactical processes. The importance of timely and accurate target detection, classification, identification, and engagement for future combat systems has been documented and is viewed as a critical enabling factor for FCS survivability and lethality. Recent work has demonstrated the feasibility of using low-cost but relatively capable personal-computer-class systems to exploit the information available in LADAR sensor frames to present the war fighter or analyst with compelling and usable imagery for the target identification and engagement processes in near real time. The advantages of LADAR imagery are significant in environments that provide cover for targets and thus pose difficulties for automated target recognition (ATR) technologies.

  10. Comparison of 3D representations depicting micro folds: overlapping imagery vs. time-of-flight laser scanner

    NASA Astrophysics Data System (ADS)

    Vaiopoulos, Aristidis D.; Georgopoulos, Andreas; Lozios, Stylianos G.

    2012-10-01

    A relatively new field of interest, which continuously gains ground nowadays, is digital 3D modeling. The methodologies, the accuracy and the time and effort required to produce a high-quality 3D model have changed drastically over the last few years. Whereas in the early days of digital 3D modeling, 3D models were only accessible to computer experts in animation working many hours in expensive, sophisticated software, today 3D modeling has become reasonably fast and convenient. On top of that, with online 3D modeling software such as 123D Catch, nearly everyone can produce 3D models with minimum effort and at no cost. The only requirement is panoramic overlapping images of the (still) objects the user wishes to model. This approach, however, has limitations in the accuracy of the model. An objective of the study is to examine these limitations by assessing the accuracy of this 3D modeling methodology against a Terrestrial Laser Scanner (TLS). Therefore, the scope of this study is to present and compare 3D models produced with two different methods: 1) the traditional TLS method with the Leica ScanStation 2 instrument and 2) panoramic overlapping images obtained with a DSLR camera and processed with the free 123D Catch software. The main objective of the study is to evaluate the advantages and disadvantages of the two 3D model producing methodologies. The area represented with the 3D models features multi-scale folding in a cipollino marble formation. The most interesting part, and the most challenging to capture accurately, is an outcrop which includes vertically oriented micro folds. These micro folds have dimensions of a few centimeters, while a relatively strong relief is evident between them (perhaps due to different material composition). The area of interest is located on Mt. Hymittos, Greece.

  11. Low-cost ladar imagers

    NASA Astrophysics Data System (ADS)

    Vasile, S.; Lipson, J.

    2008-04-01

    We have developed low-cost LADAR imagers using photon-counting Geiger avalanche photodiode (GPD) arrays, signal amplification and conditioning interface with integrated active quenching circuits (AQCs) and readout integrated circuit (ROIC) arrays for time to digital conversion (TDC) implemented in FPGA. Our goal is to develop a compact, low-cost LADAR receiver that could be operated with room temperature Si-GPD arrays and cooled InGaAs GPD arrays. We report on architecture selection criteria, integration issues of the GPD, AQC and TDC, gating and programmable features for flexible and low-cost re-configuration, as well as on timing resolution, precision and accuracy of our latest LADAR designs.

  12. 3D Visualisation and Artistic Imagery to Enhance Interest in "Hidden Environments"--New Approaches to Soil Science

    ERIC Educational Resources Information Center

    Gilford, J.; Falconer, R. E.; Wade, R.; Scott-Brown, K. C.

    2014-01-01

    Interactive Virtual Environments (VEs) have the potential to increase student interest in soil science. Accordingly a bespoke "soil atlas" was created using Java3D as an interactive 3D VE, to show soil information in the context of (and as affected by) the over-lying landscape. To display the below-ground soil characteristics, four sets…

  13. Generation of 3D Model for Urban area using Ikonos and Cartosat-1 Satellite Imageries with RS and GIS Techniques

    NASA Astrophysics Data System (ADS)

    Rajpriya, N. R.; Vyas, A.; Sharma, S. A.

    2014-11-01

    Urban design is a subject concerned with the shape, the surface and the physical arrangement of all kinds of urban elements. Urban design is a practical process, and it requires detailed, multi-dimensional description; spatial analysis based on 3D city models offers a way to meet this need. Ahmedabad is the third fastest growing city in the world, with a large amount of development in infrastructure and planning. The fabric of the city is changing and expanding at the same time, which creates a need for 3D visualization of the city to develop sustainable planning. These areas have to be monitored and mapped on a regular basis, and satellite remote sensing images provide a valuable and irreplaceable source for urban monitoring. With this, the derivation of structural urban types or the mapping of urban biotopes becomes possible. The present study focused on the development of a technique for 3D modeling of buildings for urban area analysis and on implementing the encoding standards prescribed in OGC CityGML for urban features. An attempt has been made to develop a 3D city model with level of detail 1 (LOD 1) for part of the city of Ahmedabad in the State of Gujarat, India. It demonstrates the capability to monitor urbanization in 2D and 3D.

  14. Geiger-mode ladar cameras

    NASA Astrophysics Data System (ADS)

    Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Boisvert, Joseph; McDonald, Paul; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison; Van Duyne, Stephen; Pauls, Greg; Gaalema, Stephen

    2011-06-01

    The performance of Geiger-mode LAser Detection and Ranging (LADAR) cameras is primarily defined by individual pixel attributes, such as dark count rate (DCR), photon detection efficiency (PDE), jitter, and crosstalk. However, for the expanding range of LADAR imaging applications, other factors, such as image uniformity, component tolerance, manufacturability, reliability, and operational features, have to be considered. Recently we have developed new 32×32 and 32×128 Read-Out Integrated Circuits (ROICs) for LADAR applications. With multiple filter and absorber structures, the 50-μm-pitch arrays demonstrate pixel crosstalk below the 100 ppm level, while maintaining a PDE greater than 40% at 4 V overbias. Besides the improved epitaxial and process uniformity of the APD arrays, the new ROICs implement a Non-uniform Bias (NUB) circuit providing 4-bit bias voltage tunability over a 2.5 V range to individually bias each pixel. All these features greatly increase the performance uniformity of the LADAR camera. Cameras based on these ROICs were integrated with a data acquisition system developed by Boeing DES. The 32×32 version has a range gate of up to 7 μs and can cover a range window of about 1 km with 14-bit, 0.5 ns timing resolution. The 32×128 camera can be operated at a frame rate of up to 20 kHz with 14-bit, 0.3 ns timing resolution through a full CameraLink interface. The performance of the 32×32 LADAR camera has been demonstrated in a series of field tests on various vehicles.
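The quoted gate and timing figures can be sanity-checked with the usual two-way range conversion r = c·t/2 (our arithmetic, not the manufacturer's):

```python
C = 299_792_458.0  # speed of light, m/s

def gate_to_window_m(gate_s):
    """Range window covered by a range gate of the given duration."""
    return C * gate_s / 2.0

def bin_to_range_m(bin_width_s):
    """Range quantization step implied by one timing bin."""
    return C * bin_width_s / 2.0

# A 7 us gate spans ~1.05 km, and a 0.5 ns timing bin is ~7.5 cm of
# range, consistent with the ~1 km window quoted above.
```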

  15. Geological interpretation and analysis of surface based, spatially referenced planetary imagery data using PRoGIS 2.0 and Pro3D.

    NASA Astrophysics Data System (ADS)

    Barnes, R.; Gupta, S.; Giordano, M.; Morley, J. G.; Muller, J. P.; Tao, Y.; Sprinks, J.; Traxler, C.; Hesina, G.; Ortner, T.; Sander, K.; Nauschnegg, B.; Paar, G.; Willner, K.; Pajdla, T.

    2015-10-01

    We apply the capabilities of the geospatial environment PRoGIS 2.0 and the real-time rendering viewer PRo3D to geological analysis of NASA's Mars Exploration Rover-B (MER-B Opportunity rover) and Mars Science Laboratory (MSL Curiosity rover) datasets. Short-baseline and serendipitous long-baseline stereo Pancam rover imagery are used to create 3D point clouds which can be combined with super-resolution images derived from Mars Reconnaissance Orbiter HiRISE orbital data and super-resolution outcrop images derived from MER Pancam, as well as hand-lens scale images, for geology and outcrop characterization at all scales. Data within the PRoViDE database are presented and accessed through the PRoGIS interface. Simple geological measurement tools are implemented within the PRoGIS and PRo3D web software to accurately measure the dip and strike of bedding in outcrops, create detailed stratigraphic logs for correlation between the areas investigated, and develop realistic 3D models for the characterization of planetary surface processes. Annotation tools are being developed to aid discussion and dissemination of the observations within the planetary science community.

  16. Optimization of space borne imaging ladar sensor for asteroid studies using parameter design

    NASA Astrophysics Data System (ADS)

    Wheel, Peter J.; Dobbs, Michael E.; Sharp, William E.

    2002-10-01

    Imaging LADAR is a hybrid technology that offers the ability to measure basic physical and morphological characteristics (topography, rotational state, and density) of a small body from a single fast flyby, without requiring months in orbit. In addition, the imaging LADAR provides key flight navigation information including range, altitude, hazard/target avoidance, and closed-loop landing/fly-by navigation information. The NEAR Laser Rangefinder demonstrated many of these capabilities as part of the NEAR mission. The imaging LADAR scales the concept of a laser ranger into a full 3D imager. Imaging LADAR systems combine laser illumination of the target (which means that imaging is independent of solar illumination and the image SNR is controlled by the observer) with laser ranging and imaging (producing high-resolution 3D images in a fraction of the time necessary for a passive imager). The technical concept described below alters the traditional design space (dominated by pulsed LADAR systems) with the introduction of a pseudo-noise (PN) coded continuous wave (CW) laser system, which allows for variable range resolution mapping and leverages enormous commercial investments in high-power, long-life lasers for telecommunications.
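The PN-coded CW approach can be sketched with a toy correlator (a generic illustration under our own assumptions, not the flight design): the receiver correlates the return against all circular shifts of the transmitted code and takes the peak to recover the delay, and hence the range.

```python
C = 299_792_458.0  # speed of light, m/s

def pn_delay_chips(tx, rx):
    """Delay (in chips) at which the circular correlation of tx and rx peaks."""
    n = len(tx)
    corr = [sum(tx[i] * rx[(i + d) % n] for i in range(n)) for d in range(n)]
    return corr.index(max(corr))

def delay_to_range_m(delay_chips, chip_s):
    """Convert a chip delay to one-way range for a given chip duration."""
    return C * delay_chips * chip_s / 2.0

# Example with a 7-chip m-sequence (sharp periodic autocorrelation):
tx = [1, 1, 1, -1, -1, 1, -1]
rx = [tx[(i - 3) % 7] for i in range(7)]  # return delayed by 3 chips
```

The chip duration sets the range resolution, which is why a PN-coded CW system offers "variable range resolution mapping": changing the chip rate trades resolution against unambiguous range without changing the laser.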

  17. Forest Inventory Attribute Estimation Using Airborne Laser Scanning, Aerial Stereo Imagery, Radargrammetry and Interferometry-Finnish Experiences of the 3d Techniques

    NASA Astrophysics Data System (ADS)

    Holopainen, M.; Vastaranta, M.; Karjalainen, M.; Karila, K.; Kaasalainen, S.; Honkavaara, E.; Hyyppä, J.

    2015-03-01

    Three-dimensional (3D) remote sensing has enabled detailed mapping of terrain and vegetation heights. Consequently, forest inventory attributes are estimated more and more using point clouds and normalized surface models. In practical applications, mainly airborne laser scanning (ALS) has been used in forest resource mapping. The current status is that ALS-based forest inventories are widespread, and the popularity of ALS has also raised interest toward alternative 3D techniques, including airborne and spaceborne techniques. Point clouds can be generated using photogrammetry, radargrammetry and interferometry: airborne stereo imagery is used to derive photogrammetric point clouds, while very-high-resolution synthetic aperture radar (SAR) data are used in radargrammetry and interferometry. ALS is capable of mapping both the terrain and tree heights in mixed forest conditions, which is an advantage over aerial images or SAR data. However, in many jurisdictions, a detailed ALS-based digital terrain model is already available, and that enables linking photogrammetric or SAR-derived heights to heights above the ground. In other words, in forest conditions, the height of single trees, the height of the canopy and/or the density of the canopy can be measured and used in estimation of forest inventory attributes. In this paper, we first review experiences of the use of digital stereo imagery and spaceborne SAR in estimation of forest inventory attributes in Finland, and we compare the techniques to ALS. In addition, we aim to present new implications based on our experiences.

  18. JAVA implemented MSE optimal bit-rate allocation applied to 3-D hyperspectral imagery using JPEG2000 compression

    NASA Astrophysics Data System (ADS)

    Melchor, J. L., Jr.; Cabrera, S. D.; Aguirre, A.; Kosheleva, O. M.; Vidal, E., Jr.

    2005-08-01

    This paper describes an efficient algorithm, and its Java implementation, for a recently developed mean-squared error (MSE) rate-distortion optimal (RDO) inter-slice bit-rate allocation (BRA) scheme applicable to the JPEG2000 Part 2 (J2KP2) framework. Its performance is illustrated on hyperspectral imagery data using J2KP2 with the Karhunen-Loève transform (KLT) for decorrelation. The results are contrasted with those obtained using the traditional log-variance based BRA method and with the original RDO algorithm. The implementation has been developed as a Java plug-in to be incorporated into our evolving multi-dimensional data compression software tool, denoted CompressMD. The RDO approach to BRA uses discrete rate-distortion curves (RDCs) for each slice of transform coefficients. The generation of each point on an RDC requires a full decompression of that slice; therefore, the efficient version minimizes the number of RDC points needed from each slice by using a localized coarse-to-fine approach denoted RDOEfficient. The scheme is illustrated in detail using a subset of 10 bands of hyperspectral imagery data and is contrasted with the original RDO implementation and the traditional (log-variance) method of BRA, showing that better results are obtained with the RDO methods. The three schemes are also tested on two hyperspectral imagery data sets with all bands present: the Cuprite radiance data from AVIRIS and a set derived from the Hyperion satellite. The results from RDO and RDOEfficient are very close to each other in the MSE sense, indicating that the adaptive approach finds almost the same BRA solution. Surprisingly, the traditional method also performs very close to the RDO methods, indicating that it is very close to being optimal for these types of data sets.
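The traditional log-variance baseline mentioned above has a standard textbook form (this is our rendering of the classical rule, not the paper's Java code): each slice receives the average rate plus half the log2 ratio of its variance to the geometric mean of all slice variances.

```python
import math

def log_variance_allocation(variances, avg_rate_bpp):
    """Classical log-variance bit allocation across transform slices.

    b_k = b_avg + 0.5 * log2(var_k / geometric_mean(var)),
    so high-variance slices get more bits while the average rate is kept.
    """
    n = len(variances)
    log_gm = sum(math.log2(v) for v in variances) / n  # log2 of geometric mean
    return [avg_rate_bpp + 0.5 * (math.log2(v) - log_gm) for v in variances]
```

By construction the allocations average back to the target rate; an RDO scheme instead searches measured rate-distortion curves, which is why it can only do as well or better in MSE.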

  19. Initial progress in the recording of crime scene simulations using 3D laser structured light imagery techniques for law enforcement and forensic applications

    NASA Astrophysics Data System (ADS)

    Altschuler, Bruce R.; Monson, Keith L.

    1998-03-01

    Representation of crime scenes as virtual reality 3D computer displays promises to become a useful and important tool for law enforcement evaluation and analysis, forensic identification and pathological study, and archival presentation during court proceedings. Use of these methods for assessment of evidentiary materials demands complete accuracy of reproduction of the original scene, both in data collection and in its eventual virtual reality representation. The recording of spatially accurate information as soon as possible after the first arrival of law enforcement personnel is advantageous for unstable or hazardous crime scenes and reduces the possibility that either inadvertent measurement error or deliberate falsification may occur or be alleged concerning processing of a scene. Detailed measurements and multimedia archiving of critical surface topographical details in a calibrated, uniform, consistent and standardized quantitative 3D coordinate method are needed. These methods would afford professional personnel in initial contact with a crime scene the means for remote, non-contacting, immediate, thorough and unequivocal documentation of the contents of the scene. Measurements of the relative and absolute global positions of objects and victims, and their dispositions within the scene before their relocation and detailed examination, could be made. Resolution must be sufficient to map both small and large objects. Equipment must be able to map regions at varied resolution as collected from different perspectives. Progress is presented in devising methods for collecting and archiving 3D spatial numerical data from crime scenes, sufficient for law enforcement needs, by remote laser structured light and video imagery. Two types of simulation studies were done. One study evaluated the potential of 3D topographic mapping and 3D telepresence using a robotic platform for explosive ordnance disassembly. The second study involved using the laser mapping system on a

  20. Lossless to lossy compression for hyperspectral imagery based on wavelet and integer KLT transforms with 3D binary EZW

    NASA Astrophysics Data System (ADS)

    Cheng, Kai-jen; Dill, Jeffrey

    2013-05-01

    In this paper, a lossless-to-lossy, transform-based compression scheme for hyperspectral images based on the Integer Karhunen-Loève Transform (IKLT) and the Integer Discrete Wavelet Transform (IDWT) is proposed. Integer transforms are used to achieve reversibility. The IKLT is used as a spectral decorrelator and the 2D-IDWT is used as a spatial decorrelator. The three-dimensional Binary Embedded Zerotree Wavelet (3D-BEZW) algorithm efficiently encodes the hyperspectral volumetric image by implementing progressive bitplane coding. The signs and magnitudes of transform coefficients are encoded separately. Lossy and lossless compression of the signs is implemented by the conventional EZW algorithm and arithmetic coding, respectively. The efficient 3D-BEZW algorithm is applied to code the magnitudes. Further compression can be achieved using arithmetic coding. The lossless and lossy compression performance is compared with other state-of-the-art predictive and transform-based image compression methods on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images. Results show that the 3D-BEZW performance is comparable to predictive algorithms, while its computational cost is comparable to transform-based algorithms.
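Progressive bitplane coding, the core of the 3D-BEZW magnitude coder, can be illustrated in miniature (a toy sketch of the general idea, not the actual zerotree coder): magnitudes are emitted most-significant bitplane first, so a truncated stream still yields a coarse, lossy reconstruction, while the full stream is lossless.

```python
def bitplane_stream(magnitudes, num_planes):
    """Emit integer magnitudes as bitplanes, most significant first."""
    planes = []
    for p in range(num_planes - 1, -1, -1):
        planes.append([(m >> p) & 1 for m in magnitudes])
    return planes

def reconstruct(planes, num_planes):
    """Rebuild magnitudes from however many leading bitplanes were received."""
    n = len(planes[0])
    vals = [0] * n
    for i, plane in enumerate(planes):
        p = num_planes - 1 - i  # plane i carries bit position p
        for j, bit in enumerate(plane):
            vals[j] |= bit << p
    return vals
```

Decoding only the leading planes quantizes every magnitude to the received precision, which is exactly the lossy-to-lossless embedding the abstract describes.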

  1. Deep space LADAR, phase 1

    NASA Astrophysics Data System (ADS)

    Frey, Randy W.; Rawlins, Greg; Zepkin, Neil; Bohlin, John

    1989-03-01

    A pseudo-ranging laser radar (PRLADAR) concept is proposed to provide extended range capability to tracking LADAR systems meeting the long-range requirements of SDI mission scenarios such as the SIE midcourse program. The project will investigate the payoff of several transmitter modulation techniques and a feasibility demonstration using a breadboard implementation of a new receiver concept called the Phase Multiplexed Correlator (PMC) will be accomplished. The PRLADAR concept has specific application to spaceborne LADAR tracking missions where increased CNR/SNR performance gained by the proposed technique may reduce the laser power and/or optical aperture requirement for a given mission. The reduction in power/aperture has similar cost reduction advantages in commercial ranging applications. A successful Phase 1 program will lay the groundwork for a quick reaction upgrade to the AMOS/LASE system in support of near term SIE measurement objectives.

  2. 3D visualisation and artistic imagery to enhance interest in `hidden environments' - new approaches to soil science

    NASA Astrophysics Data System (ADS)

    Gilford, J.; Falconer, R. E.; Wade, R.; Scott-Brown, K. C.

    2014-09-01

    Interactive Virtual Environments (VEs) have the potential to increase student interest in soil science. Accordingly a bespoke 'soil atlas' was created using Java3D as an interactive 3D VE, to show soil information in the context of (and as affected by) the over-lying landscape. To display the below-ground soil characteristics, four sets of artistic illustrations were produced, each set showing the effects of soil organic-matter density and water content on fungal density, to determine potential for visualisations and interactivity in stimulating interest in soil and soil illustrations, interest being an important factor in facilitating learning. The illustrations were created using 3D modelling packages, and a wide range of styles were produced. This allowed a preliminary study of the relative merits of different artistic styles, scientific-credibility, scale, abstraction and 'realism' (e.g. photo-realism or realism of forms), and any relationship between these and the level of interest indicated by the study participants in the soil visualisations and VE. The study found significant differences in mean interest ratings for different soil illustration styles, as well as in the perception of scientific-credibility of these styles, albeit for both measures there was considerable difference of attitude between participants about particular styles. There was also found to be a highly significant positive correlation between participants rating styles highly for interest and highly for scientific-credibility. There was furthermore a particularly high interest rating among participants for seeing temporal soil processes illustrated/animated, suggesting this as a particularly promising method for further stimulating interest in soil illustrations and soil itself.

  3. Use of stereoscopic satellite imagery for 3D mapping of bedrock structure in West Antarctica: An example from the northern Ford Ranges

    NASA Astrophysics Data System (ADS)

    Contreras, A.; Siddoway, C. S.; Porter, C.; Gottfried, M.

    2012-12-01

    In coastal West Antarctica, crustal-scale faults have been minimally mapped using traditional ground-based methods but regional scale structures are inferred mainly on the basis of low resolution potential fields data from airborne geophysical surveys (15 km flightline spacing). We use a new approach to detailed mapping of faults, shear zones, and intrusive relationships using panchromatic and multispectral imagery draped upon a digital elevation model (DEM). Our work focuses on the Fosdick Mountains, a culmination of lower middle crustal rocks exhumed at c. 100 Ma by dextral oblique detachment faulting. Ground truth exists for extensive areas visited during field studies in 2005-2011, providing a basis for spectral analysis of 8-band WorldView-02 imagery for detailed mapping of complex granite-migmatite relationships on the north side of the Fosdick range. A primary aim is the creation of a 3D geological map using the results of spectral analysis merged with a DEM computed from a stereographic pair of high resolution panchromatic images (sequential scenes, acquired 45 seconds apart). DEMs were computed using ERDAS Imagine™ LPS eATE, refined by MATLAB-based interpolation scripts to remove artifacts in the terrain model according to procedures developed by the Polar Geospatial Center (U. Minnesota). Orthorectified satellite imagery that covers the area of the DEMs was subjected to principal component analysis in ESRI ArcGIS™ 10.1, then the different rock types were identified using various combinations of spectral bands in order to map the geology of rock exposures that could not be accessed directly from the ground. Renderings in 3D of the satellite scenes draped upon the DEMs were created using Global Mapper™. The 3D perspective views reveal structural and geological features that are not observed in either the DEM or the satellite imagery alone. The detailed map is crucial for an ongoing petrological / geochemical investigation of Cretaceous crustal

  4. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and a 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images is generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera
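
    The refocusing in step (1) can be illustrated with the classic shift-and-add scheme over sub-aperture views (a generic sketch of the principle, not the system's actual algorithm; the 1D scene, offsets, and focus parameter are invented):

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Shift-and-add refocus of 1D sub-aperture views.

    views: (V, W) rows of intensity seen from V sub-aperture positions.
    offsets: per-view disparity (in pixels) for a unit focus parameter.
    The stack is sharp at the depth whose disparity equals alpha * offset.
    Integer shifts only, for brevity.
    """
    acc = np.zeros(views.shape[1])
    for view, off in zip(views, offsets):
        acc += np.roll(view, int(round(alpha * off)))
    return acc / len(views)

# A point target whose disparity across views equals the view offset:
offsets = np.array([-2, -1, 0, 1, 2])
views = np.zeros((5, 50))
for i, off in enumerate(offsets):
    views[i, 20 - off] = 1.0           # the impulse shifts with viewpoint

sharp = refocus(views, offsets, 1.0)   # disparities cancelled: peak restored
blurry = refocus(views, offsets, 0.0)  # wrong depth: energy smeared
```

    Sweeping alpha over the ~100 candidate depths and measuring local sharpness is one common way such a refocusing stack is turned into range estimates.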

  5. Preliminary Pseudo 3-D Imagery of the State Line Fault, Stewart Valley, Nevada Using Seismic Reflection Data

    NASA Astrophysics Data System (ADS)

    Saldaña, S. C.; Snelson, C. M.; Taylor, W. J.; Beachly, M.; Cox, C. M.; Davis, R.; Stropky, M.; Phillips, R.; Robins, C.; Cothrun, C.

    2007-12-01

    The Pahrump Fault system is located in the central Basin and Range region and consists of three main fault zones: the Nopah range front fault zone, the State Line fault zone and the Spring Mountains range fault zone. The State Line fault zone is made up of northwest-trending dextral strike-slip faults that run parallel to the Nevada-California border. Previous geologic and geophysical studies conducted in and around Stewart Valley, located ~90 km from Las Vegas, Nevada, have constrained the location of the State Line fault zone to within a few kilometers. The goals of this project were to use seismic methods to definitively locate the northwesternmost trace of the State Line fault and produce pseudo 3-D seismic cross-sections that can then be used to characterize the subsurface geometry and determine the slip of the State Line fault. During July 2007, four seismic lines were acquired in Stewart Valley: two normal and two parallel to the mapped traces of the State Line fault. Presented here are preliminary results from the two seismic lines acquired normal to the fault. These lines were acquired utilizing a 144-channel geode system with each of the 4.5 Hz vertical geophones set out at 5 m intervals to produce a 595 m long profile to the north and a 715 m long profile to the south. The vibroseis was programmed to produce an 8 s linear sweep from 20-160 Hz. These data returned excellent signal-to-noise and reveal subsurface lithology that will subsequently be used to resolve the subsurface geometry of the State Line fault. This knowledge will then enhance our understanding of the evolution of the State Line fault. Knowing how the State Line fault has evolved gives insight into the stick-slip fault evolution for the region and may improve understanding of how stress has been partitioned from larger strike-slip systems such as the San Andreas fault.

  6. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  7. Detection and delineation of buildings from airborne ladar measurements

    NASA Astrophysics Data System (ADS)

    Swirski, Yoram; Wolowelsky, Karni; Adar, Renen; Figov, Zvi

    2004-11-01

    Automatic delineation of buildings is very attractive for both civilian and military applications. Such applications include general mapping, detection of unauthorized constructions, change detection, etc. For military applications, high demand exists for accurate building-change updates covering large areas over short time periods. We present two algorithms coupled together. The height image algorithm is a fast coarse algorithm operating on large areas. This algorithm is capable of defining blocks of buildings and regions of interest. The point-cloud algorithm is a fine, 3D-based, accurate algorithm for building delineation. Since buildings may be separated by alleys whose width is similar to or narrower than the LADAR resolution, the height image algorithm marks those crowded buildings as a single object. The point-cloud algorithm separates and accurately delineates individual building boundaries and building sub-sections utilizing roof shape analysis in 3D. Our focus is on the ability to cover large areas with accuracy and high rejection of non-building objects, like trees. We report very good detection performance with only a few misses and false alarms. It is believed that LADAR measurements, coupled with good segmentation algorithms, may replace older systems and methods that require considerable manual work for such applications.
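
    The coarse height-image pass can be sketched as thresholding plus connected components on a rasterized ladar height map (a simplified stand-in for the paper's algorithm; all thresholds and the toy raster are invented):

```python
import numpy as np

def building_blocks(height, ground=0.0, min_height=2.5, min_cells=4):
    """Group tall cells of a ladar height image into candidate building blocks.

    Thresholds the height raster and flood-fills 4-connected regions;
    small regions (tree crowns, poles) are rejected by area. Threshold
    values are illustrative only.
    """
    tall = height - ground > min_height
    labels = np.zeros(height.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(tall)):
        if labels[seed]:
            continue
        current += 1
        stack, cells = [seed], []
        while stack:
            y, x = stack.pop()
            if not (0 <= y < tall.shape[0] and 0 <= x < tall.shape[1]):
                continue
            if not tall[y, x] or labels[y, x]:
                continue
            labels[y, x] = current
            cells.append((y, x))
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
        if len(cells) < min_cells:
            for c in cells:
                labels[c] = 0          # reject small, tree-like regions
    return labels

# Toy height raster: a 3x3 building block and one isolated tall cell (tree)
h = np.zeros((8, 8))
h[1:4, 1:4] = 6.0
h[6, 6] = 8.0
labels = building_blocks(h)
```

    The fine point-cloud stage would then split each labeled block into individual buildings via 3D roof-shape analysis.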

  8. Ladar-based IED detection

    NASA Astrophysics Data System (ADS)

    Engström, Philip; Larsson, Håkan; Letalick, Dietmar

    2014-05-01

    An improvised explosive device (IED) is a bomb constructed and deployed in a non-standard manner. Improvised means that the bomb maker used whatever he could get his hands on, making IEDs very hard to predict and detect. Nevertheless, the manner in which IEDs are deployed and used, for example as roadside bombs, follows certain patterns. One possible approach for early warning is to record the surroundings when it is safe and use this as reference data for change detection. In this paper a LADAR-based system for IED detection is presented. The idea is to measure the area in front of the vehicle while driving and compare it to the previously recorded reference data. By detecting new, missing or changed objects, the system can make the driver aware of probable threats.
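
    The change-detection idea — compare the live scan against a previously recorded reference of the same stretch of road — can be sketched in a few lines (a minimal sketch assuming already-registered point clouds; the threshold and toy scene are invented, not the paper's):

```python
import numpy as np

def detect_changes(reference, current, threshold=0.25):
    """Return points in the current scan with no nearby reference point.

    reference, current: (N, 3) point arrays registered into a common frame.
    threshold: distance in metres beyond which a point counts as a change.
    Brute-force distances for clarity; a k-d tree would be used at scale.
    """
    diff = current[:, None, :] - reference[None, :, :]
    nearest = np.min(np.linalg.norm(diff, axis=2), axis=1)
    return current[nearest > threshold]

# Toy scene: a flat road patch as reference; the new scan repeats it with
# sensor noise and adds one raised object on the track.
rng = np.random.default_rng(0)
road = np.column_stack([rng.uniform(0, 10, 500),
                        rng.uniform(-2, 2, 500),
                        np.zeros(500)])
scan = np.vstack([road + rng.normal(0.0, 0.02, road.shape),
                  [[5.0, 0.0, 0.5]]])
changes = detect_changes(road, scan)   # flags only the raised object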

  9. Phase gradient algorithm method for three-dimensional holographic ladar imaging.

    PubMed

    Stafford, Jason W; Duncan, Bradley D; Rabb, David J

    2016-06-10

    Three-dimensional (3D) holographic ladar uses digital holography with frequency diversity to add the ability to resolve targets in range. A key challenge is that since individual frequency samples are not recorded simultaneously, differential phase aberrations may exist between them, making it difficult to achieve range compression. We describe steps specific to this modality so that phase gradient algorithms (PGA) can be applied to 3D holographic ladar data for phase corrections across multiple temporal frequency samples. Substantial improvement of range compression is demonstrated with a laboratory experiment where our modified PGA technique is applied. Additionally, the PGA estimator is demonstrated to be efficient for this application, and the maximum entropy saturation behavior of the estimator is analytically described. PMID:27409018
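
    The core PGA step — estimating a differential phase error from conjugate products of adjacent samples, then integrating it — can be sketched on synthetic 1D data (an illustrative reconstruction, not the authors' implementation; the scatterer and error parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
K = 128                                  # temporal frequency samples
k = np.arange(K)
# A dominant scatterer plus a weaker one (ranges as normalized frequencies)
truth = np.exp(2j * np.pi * 0.23 * k) + 0.1 * np.exp(2j * np.pi * 0.31 * k)
phase_err = np.cumsum(rng.normal(0.0, 0.5, K))  # random-walk error over samples
data = truth * np.exp(1j * phase_err)

# PGA-style correction: estimate the sample-to-sample phase gradient from
# conjugate products of adjacent frequency samples, then integrate it.
grad = np.angle(data[1:] * np.conj(data[:-1]))
est = np.concatenate([[0.0], np.cumsum(grad)])
corrected = data * np.exp(-1j * est)

def peak_to_mean(x):
    """Sharpness of the compressed range profile (FFT over frequency)."""
    mag = np.abs(np.fft.fft(x))
    return mag.max() / mag.mean()
```

    Range compression sharpens markedly after the correction; note that the integrated estimate also absorbs the dominant scatterer's linear phase, so the compressed peak lands in the zero-range bin, a shift the full algorithm must account for.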

  10. Real-time range generation for ladar hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Olson, Eric M.; Coker, Charles F.

    1996-05-01

    Real-time closed-loop simulation of LADAR seekers in a hardware-in-the-loop facility can reduce program risk and cost. This paper discusses an implementation of real-time range imagery generated in a synthetic environment at the Kinetic Kill Vehicle Hardware-in-the-Loop facility at Eglin AFB, for the stimulation of LADAR seekers and algorithms. The computer hardware platform used was a Silicon Graphics Incorporated Onyx Reality Engine. This computer contains graphics hardware and is optimized for generating visible or infrared imagery in real time. A by-product of the rendering process, in the form of a depth buffer, is generated from all objects in view. The depth buffer is an array of integer values that contributes to the proper rendering of overlapping objects and can be converted to range values using a mathematical formula. This paper presents an optimized software approach to generating the scenes, calculating the range values, and outputting the range data for a LADAR seeker.
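
    The depth-buffer-to-range conversion alluded to above can be written out for a standard perspective projection (OpenGL-style conventions assumed here; the facility's exact formula may differ):

```python
def depth_to_range(d, near, far):
    """Convert a normalized depth-buffer value d in [0, 1] to eye-space range.

    A perspective depth buffer stores a nonlinear function of eye-space z;
    this inverts it for the usual projection with clip planes near and far.
    """
    ndc = 2.0 * d - 1.0  # depth-buffer value -> normalized device coordinate
    return (2.0 * near * far) / (far + near - ndc * (far - near))
```

    Sanity checks: d = 0 maps to the near plane and d = 1 to the far plane. Most of the buffer's precision sits close to the sensor, which matters when quantized integer depth values are turned into seeker range samples.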

  11. LADAR scene projector for hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Cornell, Michael C.; Naumann, Charles B.; Stockbridge, Robert G.; Snyder, Donald R.

    2002-07-01

    Future types of direct detection LADAR seekers will employ focal plane arrays in their receivers. Existing LADAR scene projection technology cannot meet the needs of testing these types of seekers in a Hardware-in-the-Loop environment. It is desired that the simulated LADAR return signals generated by the projection hardware be representative of the complex targets and background of a real LADAR image. A LADAR scene projector has been developed that is capable of meeting these demanding test needs. It can project scenes of simulated 2D LADAR return signals without scanning. In addition, each pixel in the projection can be represented by a 'complex' optical waveform, which can be delivered with sub-nanosecond precision. Finally, the modular nature of the projector allows it to be configured to operate at different wavelengths. This paper describes the LADAR Scene Projector and its full capabilities.

  12. A low-power CMOS trans-impedance amplifier for FM/cw ladar imaging system

    NASA Astrophysics Data System (ADS)

    Hu, Kai; Zhao, Yi-qiang; Sheng, Yun; Zhao, Hong-liang; Yu, Hai-xia

    2013-09-01

    A scannerless ladar imaging system based on a unique frequency modulation/continuous wave (FM/cw) technique is able to capture the entire target environment, using a focal plane array to construct a 3D picture of the target. This paper presents a low-power trans-impedance amplifier (TIA) designed and implemented in 0.18 μm CMOS technology, used in the FM/cw imaging ladar with a 64×64 metal-semiconductor-metal (MSM) self-mixing detector array. The input stage of the operational amplifier (op amp) in the TIA is realized with a folded cascode structure to achieve large open-loop gain and low offset. The simulation and test results of the TIA with MSM detectors indicate that the single-ended trans-impedance gain is beyond 100 kΩ, and the -3 dB bandwidth of the op amp is beyond 60 MHz. The input common-mode voltage ranges from 0.2 V to 1.5 V, and the power dissipation is reduced to 1.8 mW with a supply voltage of 3.3 V. The performance test results show that the TIA is a candidate for the preamplifier of the read-out integrated circuit (ROIC) in the FM/cw scannerless ladar imaging system.

  13. Multi-dimensional, non-contact metrology using trilateration and high resolution FMCW ladar.

    PubMed

    Mateo, Ana Baselga; Barber, Zeb W

    2015-07-01

    Here we propose, describe, and provide experimental proof-of-concept demonstrations of a multidimensional, non-contact length-metrology system design based on high resolution (millimeter to sub-100 micron) frequency modulated continuous wave (FMCW) ladar and trilateration based on length measurements from multiple, optical-fiber-connected transmitters. With an accurate FMCW ladar source, the trilateration-based design provides 3D resolution inherently independent of standoff range and allows self-calibration to provide flexible setup of a field system. A proof-of-concept experimental demonstration was performed using a highly stabilized, 2 THz bandwidth chirped laser source, two emitters, and one scanning emitter/receiver providing 1D surface profiles (2D metrology) of diffuse targets. The measured coordinate precision of <200 microns was determined to be limited by laser speckle issues caused by diffuse scattering of the targets. PMID:26193132
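
    The trilateration step can be sketched as a linear least-squares solve (a generic textbook formulation assuming known transmitter positions; not the authors' self-calibration procedure, and the anchor geometry below is invented):

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares target position from anchor positions and ranges.

    anchors: (N, 3) transmitter coordinates; ranges: (N,) measured lengths.
    Subtracting the first equation |x - p_0|^2 = r_0^2 from the others
    linearizes the problem:
        2 (p_i - p_0) . x = r_0^2 - r_i^2 + |p_i|^2 - |p_0|^2
    """
    p = np.asarray(anchors, float)
    r = np.asarray(ranges, float)
    A = 2.0 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Four non-coplanar anchors recover a 3D point exactly in the noise-free case
anchors = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
target = np.array([0.3, 0.4, 0.2])
ranges = np.linalg.norm(anchors - target, axis=1)
estimate = trilaterate(anchors, ranges)
```

    With sub-100-micron FMCW range precision, coordinate error is then dominated by anchor geometry (dilution of precision) rather than by the length measurements themselves.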

  14. Implementing torsional-mode Doppler ladar.

    PubMed

    Fluckiger, David U

    2002-08-20

    Laguerre-Gaussian laser modes carry orbital angular momentum as a consequence of their helical-phase front screw dislocation. This torsional beam structure interacts with rotating targets, changing the orbital angular momentum (azimuthal Doppler) of the scattered beam because angular momentum is a conserved quantity. I show how to measure this change independently from the usual longitudinal momentum (normal Doppler shift) and derive the apropos coherent mixing efficiencies for monostatic, truncated Laguerre and Gaussian-mode ladar antenna patterns. PMID:12206220
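
    As a hedged sketch of the relation the abstract invokes (standard rotational-Doppler results, not the paper's derivation): for a target with line-of-sight velocity v, wavelength λ, spin rate Ω, and a change Δℓ in the orbital angular momentum index of the scattered field,

```latex
% Longitudinal (translational) Doppler shift:
\Delta f_{\parallel} = \frac{2v}{\lambda}
% Azimuthal (rotational) Doppler shift from the OAM change \Delta\ell:
\Delta f_{\perp} = \frac{\Delta\ell\,\Omega}{2\pi}
```

    Because the two shifts scale differently (one with wavelength, one with the mode index), coherent mixing against reference modes of different ℓ can separate them, which is the role of the mixing efficiencies derived in the paper.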

  15. New High-Resolution 3D Imagery of Fault Deformation and Segmentation of the San Onofre and San Mateo Trends in the Inner California Borderlands

    NASA Astrophysics Data System (ADS)

    Holmes, J. J.; Driscoll, N. W.; Kent, G. M.; Bormann, J. M.; Harding, A. J.

    2015-12-01

    The Inner California Borderlands (ICB) is situated off the coast of southern California and northern Baja. The structural and geomorphic characteristics of the area record a middle Oligocene transition from subduction to microplate capture along the California coast. Marine stratigraphic evidence shows large-scale extension and rotation overprinted by modern strike-slip deformation. Geodetic and geologic observations indicate that approximately 6-8 mm/yr of Pacific-North American relative plate motion is accommodated by offshore strike-slip faulting in the ICB. The farthest inshore fault system, the Newport-Inglewood Rose Canyon (NIRC) fault complex, is a dextral strike-slip system that extends primarily offshore approximately 120 km from San Diego to the San Joaquin Hills near Newport Beach, California. Based on trenching and well data, the NIRC fault system's Holocene slip rate is 1.5-2.0 mm/yr to the south and 0.5-1.0 mm/yr along its northern extent. An earthquake rupturing the entire length of the system could produce an Mw 7.0 earthquake or larger. West of the main segments of the NIRC fault complex are the San Mateo and San Onofre fault trends along the continental slope. Previous work concluded that these were part of a strike-slip system that eventually merged with the NIRC complex. Others have interpreted these trends as deformation associated with the Oceanside Blind Thrust fault purported to underlie most of the region. In late 2013, we acquired the first high-resolution 3D P-Cable seismic surveys (3.125 m bin resolution) of the San Mateo and San Onofre trends as part of the Southern California Regional Fault Mapping project aboard the R/V New Horizon. Analysis of these volumes provides important new insights and constraints on the fault segmentation and transfer of deformation. Based on the new 3D sparker seismic data, our preferred interpretation for the San Mateo and San Onofre fault trends is that they are transpressional features associated with westward

  16. Ladar-based terrain cover classification

    NASA Astrophysics Data System (ADS)

    Macedo, Jose; Manduchi, Roberto; Matthies, Larry H.

    2001-09-01

    An autonomous vehicle driving in a densely vegetated environment needs to be able to discriminate between obstacles (such as rocks) and penetrable vegetation (such as tall grass). We propose a technique for terrain cover classification based on the statistical analysis of the range data produced by a single-axis laser rangefinder (ladar). We first present theoretical models for the range distribution in the presence of homogeneously distributed grass and of obstacles partially occluded by grass. We then validate our results with real-world cases, and propose a simple algorithm to robustly discriminate between vegetation and obstacles based on the local statistical analysis of the range data.
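
    The local statistical analysis can be sketched as a windowed range-spread test (a toy version with invented window size, threshold, and scene; not the paper's statistical models):

```python
import numpy as np

def classify_scanline(ranges, window=9, spread_thresh=0.15):
    """Label each return 'vegetation' or 'obstacle' from local range spread.

    Penetrable grass scatters returns in depth inside a small window, while
    a solid surface gives a tight range distribution. The window size and
    threshold here are illustrative, not the paper's values.
    """
    half = window // 2
    labels = []
    for i in range(len(ranges)):
        lo, hi = max(0, i - half), min(len(ranges), i + half + 1)
        labels.append('vegetation' if np.std(ranges[lo:hi]) > spread_thresh
                      else 'obstacle')
    return labels

rng = np.random.default_rng(2)
grass = 5.0 + rng.uniform(-1.0, 1.0, 40)   # returns scattered through grass
rock = 5.0 + rng.normal(0.0, 0.01, 40)     # tight returns from a solid face
labels = classify_scanline(np.concatenate([grass, rock]))
```

    The paper's contribution is the principled version of this test: theoretical range distributions for homogeneous grass and for partially occluded obstacles, against which the local statistics are compared.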

  17. Foliage discrimination using a rotating ladar

    NASA Technical Reports Server (NTRS)

    Castano, A.; Matthies, L.

    2003-01-01

    We present a real-time algorithm that detects foliage using range from a rotating laser. Objects not classified as foliage are conservatively labeled as non-drivable obstacles. In contrast to related work that uses range statistics to classify objects, we exploit the expected localities and continuities of an obstacle, in both space and time. Also, instead of attempting to find a single accurate discriminating factor for every ladar return, we hypothesize the class of a few returns and then spread the confidence (and classification) to other returns using the locality constraints. The Urbie robot is presently using this algorithm to discriminate drivable grass from obstacles during outdoor autonomous navigation tasks.

  18. AMCOM RDEC ladar HWIL simulation system development

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Mobley, Scottie B.; Buford, James A., Jr.

    2003-09-01

    Hardware-in-the-loop (HWIL) testing has, for many years, been an integral part of the modeling and simulation efforts at the U.S. Army Aviation and Missile Command's (AMCOM) Aviation and Missile Research, Engineering, and Development Center (AMRDEC). AMCOM's history includes the development, characterization, and implementation of several unique technologies for the creation of synthetic environments in the visible, infrared, and radio frequency spectral regions and AMCOM has continued significant efforts in these areas. This paper describes recent advancements at AMCOM's Advanced Simulation Center (ASC) and concentrates on Ladar HWIL simulation system development.

  19. New High-Resolution 3D Seismic Imagery of Deformation and Fault Architecture Along Newport-Inglewood/Rose Canyon Fault in the Inner California Borderlands

    NASA Astrophysics Data System (ADS)

    Holmes, J. J.; Bormann, J. M.; Driscoll, N. W.; Kent, G.; Harding, A. J.; Wesnousky, S. G.

    2014-12-01

    The tectonic deformation and geomorphology of the Inner California Borderlands (ICB) records the transition from a convergent plate margin to a predominantly dextral strike-slip system. Geodetic measurements of plate boundary deformation onshore indicate that approximately 15%, or 6-8 mm/yr, of the total Pacific-North American relative plate motion is accommodated by faults offshore. The largest near-shore fault system, the Newport-Inglewood/Rose Canyon (NI/RC) fault complex, has a Holocene slip rate estimate of 1.5-2.0 mm/yr, according to onshore trenching, and current models suggest the potential to produce an Mw 7.0+ earthquake. The fault zone extends approximately 120 km, initiating from the south near downtown San Diego and striking northwards with a constraining bend north of Mt. Soledad in La Jolla and continuing northwestward along the continental shelf, eventually stepping onshore at Newport Beach, California. In late 2013, we completed the first high-resolution 3D seismic survey (3.125 m bins) of the NI/RC fault offshore of San Onofre as part of the Southern California Regional Fault Mapping project. We present new constraints on fault geometry and segmentation of the fault system that may play a role in limiting the extent of future earthquake ruptures. In addition, slip rate estimates using piercing points such as offset channels will be explored. These new observations will allow us to investigate recent deformation and strain transfer along the NI/RC fault system.

  20. Measurement of liquid level using ladar

    NASA Astrophysics Data System (ADS)

    Qi, Bing; Peng, Wei; Lin, Junxiu; Ding, Jianhua

    1996-09-01

    In this paper, a new method of liquid-level measurement using discrete-frequency IMCW ladar and optical fiber is described. The distance measurement is effectively made using the absolute technique of ladar, in which the phase of an amplitude-modulated light wave reflected from the liquid level is compared with that of the original modulation signal. To compensate for phase drift due to changes in the delay time of the electronic circuit, a symmetric optical fiber is used as a reference path. The signal path and the reference path are measured in turn, and the difference between the two paths is proportional to the distance from the sensor head to the surface of the liquid. The optical unit is installed at a fixed reference point above the surface of the liquid, and it connects to the electronic unit by optical fibers. The main attributes of this system are that it neither requires electrical supplies nor produces electrical signals in situ. Because it is intrinsically safe, it can be used in the oil industry. According to the test results, the accuracy of this system is better than 0.2%.
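
    The phase-comparison principle can be written out directly (the generic amplitude-modulated CW relation with illustrative numbers, not the system's calibrated formula):

```python
import math

def phase_to_distance(delta_phi, f_mod, c=299_792_458.0):
    """Distance from the phase lag of an amplitude-modulated ladar return.

    The round trip delays the modulation envelope by
    delta_phi = 4 * pi * f_mod * d / c radians, so
    d = c * delta_phi / (4 * pi * f_mod). The result is unambiguous only
    while delta_phi < 2 * pi, i.e. d < c / (2 * f_mod).
    """
    return c * delta_phi / (4.0 * math.pi * f_mod)
```

    For example, at 10 MHz modulation a phase lag of pi corresponds to roughly 7.5 m, half of the ~15 m unambiguous interval; the reference fiber path removes the electronic contribution to delta_phi before this conversion.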

  1. LADAR vision technology for automated rendezvous and capture

    NASA Technical Reports Server (NTRS)

    Frey, Randy W.

    1991-01-01

    LADAR Vision Technology at Autonomous Technologies Corporation consists of two sensor/processing technology elements: high performance long range multifunction coherent Doppler laser radar (LADAR) technology; and short range integrated CCD camera with direct detection laser ranging sensors. Algorithms and specific signal processing implementations have been simulated for both sensor/processing approaches to position and attitude tracking applicable to AR&C. Experimental data supporting certain sensor measurement accuracies have been generated.

  2. Multiple-input multiple-output 3D imaging laser radar

    NASA Astrophysics Data System (ADS)

    Liu, Chunbo; Wu, Chao; Han, Xiang'e.

    2015-10-01

    A 3D (angle-angle-range) imaging laser radar (LADAR) based on a multiple-input multiple-output structure is proposed. In the LADAR, multiple coherent beams are randomly phased to form the structured light field, and an APD array detector is utilized to receive the echoes from the target. The sampled signals from each element of the APD array are correlated with the reference light to reconstruct local 3D images of the target. The 3D panorama of the target can be obtained by stitching the local images of all the elements. The system composition is described first, then the operation principle is presented, and numerical simulations are provided to show the validity of the proposed scheme.

  3. Metrological characterization of 3D imaging devices

    NASA Astrophysics Data System (ADS)

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways, owing to the lack of internationally recognized standard requirements for metrological parameters that identify the capability of capturing a real scene. For this reason several national and international organizations have, over the last ten years, been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, which also works on laser systems based on direct range detection (TOF, phase shift, FM-CW, flash LADAR), this paper shows the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified to a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be analyzed directly, or derived parameters can be obtained (e.g. angles between planes, distances between barycenters of rigidly connected spheres, frequency-domain parameters, etc.). This paper shows theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both principles of triangulation and direct range detection.
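
    A typical derived check of this kind — fitting a certified sphere to the captured points and comparing center and radius — can be sketched with a linear least-squares fit (a generic method, with invented object size; not a specific protocol's procedure):

```python
import numpy as np

def fit_sphere(pts):
    """Linear least-squares sphere fit; returns (center, radius).

    Uses |p|^2 = 2 c . p + (R^2 - |c|^2), which is linear in the unknowns
    (c, R^2 - |c|^2), so no iterative optimization is needed.
    """
    p = np.asarray(pts, float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = np.sum(p ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic noise-free scan of a certified sphere: center (1, 2, 3), R = 0.5
rng = np.random.default_rng(3)
u = rng.normal(size=(500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 0.5 * u
center, radius = fit_sphere(pts)
```

    On real scans, the residuals of such fits (and quantities like the distance between two fitted sphere centers) feed directly into the uncertainty and accuracy figures the protocols report.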

  4. AMRDEC's HWIL synthetic environment development efforts for LADAR sensors

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Cornell, Michael C.; Naumann, Charles B.

    2004-08-01

    Hardware-in-the-loop (HWIL) testing has been an integral part of the modeling and simulation efforts at the U.S. Army Aviation and Missile Research, Engineering, and Development Center (AMRDEC). AMRDEC's history includes the development and implementation of several unique technologies for producing synthetic environments in the visible, infrared, MMW and RF regions. With emerging sensor/electronics technology, LADAR sensors are becoming a more viable option as an integral part of weapon systems, and AMRDEC has been expending effort to develop capabilities for testing LADAR sensors in a HWIL environment. There are several areas of challenge in LADAR HWIL testing, since the simulation requirements for the electronics and computation are stressing combinations of passive-image and active-sensor HWIL testing. Advancements have been made in several key areas to address the challenges of developing a synthetic environment for LADAR sensor testing. In this paper, we present the latest results from the LADAR projector development and test efforts at AMRDEC's Advanced Simulation Center (ASC).

  5. Monostatic all-fiber scanning LADAR system.

    PubMed

    Leach, Jeffrey H; Chinn, Stephen R; Goldberg, Lew

    2015-11-20

    A compact scanning LADAR system based on a fiber-coupled, monostatic configuration which transmits (TX) and receives (RX) through the same aperture has been developed. A small piezo-electric stripe actuator was used to resonantly vibrate a fiber cantilever tip and scan the transmitted near-single-mode optical beam and the cladding mode receiving aperture. When compared to conventional bi-static systems with polygon, galvo, or Risley-prism beam scanners, the described system offers several advantages: the inherent alignment of the receiver field-of-view (FOV) relative to the TX beam angle, small size and weight, and power efficiency. Optical alignment of the system was maintained at all ranges since there is no parallax between the TX beam and the receiver FOV. A position-sensing detector (PSD) was used to sense the instantaneous fiber tip position. The Si PSD operated in a two-photon absorption mode to detect the transmitted 1.5 μm pulses. The prototype system collected 50,000 points per second with a 6° full scan angle and a 27 mm clear aperture/40 mm focal length TX/RX lens, had a range precision of 4.7 mm, and was operated at a maximum range of 26 m. PMID:26836533

  6. DVE flight test results of a sensor enhanced 3D conformal pilot support system

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Völschow, Philipp; Singer, Bernhard; Strobel, Michael; Kramper, Patrick

    2015-06-01

    The paper presents results and findings of flight tests of the Airbus Defence and Space DVE system SFERION performed at Yuma Proving Grounds. During the flight tests, ladar information was fused with a priori database knowledge in real time, and 3D conformal symbology was generated for display on an HMD. The test flights included low-level flights as well as numerous brownout landings.

  7. Remote sensing solution using 3-D flash LADAR for automated control of aircraft

    NASA Astrophysics Data System (ADS)

    Neff, Brian J.; Fuka, Jennifer A.; Burwell, Alan C.; Gray, Stephen W.; Hubbard, Mason J.; Schenkel, Joseph W.

    2015-09-01

    The majority of image quality studies in the field of remote sensing have been performed on systems with conventional aperture functions. These systems have well-understood image quality tradeoffs, characterized by the General Image Quality Equation (GIQE). Advanced, next-generation imaging systems present challenges to both post-processing and image quality prediction. Examples include sparse apertures, synthetic apertures, coded apertures and phase elements. As a result of the non-conventional point spread functions of these systems, post-processing becomes a critical step in the imaging process and artifacts arise that are more complicated than simple edge overshoot. Previous research at the Rochester Institute of Technology's Digital Imaging and Remote Sensing Laboratory has resulted in a modeling methodology for sparse and segmented aperture systems, the validation of which will be the focus of this work. This methodology has predicted some unique post-processing artifacts that arise when a sparse aperture system with wavefront error is used over a large (panchromatic) spectral bandpass. Since these artifacts are unique to sparse aperture systems, they have not yet been observed in any real-world data. In this work, a laboratory setup and initial results for a model validation study will be described. Initial results will focus on the validation of spatial frequency response predictions and verification of post-processing artifacts. The goal of this study is to validate the artifact and spatial frequency response predictions of this model. This will allow model predictions to be used in image quality studies, such as aperture design optimization, and the signal-to-noise vs. post-processing artifact tradeoff resulting from choosing a panchromatic vs. multispectral system.

  8. ALLFlight: detection of moving objects in IR and ladar images

    NASA Astrophysics Data System (ADS)

    Doehler, H.-U.; Peinecke, Niklas; Lueken, Thomas; Schmerwitz, Sven

    2013-05-01

    Supporting a helicopter pilot during landing and takeoff in a degraded visual environment (DVE) is one of the challenges within DLR's project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). Different types of sensors (TV, infrared, mmW radar and laser radar) are mounted on DLR's research helicopter FHS (flying helicopter simulator) to gather different sensor data of the surrounding world. A high-performance computer cluster architecture acquires and fuses all the information into a single comprehensive description of the outside situation. While both TV and IR cameras deliver images at frame rates of 25 Hz or 30 Hz, the ladar and mmW radar provide georeferenced sensor data at only 2 Hz or even less. It therefore takes several seconds to detect, let alone track, potential moving obstacle candidates in mmW or ladar sequences. Especially when the helicopter is flying at higher speed, it is very important to minimize the detection time of obstacles in order to initiate re-planning of the helicopter's mission in a timely manner. Applying feature extraction algorithms to IR images, combined with fusion of the extracted features with ladar data, can decrease the detection time appreciably. Based on real data from flight tests, the paper describes the applied feature extraction methods for moving object detection, as well as data fusion techniques for combining features from TV/IR and ladar data.

  9. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    Based on the characteristics of coherent ladar range images and the framework of nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are identified by block matching and grouped together. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimate of the current pixel. Simulated coherent-ladar range images with different carrier-to-noise ratios and a real coherent-ladar range image with 8 gray levels are denoised by this algorithm, and the results are compared with those of the median filter, multi-template order mean filter, NLM, the median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range-anomaly noise and Gaussian noise in coherent-ladar range images are effectively suppressed by NLPS.
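
    The maximum-probability estimate that distinguishes NLPS from NLM can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the patch size, search window, and number of pooled blocks are assumed values.

```python
import numpy as np

def nlps_denoise(img, patch=3, search=7, n_similar=16):
    """Sketch of a nonlocal probability-statistics (NLPS) style denoiser.

    For each pixel, gather the most similar patches inside a search
    window, pool their center gray values, and replace the pixel with
    the value of maximum empirical probability (the mode), rather than
    the weighted mean used by classical NLM.
    """
    h = patch // 2
    pad = h + search // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = padded[ci - h:ci + h + 1, cj - h:cj + h + 1]
            dists, centers = [], []
            for di in range(-(search // 2), search // 2 + 1):
                for dj in range(-(search // 2), search // 2 + 1):
                    blk = padded[ci + di - h:ci + di + h + 1,
                                 cj + dj - h:cj + dj + h + 1]
                    dists.append(np.sum((blk - ref) ** 2))
                    centers.append(padded[ci + di, cj + dj])
            # keep the n_similar best-matching blocks and take the mode
            idx = np.argsort(dists)[:n_similar]
            vals, counts = np.unique(np.array(centers)[idx], return_counts=True)
            out[i, j] = vals[np.argmax(counts)]
    return out
```

    Replacing the mode with a weighted mean of the pooled centers would recover an NLM-style estimate; taking the mode instead is what lets an isolated range-anomaly outlier be rejected outright rather than merely attenuated.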

  10. Anomaly detection in clutter using spectrally enhanced LADAR

    NASA Astrophysics Data System (ADS)

    Chhabra, Puneet S.; Wallace, Andrew M.; Hopgood, James R.

    2015-05-01

    Discrete return (DR) Laser Detection and Ranging (Ladar) systems provide a series of echoes reflected from objects in a scene. These can be first, last or multi-echo returns. In contrast, Full-Waveform (FW) Ladar systems measure the intensity of light reflected from objects continuously over a period of time. In a camouflaged scenario, e.g., objects hidden behind dense foliage, a FW-Ladar penetrates the foliage and returns a sequence of echoes, including buried faint echoes. The aim of this paper is to learn local patterns of co-occurring echoes characterised by their measured spectra. A deviation from such patterns defines an abnormal event in a forest/tree depth profile. As far as the authors know, neither DR nor FW-Ladar, combined with several spectral measurements, has been applied to anomaly detection. This work presents an algorithm that detects spectral and temporal anomalies in FW Multi-Spectral Ladar (FW-MSL) data samples. An anomaly is defined as a full-waveform temporal and spectral signature that does not conform to a prior expectation, represented using a learnt subspace (dictionary) and a set of coefficients that capture co-occurring local patterns over an overlapping temporal window. A modified optimization scheme is proposed for subspace learning based on stochastic approximations. The objective function is augmented with a discriminative term that represents the subspace's separability properties and supports anomaly characterisation. The algorithm detects several man-made objects and anomalous spectra hidden in dense vegetation clutter, and also allows tree species classification.

  11. Interactive photogrammetric system for mapping 3D objects

    NASA Astrophysics Data System (ADS)

    Knopp, Dave E.

    1990-08-01

    A new system, FOTO-G, has been developed for 3D photogrammetric applications. It is a production-oriented software system designed to work with highly unconventional photogrammetric image configurations which result when photographing 3D objects. A demonstration with imagery from an actual 3D-mapping project is reported.

  12. A Λ-type soft-aperture LADAR SNR improvement with quantum-enhanced receiver

    NASA Astrophysics Data System (ADS)

    Yang, Song; Ruan, Ningjuan; Lin, Xuling; Wu, Zhiqiang

    2015-08-01

    A quantum-enhanced receiver that uses squeezed vacuum injection (SVI) and phase-sensitive amplification (PSA) is in principle capable of an effective signal-to-noise ratio (SNR) improvement, in a soft-aperture homodyne-detection LAser Detection And Ranging (LADAR) system imaging a far-away target, over the classical homodyne LADAR. Here we investigate the performance of the quantum-enhanced receiver in a Λ-type soft-aperture LADAR for target imaging. We also use a fast Fourier transform (FFT) algorithm to simulate the LADAR intensity image, and compare the SNR improvement in the soft-aperture and hard-aperture cases.

  13. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface-vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, significantly increasing the execution of science activities. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis, instead of the weeks or months that were the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and to achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The toolset currently includes a tool for selecting a point of interest and a ruler tool for displaying the distance between, and positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  14. Optical design of a synthetic aperture ladar antenna system

    NASA Astrophysics Data System (ADS)

    Cao, Changqing; Zeng, Xiaodong; Zhao, Xiaoyan; Liu, Huanhuan; Man, Xiangkun

    2008-03-01

    The spatial resolution of a conventional imaging LADAR system is constrained by the diffraction limit of the telescope aperture. The purpose of this work is to investigate Synthetic Aperture Imaging LADAR (SAIL), which employs aperture synthesis with coherent laser radar to overcome the diffraction limit and achieve fine-resolution, long-range, two-dimensional imaging with modest aperture diameters. The key techniques required by Synthetic Aperture LADAR (SAL) are analyzed briefly. The preliminary design of the optical antenna is also introduced in this paper. We investigate the design method, and the associated problems, of the efficient optical antenna required in SAL. The design follows the same approach used at microwave frequencies, based on numerical analysis and on the error values achievable with present manufacturing technology. To meet the SAL requirements of small size, low mass, low cost and high image quality, the resulting design, obtained with ZEMAX, is presented.

  15. Ladar System Identifies Obstacles Partly Hidden by Grass

    NASA Technical Reports Server (NTRS)

    Castano, Andres

    2003-01-01

    A ladar-based system now undergoing development is intended to enable an autonomous mobile robot in an outdoor environment to avoid moving toward trees, large rocks, and other obstacles that are partly hidden by tall grass. The design of the system incorporates the assumption that the robot is capable of moving through grass, and provides for discrimination between grass and obstacles on the basis of geometric properties extracted from ladar readings as described below. The system (see figure) includes a ladar system that projects a range-measuring pulsed laser beam with a small angular width of β radians and is capable of measuring distances of reflective objects from a minimum of dmin to a maximum of dmax. The system is equipped with a rotating mirror that scans the beam through a relatively wide angular range Θ in a horizontal plane at a suitable small height above the ground. Successive scans are performed at time intervals of T seconds. During each scan, the laser beam is fired at relatively small angular intervals of Δθ radians to make range measurements, so that the total number of range measurements acquired in a scan is Ne = Θ/Δθ.
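
    The per-scan measurement count follows directly from the scan geometry; the numbers below are hypothetical, chosen only to illustrate Ne = Θ/Δθ.

```python
import math

# Hypothetical scan parameters (not from the article): a beam stepped
# at 1 mrad intervals across a 180-degree horizontal scan.
theta_scan = math.pi          # total scan range Θ (rad)
delta_theta = 1e-3            # firing interval Δθ (rad)

n_measurements = int(theta_scan / delta_theta)  # Ne = Θ / Δθ
print(n_measurements)  # 3141 range samples per scan
```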

  16. Target recognition for ladar range image using slice image

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Wang, Liang

    2015-12-01

    A shape descriptor and a complete shape-based recognition system that uses slice images as the geometric feature descriptor for ladar range images are introduced. A slice image is a two-dimensional image generated by a three-dimensional Hough transform and a corresponding mathematical transformation. The system consists of two processes: model library construction and recognition. In the model library construction process, a series of range images is obtained by sampling the model object at preset attitude angles. All the range images are then converted into slice images, and their number is reduced by cluster analysis and the selection of representatives, to limit the size of the model library. In the recognition process, the slice image of the scene is compared with the slice images in the model library, and the recognition result is determined by this comparison. Simulated ladar range images are used to analyze the recognition and misjudgment rates, and the slice-image representation is compared with a moment-invariants representation. The experimental results show that, both in noise-free conditions and with ladar noise, the system has a high recognition rate and a low misjudgment rate. The comparison experiment demonstrates that the slice image has better representation ability than moment invariants.

  17. A 3-D Look at Post-Tropical Cyclone Hermine

    NASA Video Gallery

    This 3-D flyby animation of GPM imagery shows Post-Tropical Storm Hermine on Sept. 6. Rain was falling at a rate of over 1.1 inches (27 mm) per hour between the Atlantic coast and Hermine's center ...

  18. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  19. Sensor based 3D conformal cueing for safe and reliable HC operation specifically for landing in DVE

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Kress, Martin; Klasen, Stephanus

    2013-05-01

    The paper describes the approach of a sensor-based landing aid for helicopters in degraded visual conditions. The system concept employs a long-range, high-resolution ladar sensor to identify obstacles in the flight and approach paths, and to measure landing site conditions such as slope, roughness, and precise position relative to the helicopter during the final approach. All these measurements are visualized to the pilot. Cueing is done by 3D conformal symbology displayed in a head-tracked HMD, enhanced by 2D symbols for data that is perceived more easily from 2D symbols than from 3D cues. All 3D conformal symbology is placed on the measured landing site surface, which is further visualized by a grid structure displaying landing site slope, roughness, and small obstacles. Due to the limited resolution of the employed HMD, a specific scheme of blending in the information during the approach is used. The interplay of in-flight and in-approach obstacle warning and CFIT warning symbology with this landing aid symbology is also investigated, and evaluated as an example for the NH90 helicopter, which already implements obstacle warning and CFIT symbology based on a long-range, high-resolution ladar sensor. The paper further describes the results of simulator and flight tests performed with this system, employing a ladar sensor and a head-tracked, head-mounted display system. The simulator trials used a full model of the ladar sensor producing 3D measurement points, running the same algorithms used in the flight tests.

  20. Ladar scene projector for a hardware-in-the-loop simulation system.

    PubMed

    Xu, Rui; Wang, Xin; Tian, Yi; Li, Zhuo

    2016-07-20

    In order to test a direct-detection ladar in a hardware-in-the-loop simulation system, a ladar scene projector is proposed. A model based on the ladar range equation is developed to calculate the profile of the ladar return signal. The influences of both the atmosphere and the target's surface properties are considered. The insertion delays of different channels of the ladar scene projector are investigated and compensated for. A target range image with 108 pixels is generated. The simulation range is from 0 to 15 km, the range resolution is 1.04 m, the range error is 1.28 cm, and the peak-valley error for different channels is 15 cm. PMID:27463932
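
    The return-signal model is built on the ladar range equation. A minimal sketch for an extended Lambertian target, with an assumed combined optics/detector efficiency term, might look like the following; the article's projector model, including its handling of target surface properties, is more detailed.

```python
import math

def ladar_return_power(P_t, rho, A_r, R, alpha, eta=0.8):
    """Peak received power from an extended Lambertian target, using a
    common direct-detection ladar range-equation form (an illustrative
    sketch; the article's projector model is not reproduced here).

    P_t   transmitted peak power (W)
    rho   target reflectivity
    A_r   receiver aperture area (m^2)
    R     range (m)
    alpha one-way atmospheric extinction coefficient (1/m)
    eta   combined optics/detector efficiency (assumed value)
    """
    T_atm = math.exp(-alpha * R)          # one-way atmospheric transmission
    return P_t * rho * eta * A_r * T_atm ** 2 / (math.pi * R ** 2)
```

    In clear air (alpha = 0) the model reduces to the familiar 1/R² falloff: doubling the range quarters the received power, which is the dominant effect a projector must reproduce across a 0 to 15 km simulation range.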

  1. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and the loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real time, and improvements such as high-resolution imagery and true 3-dimensional capability. This paper discusses the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  2. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States, and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared with that of the 1990s, the holographic concept is spreading through all the scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not-seeing interfere? What else must be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? For what purpose? At what level? In which subjects? For whom?

  3. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  4. Range resolution improvement of eyesafe ladar testbed (ELT) measurements using sparse signal deconvolution

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Gunther, Jacob H.

    2014-06-01

    The Eyesafe Ladar Test-bed (ELT) is an experimental ladar system with the capability of digitizing return laser pulse waveforms at 2 GHz. These waveforms can then be exploited off-line in the laboratory to develop signal processing techniques for noise reduction, range resolution improvement, and range discrimination between two surfaces of similar range interrogated by a single laser pulse. This paper presents the results of experiments with new deconvolution algorithms with the hoped-for gains of improving the range discrimination of the ladar system. The sparsity of ladar returns is exploited to solve the deconvolution problem in two steps. The first step is to estimate a point target response using a database of measured calibration data. This basic target response is used to construct a dictionary of target responses with different delays/ranges. Using this dictionary ladar returns from a wide variety of surface configurations can be synthesized by taking linear combinations. A sparse linear combination matches the physical reality that ladar returns consist of the overlapping of only a few pulses. The dictionary construction process is a pre-processing step that is performed only once. The deconvolution step is performed by minimizing the error between the measured ladar return and the dictionary model while constraining the coefficient vector to be sparse. Other constraints such as the non-negativity of the coefficients are also applied. The results of the proposed technique are presented in the paper and are shown to compare favorably with previously investigated deconvolution techniques.
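
    The two-step scheme described above — build a dictionary of delayed copies of the calibrated pulse, then fit a sparse combination to the measured return — can be sketched as follows. This uses a greedy, matching-pursuit-style fit as a stand-in for the paper's solver; the pulse shape and sparsity level are illustrative.

```python
import numpy as np

def shifted_dictionary(pulse, n):
    """Columns are the calibration pulse delayed to each sample position."""
    D = np.zeros((n, n))
    for k in range(n):
        m = min(len(pulse), n - k)
        D[k:k + m, k] = pulse[:m]
    return D

def sparse_deconvolve(y, D, n_pulses=2):
    """Greedy sparse fit: model the return waveform as a combination of a
    few delayed pulses, adding the best-correlated column each pass."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(n_pulses):
        support.append(int(np.argmax(D.T @ residual)))
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        residual = y - Ds @ coef
    return support, coef
```

    Two overlapping surface returns then separate into two dictionary coefficients at distinct delays, which is exactly the range-discrimination task the ELT waveforms are used to study.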

  5. Self-mixing detector candidates for an FM/cw ladar architecture

    NASA Astrophysics Data System (ADS)

    Ruff, William C.; Bruno, John D.; Kennerly, Stephen W.; Ritter, Ken; Shen, Paul H.; Stann, Barry L.; Stead, Michael R.; Sztankay, Zoltan G.; Tobin, Mary S.

    2000-09-01

    The U.S. Army Research Laboratory (ARL) is currently investigating unique self-mixing detectors for ladar systems. These detectors have the ability to internally detect and down-convert light signals that are amplitude modulated at ultra-high frequencies (UHF). ARL is also investigating a ladar architecture based on FM/cw radar principles, whereby the range information is contained in the low-frequency mixing product derived by mixing a reference UHF chirp with a detected, time-delayed UHF chirp. When inserted into the ARL FM/cw ladar architecture, the self-mixing detector eliminates the need for wide band transimpedance amplifiers in the ladar receiver because the UHF mixing is done internal to the detector, thereby reducing both the cost and complexity of the system and enhancing its range capability. This fits well with ARL's goal of developing low-cost, high-speed line array ladars for submunition applications and extremely low-cost, single pixel ladars for ranging applications. Several candidate detectors have been investigated for this application, with metal-semiconductor-metal (MSM) detectors showing the most promise. This paper discusses the requirements for a self-mixing detector, characterization measurements from several candidate detectors and experimental results from their insertion in a laboratory FM/cw ladar.

  6. Status report on next-generation LADAR for driving unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Juberts, Maris; Barbera, Anthony J.

    2004-12-01

    The U.S. Department of Defense has initiated plans for the deployment of autonomous robotic vehicles in various tactical military operations starting in about seven years. Most of these missions will require the vehicles to drive autonomously over open terrain and on roads that may contain traffic, obstacles, military personnel, and pedestrians. Unmanned Ground Vehicles (UGVs) must therefore be able to detect, recognize and track objects and terrain features in very cluttered environments. Although several LADAR sensors exist today that have been successfully implemented and demonstrated to provide somewhat reliable obstacle detection and can be used for path planning and selection, they tend to be limited in performance, are affected by obscurants, and are quite large and expensive. In addition, even though considerable effort and funding has been provided by the DOD R&D community, nearly all of the development has been for target detection (ATR) and tracking from various flying platforms. Participation in the Army- and DARPA-sponsored UGV programs has helped NIST to identify requirement specifications for LADAR to be used for on- and off-road autonomous driving. This paper describes the expected requirements for a next-generation LADAR for driving UGVs and presents an overview of proposed LADAR design concepts and a status report on current developments in scannerless Focal Plane Array (FPA) LADAR and advanced scanning LADAR, which may be able to achieve the stated requirements. Examples of real-time range images taken with existing LADAR prototypes will be presented.

  7. Use of 3D laser radar for navigation of unmanned aerial and ground vehicles in urban and indoor environments

    NASA Astrophysics Data System (ADS)

    Uijt de Haag, Maarten; Venable, Don; Smearcheck, Mark

    2007-04-01

    This paper discusses the integration of inertial measurements with measurements from a three-dimensional (3D) imaging sensor for position and attitude determination of unmanned aerial vehicles (UAVs) and autonomous ground vehicles (AGVs) in urban or indoor environments. To enable operation of UAVs and AGVs at any time in any environment, a Precision Navigation, Attitude, and Time (PNAT) capability is required that is robust and not solely dependent on the Global Positioning System (GPS). In urban and indoor environments a GPS position capability may be unavailable not only due to shadowing, significant signal attenuation or multipath, but also due to intentional denial or deception. Although deep integration of GPS and Inertial Measurement Unit (IMU) data may prove to be a viable solution, an alternative method is discussed in this paper. The alternative solution is based on 3D imaging sensor technologies such as Flash Ladar (Laser Radar). Flash Ladar technology consists of a modulated laser emitter coupled with a focal plane array detector and the required optics. Like a conventional camera this sensor creates an "image" of the environment, but instead of a 2D image in which each pixel has an associated intensity value, the Flash Ladar generates an image in which each pixel has an associated range and intensity value. Integration of Flash Ladar with the attitude from the IMU allows creation of a 3D scene. Current low-cost Flash Ladar technology is capable of greater than 100 x 100 pixel resolution with 5 mm depth resolution at a 30 Hz frame rate. The proposed algorithm first converts the 3D imaging sensor measurements to a 3D point cloud; next, significant environmental features such as planar features (walls), line features or point features (corners) are extracted and associated from one 3D imaging sensor frame to the next. Finally, characteristics of these features, such as their normal or direction vectors, are used to compute the platform position and attitude.
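
    The first two steps of the algorithm — back-projecting the range image into a point cloud and extracting planar-feature normals — might be sketched as follows, assuming a pinhole-style projection with evenly spaced pixel angles (the actual sensor model may differ):

```python
import numpy as np

def range_image_to_points(rng, fov_x, fov_y):
    """Back-project a flash-ladar range image into a 3D point cloud
    (pinhole-style model; the sensor's true projection may differ)."""
    h, w = rng.shape
    ax = np.linspace(-fov_x / 2, fov_x / 2, w)   # per-column ray angles
    ay = np.linspace(-fov_y / 2, fov_y / 2, h)   # per-row ray angles
    tx, ty = np.meshgrid(np.tan(ax), np.tan(ay))
    d = np.sqrt(1 + tx**2 + ty**2)               # unit-depth ray length
    z = rng / d                                  # depth along boresight
    return np.stack([tx * z, ty * z, z], axis=-1).reshape(-1, 3)

def plane_normal(points):
    """Least-squares plane normal of a patch via SVD: the right singular
    vector of the centered points with the smallest singular value."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]
```

    Associating such normals between consecutive frames is what then constrains the platform's rotation, with the plane offsets constraining translation.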

  8. TRACE 3-D documentation

    SciTech Connect

    Crandall, K.R.

    1987-08-01

    TRACE 3-D is an interactive beam-dynamics program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined transport system. TRACE 3-D provides an immediate graphics display of the envelopes and the phase-space ellipses and allows nine types of beam-matching options. This report describes the beam-dynamics calculations and gives detailed instruction for using the code. Several examples are described in detail.

  9. Comb-calibrated frequency-modulated continuous-wave ladar for absolute distance measurements.

    PubMed

    Baumann, Esther; Giorgetta, Fabrizio R; Coddington, Ian; Sinclair, Laura C; Knabe, Kevin; Swann, William C; Newbury, Nathan R

    2013-06-15

    We demonstrate a comb-calibrated frequency-modulated continuous-wave laser detection and ranging (FMCW ladar) system for absolute distance measurements. The FMCW ladar uses a compact external cavity laser that is swept quasi-sinusoidally over 1 THz at a 1 kHz rate. The system simultaneously records the heterodyne FMCW ladar signal and the instantaneous laser frequency at sweep rates up to 3400 THz/s, as measured against a free-running frequency comb (femtosecond fiber laser). Demodulation of the ladar signal against the instantaneous laser frequency yields the range to the target with 1 ms update rates, bandwidth-limited 130 μm resolution and a ~100 nm accuracy that is directly linked to the counted repetition rate of the comb. The precision is <100 nm at the 1 ms update rate and reaches ~6 nm for a 100 ms average. PMID:23938965
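
    After demodulation against the instantaneous laser frequency, range recovery follows the standard FMCW relation f_beat = κ·τ with round-trip delay τ = 2R/c; a minimal sketch (not the authors' processing chain):

```python
def fmcw_range(f_beat_hz, sweep_rate_hz_per_s, c=299_792_458.0):
    """Range from the FMCW heterodyne beat: the delayed return beats with
    the outgoing sweep at f_beat = sweep_rate * tau, where tau = 2R/c."""
    tau = f_beat_hz / sweep_rate_hz_per_s   # round-trip delay (s)
    return c * tau / 2.0
```

    At the quoted 3400 THz/s sweep rate, a target at 10 m produces a beat near 227 MHz. The bandwidth-limited resolution c/(2B) for a 1 THz sweep is about 150 μm, the same order as the quoted 130 μm figure.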

  10. Ground vehicle based ladar for standoff detection of road-side hazards

    NASA Astrophysics Data System (ADS)

    Hollinger, Jim; Close, Ryan

    2015-05-01

    In recent years, the number of commercially available LADAR (also referred to as LIDAR) systems has grown with the increased interest in ground vehicle robotics and aided navigation/collision avoidance in various industries. With this increased demand, the cost of these systems has dropped and their capabilities have increased. As a result of this trend, LADAR systems are becoming a cost-effective sensor to use in a number of applications of interest to the US Army. One such application is the standoff detection of road-side hazards from ground vehicles. This paper will discuss detection of road-side hazards partially concealed by light-to-medium vegetation. Current algorithms using commercially available LADAR systems for detecting these targets will be presented, along with results from relevant data sets. Additionally, optimization of commercial LADAR sensors and/or fusion with radar will be discussed as ways of increasing detection ability.

  11. Context-driven automated target detection in 3D data

    NASA Astrophysics Data System (ADS)

    West, Karen F.; Webb, Brian N.; Lersch, James R.; Pothier, Steven; Triscari, Joseph M.; Iverson, A. E.

    2004-09-01

    This paper summarizes a system, and its component algorithms, for context-driven target vehicle detection in 3-D data that was developed under the Defense Advanced Research Projects Agency (DARPA) Exploitation of 3-D Data (E3D) Program. In order to determine the power of shape and geometry for the extraction of context objects and the detection of targets, our algorithm research and development concentrated on the geometric aspects of the problem and did not utilize intensity information. Processing begins with extraction of context information and initial target detection at reduced resolution, followed by a detailed, full-resolution analysis of candidate targets. Our reduced-resolution processing includes a probabilistic procedure for finding the ground that is effective even in rough terrain; a hierarchical, graph-based approach for the extraction of context objects and potential vehicle hide sites; and a target detection process that is driven by context-object and hide-site locations. Full-resolution processing includes statistical false alarm reduction and decoy mitigation. When results are available from previously collected data, we also perform object-level change detection, which affects the probabilities that objects are context objects or targets. Results are presented for both synthetic and collected LADAR data.

  12. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently as films of higher sensitivity have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and its potential as a more comprehensive solution for the verification of complex radiation therapy treatments and for 3D dose measurement in general.

  13. Bootstrapping 3D fermions

    NASA Astrophysics Data System (ADS)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  14. High-resolution 3D imaging laser radar flight test experiments

    NASA Astrophysics Data System (ADS)

    Marino, Richard M.; Davis, W. R.; Rich, G. C.; McLaughlin, J. L.; Lee, E. I.; Stanley, B. M.; Burnside, J. W.; Rowe, G. S.; Hatch, R. E.; Square, T. E.; Skelly, L. J.; O'Brien, M.; Vasile, A.; Heinrichs, R. M.

    2005-05-01

Situation awareness and accurate Target Identification (TID) are critical requirements for successful battle management. Ground vehicles can be detected, tracked, and in some cases imaged using airborne or space-borne microwave radar. Obscurants such as camouflage netting and/or tree canopy foliage can degrade the performance of such radars. Foliage can be penetrated with long-wavelength microwave radar, but generally at the expense of imaging resolution. The goals of the DARPA Jigsaw program include the development and demonstration of high-resolution 3-D imaging laser radar (ladar) sensor technology and systems that can be used from airborne platforms to image and identify military ground vehicles that may be hiding under camouflage or foliage such as tree canopy. With DARPA support, MIT Lincoln Laboratory has developed a rugged and compact 3-D imaging ladar system that has successfully demonstrated the feasibility and utility of this application. The sensor system has been integrated into a UH-1 helicopter for winter and summer flight campaigns. The sensor operates day or night and produces high-resolution 3-D spatial images using short laser pulses and a focal plane array of Geiger-mode avalanche photodiode (APD) detectors with independent digital time-of-flight counting circuits at each pixel. The sensor technology includes Lincoln Laboratory developments of the microchip laser and novel focal plane arrays. The microchip laser is a passively Q-switched, solid-state, frequency-doubled Nd:YAG laser transmitting short laser pulses (300 ps FWHM) at a 16 kilohertz pulse rate and at 532 nm wavelength. The single-photon detection efficiency has been measured to be > 20% using these 32×32 silicon Geiger-mode APDs at room temperature. The APD saturates while providing a gain of typically > 10^6. The pulse out of the detector is used to stop a 500 MHz digital clock register integrated within the focal-plane array at each pixel. Using the detector in this binary response mode
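
The timing numbers quoted above fix the sensor's range quantization directly. As a quick back-of-the-envelope sketch (ours, not the authors'), the range bin of a time-of-flight counter is c·Δt/2, and the 300 ps pulse width contributes a comparable range blur:

```python
# Back-of-the-envelope range resolution for a Geiger-mode APD ladar with a
# 500 MHz per-pixel timing clock and 300 ps laser pulses, as in the abstract.
# Illustrative only; not code from the paper.
C = 299_792_458.0  # speed of light, m/s

def range_bin(clock_hz):
    """Range quantization of a time-of-flight counter: c / (2 * f_clk)."""
    return C / (2.0 * clock_hz)

def pulse_blur(pulse_fwhm_s):
    """Range spread contributed by the laser pulse width alone: c * tau / 2."""
    return C * pulse_fwhm_s / 2.0

print(f"500 MHz clock -> {range_bin(500e6):.3f} m range bins")
print(f"300 ps pulse  -> {pulse_blur(300e-12) * 100:.1f} cm range blur")
```

A 500 MHz clock thus quantizes range in roughly 30 cm bins, while the pulse itself blurs returns by only a few centimetres.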

  15. Asymptotic modeling of synthetic aperture ladar sensor phenomenology

    NASA Astrophysics Data System (ADS)

    Neuroth, Robert M.; Rigling, Brian D.; Zelnio, Edmund G.; Watson, Edward A.; Velten, Vincent J.; Rovito, Todd V.

    2015-05-01

Interest in the use of active electro-optical (EO) sensors for non-cooperative target identification has steadily increased as the quality and availability of EO sources and detectors have improved. A unique and recent innovation has been the development of an airborne synthetic aperture imaging capability at optical wavelengths. To effectively exploit this new data source for target identification, one must develop an understanding of target-sensor phenomenology at those wavelengths. Current high-frequency, asymptotic EM predictors are computationally intractable for such conditions, as their ray density is inversely proportional to wavelength. As a more efficient alternative, we have developed a geometric-optics-based simulation for synthetic aperture ladar that seeks to model the second-order statistics of the diffuse scattering commonly found at those wavelengths, but with much lower ray density. Code has been developed, ported to high-performance computing environments, and tested on a variety of target models.

  16. High accuracy LADAR scene projector calibration sensor development

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Cornell, Michael C.; Naumann, Charles B.; Bowden, Mark H.

    2008-04-01

A sensor system for the characterization of infrared laser radar scene projectors has been developed. Available sensor systems do not provide sufficient range resolution to evaluate the high-precision LADAR projector systems developed by the U.S. Army Research, Development and Engineering Command (RDECOM) Aviation and Missile Research, Development and Engineering Center (AMRDEC). With timing precision to a fraction of a nanosecond, it can confirm the accuracy of simulated return pulses from a nominal range of up to 6.5 km to a resolution of 4 cm. Increased range can be achieved through firmware reconfiguration. Two independent amplitude triggers measure both rise and fall times, providing a judgment of pulse shape and allowing estimation of the contained energy. Each return channel can measure up to 32 returns per trigger, characterizing each return pulse independently. Current efforts include extending the capability to eight channels. This paper outlines the development, testing, capabilities and limitations of this new sensor system.

  17. Time reversed photonic beamforming of arbitrary waveform ladar arrays

    NASA Astrophysics Data System (ADS)

    Cox, Joseph L.; Zmuda, Henry; Bussjaeger, Rebecca J.; Erdmann, Reinhard K.; Fanto, Michael L.; Hayduk, Michael J.; Malowicki, John E.

    2007-04-01

Herein is described a novel approach to adaptive photonic beam forming of an array of optical fibers for the express purpose of laser ranging. The beam-forming technique leverages the concept of time reversal, previously implemented in the sonar community, whose photonic implementation has recently been described for beamforming of ultra-wideband radar arrays. Photonic beam forming is also capable of combining the optical output of several fiber lasers into a coherent source, exactly phase matched on a predetermined target. By implementing electro-optically modulated pulses from frequency-chirped femtosecond-scale laser pulses, ladar waveforms can be generated with arbitrary spectral and temporal characteristics within the limitations of the wide-band system. Also described is a means of generating angle/angle/range measurements of illuminated targets.

  18. Concepts using optical MEMS array for ladar scene projection

    NASA Astrophysics Data System (ADS)

    Smith, J. Lynn

    2003-09-01

Scene projection for HITL testing of LADAR seekers is unique because the third dimension is time delay. Advancement in AFRL for electronic delay and pulse-shaping circuits, VCSEL emitters, fiber optics and associated scene generation is underway, and technology hand-off to test facilities is expected eventually. However, the currently projected size and cost call for cost mitigation through further innovation in system design, incorporating new developments, cooperation, and leveraging of dual-purpose technology. Therefore a concept is offered which greatly reduces the number (and thus cost) of pulse-shaping circuits and enables the projector to be installed on the mobile arm of a flight motion simulator table without fiber-optic cables. The concept calls for an optical MEMS (micro-electromechanical system) steerable micro-mirror array. IFOVs are a cluster of four micro-mirrors, each of which steers through a unique angle to a selected light source with the appropriate delay and waveform basis. An array of such sources promotes angle-to-delay mapping. Separate pulse-waveform basis circuits for each scene IFOV are not required because a single set of basis functions is broadcast to all MEMS elements simultaneously. Waveform delivery to spatial filtering and collimation optics is addressed by angular selection at the MEMS array. Emphasis is on technology in existence or under development by the government, its contractors, and the telecommunications industry. Values for components are first assumed as those that are easily available. Concept adequacy and upgrades are then discussed. In conclusion, an opto-mechanical scan option ranks as the best light source for near-term MEMS-based projector testing of both flash and scan LADAR seekers.

  19. 3D microscope

    NASA Astrophysics Data System (ADS)

    Iizuka, Keigo

    2008-02-01

    In order to circumvent the fact that only one observer can view the image from a stereoscopic microscope, an attachment was devised for displaying the 3D microscopic image on a large LCD monitor for viewing by multiple observers in real time. The principle of operation, design, fabrication, and performance are presented, along with tolerance measurements relating to the properties of the cellophane half-wave plate used in the design.

  20. Amazing Space: Explanations, Investigations, & 3D Visualizations

    NASA Astrophysics Data System (ADS)

    Summers, Frank

    2011-05-01

    The Amazing Space website is STScI's online resource for communicating Hubble discoveries and other astronomical wonders to students and teachers everywhere. Our team has developed a broad suite of materials, readings, activities, and visuals that are not only engaging and exciting, but also standards-based and fully supported so that they can be easily used within state and national curricula. These products include stunning imagery, grade-level readings, trading card games, online interactives, and scientific visualizations. We are currently exploring the potential use of stereo 3D in astronomy education.

  1. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  2. Multiviewer 3D monitor

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Aye, Tin M.; Kim, Dai Hyun; Esterkin, Vladimir; Savant, Gajendra D.

    1998-09-01

    Physical Optics Corporation has developed an advanced 3-D virtual reality system for use with simulation tools for training technical and military personnel. This system avoids such drawbacks of other virtual reality (VR) systems as eye fatigue, headaches, and alignment for each viewer, all of which are due to the need to wear special VR goggles. The new system is based on direct viewing of an interactive environment. This innovative holographic multiplexed screen technology makes it unnecessary for the viewer to wear special goggles.

  3. Multi-Sensor Data Integration for an Accurate 3D Model Generation

    NASA Astrophysics Data System (ADS)

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

The aim of this paper is to introduce a novel technique of data integration between two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and non-complex terrain. However, a 3D model generated automatically from aerial imagery typically lacks accuracy for roads under bridges, details under tree canopy, isolated trees, etc. In many cases it also suffers from undulated road surfaces, non-conforming building shapes, and loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, rooftops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each data set's weaknesses and helped create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also removed automatically. Hence, even though the two data sets were acquired in different time periods, the integrated data set, or final 3D model, was generally noise free and without unnecessary details.

  4. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone, and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real time during the surgical procedure. The CAS system aids in dealing with complex cases, with benefits for the patient, for surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training, and assessment of the difficulty of surgical procedures prior to surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727 and DE018962. PMID: 20816308

  5. Low-cost compact MEMS scanning ladar system for robotic applications

    NASA Astrophysics Data System (ADS)

    Moss, Robert; Yuan, Ping; Bai, Xiaogang; Quesada, Emilio; Sudharsanan, Rengarajan; Stann, Barry L.; Dammann, John F.; Giza, Mark M.; Lawler, William B.

    2012-06-01

Future robots and autonomous vehicles require compact, low-cost Laser Detection and Ranging (LADAR) systems for autonomous navigation. The Army Research Laboratory (ARL) recently demonstrated a brassboard short-range eye-safe MEMS scanning LADAR system for robotic applications. Boeing Spectrolab is transferring this technology under a cooperative research and development agreement (CRADA) and has built a compact MEMS scanning LADAR system with additional improvements in receiver sensitivity, the laser system, and the data processing system. Improved system sensitivity, low cost, miniaturization, and low power consumption are the main goals for the commercialization of this LADAR system. The receiver sensitivity has been improved by 2x using large-area InGaAs PIN detectors with low-noise amplifiers. The FPGA code has been updated to extend the range to 50 meters and detect up to 3 targets per pixel. Range accuracy has been improved through the implementation of an optical T-Zero input line. A compact, commercially available erbium fiber laser operating at 1550 nm wavelength is used as the transmitter, reducing the size of the LADAR system considerably from the ARL brassboard system. The computer interface has been consolidated to allow image data and configuration data (configuration settings and system status) to pass through a single Ethernet port. In this presentation we will discuss the system architecture and future improvements to receiver sensitivity using avalanche photodiodes.

  6. Ladar scene generation techniques for hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Coker, Jason S.; Coker, Charles F.; Bergin, Thomas P.

    1999-07-01

LADAR (Laser Detection and Ranging), as its name implies, uses laser-ranging technology to provide information on target and/or background signatures. When fielded in systems, LADAR can provide ranging information to onboard algorithms that in turn may use it to analyze target type and range. Real-time, closed-loop simulation of LADAR seekers in a hardware-in-the-loop (HWIL) facility provides a nondestructive testing environment for evaluating a system's capability, thereby reducing program risk and cost. However, in LADAR systems many factors can influence the quality of the data obtained, and thus have a significant impact on algorithm performance. It is therefore important to take these factors into consideration when simulating LADAR data for digital or HWIL testing. Factors considered in this paper include weak or noisy detectors, multiple returns, and weapon body dynamics. Various computer techniques that may be employed to simulate these factors are analyzed to determine their merit for real-time simulations.
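
The detector effects listed above can be mocked up very simply in a digital simulation. The sketch below is a hypothetical illustration (not the paper's simulation code): it degrades an ideal range profile with missed detections and Gaussian range noise.

```python
import random

def degrade_ranges(ranges, dropout_prob=0.05, sigma_m=0.15, seed=0):
    """Toy ladar degradation model for digital/HWIL-style testing:
    each ideal range value is either dropped (weak/noisy detector,
    reported as None) or perturbed by zero-mean Gaussian range noise.
    Parameter names and values are illustrative assumptions."""
    rng = random.Random(seed)
    degraded = []
    for r in ranges:
        if rng.random() < dropout_prob:
            degraded.append(None)  # missed detection
        else:
            degraded.append(r + rng.gauss(0.0, sigma_m))
    return degraded
```

Multi-return effects and weapon body dynamics would layer on top of this, e.g. by returning a short list of candidate ranges per pixel instead of a single value, and by perturbing the line of sight per frame.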

  7. Imagery Integration Team

    NASA Technical Reports Server (NTRS)

    Calhoun, Tracy; Melendrez, Dave

    2014-01-01

    -of-a-kind imagery assets and skill sets, such as ground-based fixed and tracking cameras, crew-in the-loop imaging applications, and the integration of custom or commercial-off-the-shelf sensors onboard spacecraft. For spaceflight applications, the Integration 2 Team leverages modeling, analytical, and scientific resources along with decades of experience and lessons learned to assist the customer in optimizing engineering imagery acquisition and management schemes for any phase of flight - launch, ascent, on-orbit, descent, and landing. The Integration 2 Team guides the customer in using NASA's world-class imagery analysis teams, which specialize in overcoming inherent challenges associated with spaceflight imagery sets. Precision motion tracking, two-dimensional (2D) and three-dimensional (3D) photogrammetry, image stabilization, 3D modeling of imagery data, lighting assessment, and vehicle fiducial marking assessments are available. During a mission or test, the Integration 2 Team provides oversight of imagery operations to verify fulfillment of imagery requirements. The team oversees the collection, screening, and analysis of imagery to build a set of imagery findings. It integrates and corroborates the imagery findings with other mission data sets, generating executive summaries to support time-critical mission decisions.

  8. 'Bonneville' in 3-D!

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called 'Bonneville' after driving approximately 13 meters (42.7 feet) to get a better vantage point. Spirit's current position is close enough to the edge to see the interior of the crater, but high enough and far enough back to get a view of all of the walls. Because scientists and rover controllers are so pleased with this location, they will stay here for at least two more martian days, or sols, to take high resolution panoramic camera images of 'Bonneville' in its entirety. Just above the far crater rim, on the left side, is the rover's heatshield, which is visible as a tiny reflective speck.

  9. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r, θ, z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD, with modifications more readily available. It is our hope that the format used here for magnetic fields can serve not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.

  10. 3D polarimetric purity

    NASA Astrophysics Data System (ADS)

    Gil, José J.; San José, Ignacio

    2010-11-01

    From our previous definition of the indices of polarimetric purity for 3D light beams [J.J. Gil, J.M. Correas, P.A. Melero and C. Ferreira, Monogr. Semin. Mat. G. de Galdeano 31, 161 (2004)], an analysis of their geometric and physical interpretation is presented. It is found that, in agreement with previous results, the first parameter is a measure of the degree of polarization, whereas the second parameter (called the degree of directionality) is a measure of the mean angular aperture of the direction of propagation of the corresponding light beam. This pair of invariant, non-dimensional, indices of polarimetric purity contains complete information about the polarimetric purity of a light beam. The overall degree of polarimetric purity is obtained as a weighted quadratic average of the degree of polarization and the degree of directionality.
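
The quantities described here can be made concrete with a small numeric sketch. Assuming Gil's usual convention, the indices of polarimetric purity are computed from the sorted eigenvalues λ1 ≥ λ2 ≥ λ3 of the 3×3 coherency matrix, and the overall degree of polarimetric purity is the weighted quadratic average mentioned in the abstract (function and variable names are ours):

```python
def purity_indices(l1, l2, l3):
    """Indices of polarimetric purity from the sorted eigenvalues
    (l1 >= l2 >= l3 >= 0) of a 3x3 polarization coherency matrix;
    a sketch based on the formulas cited in the abstract."""
    t = l1 + l2 + l3
    p1 = (l1 - l2) / t              # first index: degree of polarization
    p2 = (l1 + l2 - 2.0 * l3) / t   # second index: related to directionality
    # overall degree of polarimetric purity as a weighted quadratic average
    p_delta = ((3.0 * p1**2 + p2**2) / 4.0) ** 0.5
    return p1, p2, p_delta
```

As sanity checks: a fully polarized beam, eigenvalues (1, 0, 0), gives (1, 1, 1); a 2D unpolarized beam, (0.5, 0.5, 0), gives (0, 1, 0.5); a fully isotropic 3D beam, (1/3, 1/3, 1/3), gives (0, 0, 0).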

  11. Ground target detection based on discrete cosine transform and Rényi entropy for imaging ladar

    NASA Astrophysics Data System (ADS)

    Xu, Yuannan; Chen, Weili; Li, Junwei; Dong, Yanbing

    2016-01-01

The discrete cosine transform (DCT), whose properties allow images to be represented jointly in the spatial and spatial-frequency domains, has been applied in sequence data analysis and image fusion. For ladar intensity and range images, we study the statistical properties of the Rényi entropy of the DCT computed over a one-dimensional window, and analyze how these statistics change when man-made objects appear in the scene. On this foundation, a novel method for generating a saliency map based on the DCT and Rényi entropy is proposed. Ground target detection is then completed by segmenting the saliency map with a simple and convenient thresholding method. For ladar intensity and range images, experimental results show the proposed method can effectively detect military vehicles against complex ground backgrounds with a low false-alarm rate.
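
The two core measurements in this pipeline, a windowed 1-D DCT followed by the Rényi entropy of the coefficient energy distribution, can be sketched in a few lines. This is our illustrative reconstruction, not the authors' code; the window size and entropy order α are arbitrary choices here.

```python
import math

def dct_1d(x):
    """Naive DCT-II of a 1-D window (O(N^2), fine for short windows)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]

def renyi_entropy(coeffs, alpha=2.0):
    """Renyi entropy of the normalized energy distribution of coefficients."""
    energy = [c * c for c in coeffs]
    total = sum(energy) or 1.0
    p = [e / total for e in energy]
    s = sum(pi ** alpha for pi in p if pi > 0)
    return math.log(s) / (1.0 - alpha)

def saliency(row, win=8, alpha=2.0):
    """Sliding-window Renyi entropy over one image row: structured
    (man-made) regions concentrate DCT energy in few coefficients,
    giving an entropy signature distinct from natural clutter."""
    return [renyi_entropy(dct_1d(row[i:i + win]), alpha)
            for i in range(0, len(row) - win + 1)]
```

A constant row yields near-zero entropy everywhere (all energy in the DC coefficient), while any structured variation spreads energy over several coefficients and raises the entropy, which is the contrast the saliency map thresholds.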

  12. Thermal infrared exploitation for 3D face reconstruction

    NASA Astrophysics Data System (ADS)

    Abayowa, Bernard O.

    2009-05-01

Despite advances in face recognition research, current face recognition systems are still not accurate or robust enough to be deployed in uncontrolled environments. A pose- and illumination-invariant face recognition system is still lacking. This research exploits the relationship between thermal infrared and visible imagery to estimate a 3D face with visible texture from infrared imagery. The relationship between visible and thermal infrared texture is learned using kernel canonical correlation analysis (KCCA), and a 3D modeler is then used to estimate the geometric structure from the predicted visible imagery. This research finds its application in uncontrolled environments where illumination- and pose-invariant identification or tracking is required at long range, such as urban search and rescue (Amber alerts, missing dementia patients) and manhunt scenarios.

  13. Imaging signal-to-noise ratio of synthetic aperture ladar

    NASA Astrophysics Data System (ADS)

    Liu, Liren

    2015-09-01

On the basis of Poisson photocurrent statistics in photon-limited heterodyne detection, this paper derives the signal-to-noise ratios at the receiver in the time domain and on the focused 1-D and 2-D images in the space domain, for both down-looking and side-looking synthetic aperture imaging ladars (SAILs) using PIN or APD photodiodes. The major shot noises in the down-looking SAIL and the side-looking SAIL arise, respectively, from the dark current of the photodiode and the local beam current. It is found that the ratio of the 1-D image SNR to the receiver SNR is proportional to the number of resolution elements in the cross-travel direction, and the ratio of the 2-D image SNR to the 1-D image SNR is proportional to the number of resolution elements in the travel direction. The sensitivity, the effect of the Fourier transform of the sampled signal, and the influence of the time response of the detection circuit are also discussed. The study will help in correctly designing a SAIL system.

  14. The Enhanced-model Ladar Wind Sensor and Its Application in Planetary Wind Velocity Measurements

    NASA Technical Reports Server (NTRS)

    Soreide, D. C.; Mcgann, R. L.; Erwin, L. L.; Morris, D. J.

    1993-01-01

    For several years we have been developing an optical air-speed sensor that has a clear application as a meteorological wind-speed sensor for the Mars landers. This sensor has been developed for aircraft use to replace the familiar, pressure-based Pitot probe. Our approach utilizes a new concept in the laser-based optical measurement of air velocity (the Enhanced-Mode Ladar), which allows us to make velocity measurements with significantly lower laser power than conventional methods. The application of the Enhanced-Mode Ladar to measuring wind speeds in the martian atmosphere is discussed.

  15. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  16. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  17. Development of an automultiscopic true 3D display (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Kurtz, Russell M.; Pradhan, Ranjit D.; Aye, Tin M.; Yu, Kevin H.; Okorogu, Albert O.; Chua, Kang-Bin; Tun, Nay; Win, Tin; Schindler, Axel

    2005-05-01

    True 3D displays, whether generated by volume holography, merged stereopsis (requiring glasses), or autostereoscopic methods (stereopsis without the need for special glasses), are useful in a great number of applications, ranging from training through product visualization to computer gaming. Holography provides an excellent 3D image but cannot yet be produced in real time, merged stereopsis results in accommodation-convergence conflict (where distance cues generated by the 3D appearance of the image conflict with those obtained from the angular position of the eyes) and lacks parallax cues, and autostereoscopy produces a 3D image visible only from a small region of space. Physical Optics Corporation is developing the next step in real-time 3D displays, the automultiscopic system, which eliminates accommodation-convergence conflict, produces 3D imagery from any position around the display, and includes true image parallax. Theory of automultiscopic display systems is presented, together with results from our prototype display, which produces 3D video imagery with full parallax cues from any viewing direction.

  18. Visualization of 3D Geological Models on Google Earth

    NASA Astrophysics Data System (ADS)

    Choi, Y.; Um, J.; Park, M.

    2013-05-01

Google Earth combines satellite imagery, aerial photography, thematic maps and various data sets to make a three-dimensional (3D) interactive image of the world. Currently, Google Earth is a popular visualization tool in a variety of fields and plays an increasingly important role not only for private users in daily life, but also for scientists, practitioners, policymakers and stakeholders in research and application. In this study, a method to visualize 3D geological models on Google Earth is presented. COLLAborative Design Activity (COLLADA, an open standard XML schema for establishing interactive 3D applications) was used to represent different 3D geological models such as boreholes, fence sections, surface-based 3D volumes and 3D grids by triangle meshes (a set of triangles connected by their common edges or corners). In addition, we designed Keyhole Markup Language (KML, the XML-based scripting language of Google Earth) codes to import the COLLADA files into the 3D render window of Google Earth. The method was applied to the Grosmont formation in Alberta, Canada. The application showed that the combination of COLLADA and KML enables Google Earth to effectively visualize 3D geological structures and properties.

Figure: Visualization of the (a) boreholes, (b) fence sections, (c) 3D volume model and (d) 3D grid model of the Grosmont formation on Google Earth.
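
The COLLADA-plus-KML mechanism described here amounts to a KML Placemark whose Model element links a .dae triangle mesh to geographic coordinates. A minimal hand-written sketch follows; the coordinates and file name are invented for illustration and are not from the paper:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Borehole mesh (COLLADA)</name>
    <Model>
      <altitudeMode>relativeToGround</altitudeMode>
      <Location>
        <longitude>-112.30</longitude>
        <latitude>53.00</latitude>
        <altitude>0</altitude>
      </Location>
      <Link>
        <href>models/borehole.dae</href>
      </Link>
    </Model>
  </Placemark>
</kml>
```

Google Earth loads the referenced COLLADA file and renders its triangle meshes at the given location, which is how each model type (borehole, fence section, volume, grid) can be placed in the 3D render window.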

  19. Simulation of 3D infrared scenes using random fields model

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Zhang, Jianqi

    2001-09-01

    Analysis and simulation of smart munitions requires imagery for the munition's sensor to view. Traditional infrared background simulations are limited to planar scenes. A new method is described to synthesize images in 3D views with varied terrain textures. We develop random fields models and temperature fields to simulate 3D infrared scenes. The generalized long-correlation (GLC) model, one of the random field models, generates both the 3D terrain skeleton data and the terrain texture in this work. To build the terrain mesh from the random fields, digital elevation models (DEM) are introduced in the paper, and texture mapping pastes the texture onto the concave-convex surfaces of the 3D scene. Simulation using the random fields model is an effective way to produce 3D infrared scenes with great randomness and realism.
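
    Random-field terrain synthesis of the kind described above is commonly done by spectral filtering of white noise. The sketch below uses a generic power-law (fractal) spectrum as a stand-in for the paper's GLC model; the exponent `beta` and grid size are illustrative assumptions.

```python
import numpy as np

# Sketch: synthesize a random-field heightmap (DEM skeleton) by filtering
# white noise with a power-law spectrum in the Fourier domain. This is a
# generic stand-in for the paper's GLC model, not its actual formulation.
def random_terrain(n=64, beta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                        # avoid division by zero at DC
    spectrum = np.fft.fft2(noise) / f**(beta / 2.0)
    spectrum[0, 0] = 0.0                 # zero-mean heightfield
    z = np.real(np.fft.ifft2(spectrum))
    return (z - z.min()) / (z.max() - z.min())   # normalize to [0, 1]

dem = random_terrain()
```

    Larger `beta` yields smoother, longer-correlated terrain; the normalized heightmap can then be meshed and texture-mapped as the abstract describes.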

  20. A 3D Cloud-Construction Algorithm for the EarthCARE Satellite Mission

    NASA Technical Reports Server (NTRS)

    Barker, H. W.; Jerg, M. P.; Wehr, T.; Kato, S.; Donovan, D. P.; Hogan, R. J.

    2011-01-01

    This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data.

  1. Jigsaw phase III: a miniaturized airborne 3-D imaging laser radar with photon-counting sensitivity for foliage penetration

    NASA Astrophysics Data System (ADS)

    Vaidyanathan, Mohan; Blask, Steven; Higgins, Thomas; Clifton, William; Davidsohn, Daniel; Carson, Ryan; Reynolds, Van; Pfannenstiel, Joanne; Cannata, Richard; Marino, Richard; Drover, John; Hatch, Robert; Schue, David; Freehart, Robert; Rowe, Greg; Mooney, James; Hart, Carl; Stanley, Byron; McLaughlin, Joseph; Lee, Eui-In; Berenholtz, Jack; Aull, Brian; Zayhowski, John; Vasile, Alex; Ramaswami, Prem; Ingersoll, Kevin; Amoruso, Thomas; Khan, Imran; Davis, William; Heinrichs, Richard

    2007-04-01

    Jigsaw three-dimensional (3D) imaging laser radar is a compact, lightweight system for imaging highly obscured targets through dense foliage semi-autonomously from an unmanned aircraft. The Jigsaw system uses a gimbaled sensor operating in a spotlight mode to laser illuminate a cued target, and autonomously capture and produce the 3D image of hidden targets under trees at high 3D voxel resolution. With our MIT Lincoln Laboratory team members, the sensor system has been integrated into a geo-referenced 12-inch gimbal, and used in airborne data collections from a UH-1 manned helicopter, which served as a surrogate platform for the purpose of data collection and system validation. In this paper, we discuss the results from the ground integration and testing of the system, and the results from UH-1 flight data collections. We also discuss the performance results of the system obtained using ladar calibration targets.

  2. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Cañada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  3. Noise filtering techniques for photon-counting ladar data

    NASA Astrophysics Data System (ADS)

    Magruder, Lori A.; Wharton, Michael E., III; Stout, Kevin D.; Neuenschwander, Amy L.

    2012-06-01

    Many of the recent small, low-power ladar systems provide detection sensitivities at the photon level for altimetry applications. These "photon-counting" instruments are often the operational solution for high-altitude or space-based platforms, where low signal strength and size limitations must be accommodated. Despite the many existing algorithms for lidar data product generation, there remains a void in techniques for handling the increased noise level in photon-counting measurements, as the larger analog systems do not exhibit such low SNR. Solar background noise poses a significant challenge to accurately extracting surface features from the data; thus, filtering is required prior to other post-processing efforts. This paper presents several methodologies for noise filtering photon-counting data. Techniques include modified Canny edge detection, PDF-based signal extraction, and localized statistical analysis. The Canny edge detection identifies features in a rasterized data product using a Gaussian filter and gradient calculation to extract signal photons. PDF-based analysis matches local probability density functions with the aggregate, thereby extracting probable signal points. The localized statistical method assigns thresholding values based on a weighted local mean of angular variances. These approaches have demonstrated the ability to remove noise and subsequently provide accurate surface (ground/canopy) determination. The results presented here are based on analysis of multiple data sets acquired with the high-altitude NASA MABEL system, along with photon-counting data supplied by Sigma Space Inc. configured to simulate the expected data product of the instrument for NASA's upcoming ICESat-2 mission.
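
    The intuition behind the localized statistical approach is that signal photons cluster along the surface while solar-background photons are sparse. A minimal density-based sketch, assuming illustrative window sizes and threshold (not the paper's actual weighted angular-variance statistic):

```python
import numpy as np

# Sketch of local-density noise filtering for photon-counting altimetry:
# keep a photon as signal if enough neighbours fall inside an
# along-track/height window. Window sizes (dx, dz) and the neighbour
# threshold are illustrative assumptions.
def density_filter(along_track, height, dx=10.0, dz=2.0, min_neighbors=3):
    x = np.asarray(along_track, float)
    z = np.asarray(height, float)
    keep = np.zeros(x.size, dtype=bool)
    for i in range(x.size):
        near = (np.abs(x - x[i]) < dx) & (np.abs(z - z[i]) < dz)
        keep[i] = near.sum() - 1 >= min_neighbors   # exclude the photon itself
    return keep

# Dense "surface" photons plus a few sparse solar-background photons:
x = np.concatenate([np.linspace(0, 100, 200), [5.0, 50.0, 95.0]])
z = np.concatenate([np.zeros(200), [40.0, -35.0, 60.0]])
mask = density_filter(x, z)
```

    The dense surface track survives the filter while the isolated background photons are rejected; real implementations vectorize the neighbour search with a spatial index.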

  4. Integration and demonstration of MEMS-scanned LADAR for robotic navigation

    NASA Astrophysics Data System (ADS)

    Stann, Barry L.; Dammann, John F.; Del Giorno, Mark; DiBerardino, Charles; Giza, Mark M.; Powers, Michael A.; Uzunovic, Nenad

    2014-06-01

    LADAR is among the pre-eminent sensor modalities for autonomous vehicle navigation. Size, weight, power and cost constraints impose significant practical limitations on perception systems intended for small ground robots. In recent years, the Army Research Laboratory (ARL) developed a LADAR architecture based on a MEMS mirror scanner that fundamentally improves the trade-offs between these limitations and sensor capability. We describe how the characteristics of a highly developed prototype correspond to and satisfy the requirements of autonomous navigation and the experimental scenarios of the ARL Robotics Collaborative Technology Alliance (RCTA) program. In particular, the long maximum and short minimum range capability of the ARL MEMS LADAR makes it remarkably suitable for a wide variety of scenarios from building mapping to the manipulation of objects at close range, including dexterous manipulation with robotic arms. A prototype system was applied to a small (approximately 50 kg) unmanned robotic vehicle as the primary mobility perception sensor. We present the results of a field test where the perception information supplied by the LADAR system successfully accomplished the experimental objectives of an Integrated Research Assessment (IRA).

  5. Case study: The Avengers 3D: cinematic techniques and digitally created 3D

    NASA Astrophysics Data System (ADS)

    Clark, Graham D.

    2013-03-01

    Marvel's THE AVENGERS was the third film Stereo D collaborated on with Marvel. It was a summation of our artistic development, of what Digitally Created 3D and Stereo D's artists and toolsets afford Marvel's filmmakers: the ability to shape stereographic space to support the film and story in a way that balances human perception and live photography. We took our artistic lead from the cinematic intentions of Marvel, the Director Joss Whedon, and Director of Photography Seamus McGarvey. In the digital creation of a 3D film from a 2D image capture, Stereo D offers the filmmakers recommendations on cinematic technique at each step from pre-production onwards, through set, and into post. As the footage arrives at our facility we respond in depth to the cinematic qualities of the imagery in the context of the edit and story, creating stereoscopic imagery under the guidance of the Directors and Studio. Our involvement in The Avengers began early in production; after reading the script we had the opportunity and honor to meet and work with the Director Joss Whedon and DP Seamus McGarvey on set and into post. We presented what is obvious to such great filmmakers in the ways of cinematic techniques as they relate to the standard depth cues and story points we would use to evaluate depth for their film, in the hope that any cinematic habits that supported better 3D would be emphasized. In searching for a 3D statement for the studio and filmmakers, we arrived at a stereographic style that allows for comfort and maximum visual engagement for the viewer.

  6. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begoña García-Lorenzo and Arlette Pécontal-Rousset.

  7. IFSAR processing for 3D target reconstruction

    NASA Astrophysics Data System (ADS)

    Austin, Christian D.; Moses, Randolph L.

    2005-05-01

    In this paper we investigate the use of interferometric synthetic aperture radar (IFSAR) processing for the 3D reconstruction of radar targets. A major source of reconstruction error is induced by multiple scattering responses in a resolution cell, giving rise to height errors. We present a model for multiple scattering centers and analyze the errors that result using traditional IFSAR height estimation. We present a simple geometric model that characterizes the height error and suggests tests for detecting or reducing this error. We consider the use of image magnitude difference as a test statistic to detect multiple scattering responses in a resolution cell, and we analyze the resulting height error reduction and hypothesis test performance using this statistic. Finally, we consider phase linearity test statistics when three or more IFSAR images are available. Examples using synthetic Xpatch backhoe imagery are presented.
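
    The height error the abstract analyzes follows from the basic IFSAR relation, in which interferometric phase maps linearly to height. A sketch of that relation and of the bias introduced by two scatterers sharing a cell, with all geometry parameters illustrative:

```python
import numpy as np

# Sketch of the flat-earth IFSAR height relation: for a single dominant
# scatterer, h = lam * R * sin(theta) * phi / (2 * pi * B). Wavelength,
# range, look angle, and baseline values here are illustrative assumptions.
def ifsar_height(phi, lam=0.03, R=10e3, theta=np.deg2rad(45), B=1.0):
    return lam * R * np.sin(theta) * phi / (2 * np.pi * B)

# Two scatterers in one resolution cell: the phase of the summed complex
# response is not the mean of the two phases, so the estimated height is
# biased -- the error source the paper models and tests for.
phi1, phi2 = 0.2, 1.0
mixed_phase = np.angle(np.exp(1j * phi1) + 0.8 * np.exp(1j * phi2))
bias = ifsar_height(mixed_phase) - ifsar_height((phi1 + phi2) / 2)
```

    With these numbers the mixed-phase height differs from the two-scatterer average by over a meter, illustrating why multiple-scatterer detection (e.g. via magnitude-difference or phase-linearity statistics) is worthwhile.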

  8. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called 'lidar' (light detection and ranging) to create a highly accurate virtual reality map of the Nation. 3D maps have many uses, with new applications being discovered all the time.

  9. Rapid high-fidelity visualisation of multispectral 3D mapping

    NASA Astrophysics Data System (ADS)

    Tudor, Philip M.; Christy, Mark

    2011-06-01

    Mobile LIDAR scanning typically provides captured 3D data in the form of 3D 'Point Clouds'. Combined with colour imagery these data produce coloured point clouds or, if further processed, polygon-based 3D models. The use of point clouds is simple and rapid, but visualisation can appear ghostly and diffuse. Textured 3D models provide high fidelity visualisation, but their creation is time consuming, difficult to automate and can modify key terrain details. This paper describes techniques for the visualisation of fused multispectral 3D data that approach the visual fidelity of polygon-based models with the rapid turnaround and detail of 3D point clouds. The general approaches to data capture and data fusion are identified as well as the central underlying mathematical transforms, data management and graphics processing techniques used to support rapid, interactive visualisation of very large multispectral 3D datasets. Performance data with respect to real-world 3D mapping as well as illustrations of visualisation outputs are included.

  10. World Wind 3D Earth Viewing

    NASA Technical Reports Server (NTRS)

    Hogan, Patrick; Maxwell, Christopher; Kim, Randolph; Gaskins, Tom

    2007-01-01

    World Wind allows users to zoom from satellite altitude down to any place on Earth, leveraging high-resolution LandSat imagery and SRTM (Shuttle Radar Topography Mission) elevation data to experience Earth in visually rich 3D. In addition to Earth, World Wind can also visualize other planets, and there are already comprehensive data sets for Mars and the Earth's moon, which are as easily accessible as those of Earth. There have been more than 20 million downloads to date, and the software is being used heavily by the Department of Defense due to the code's ability to be extended and the evolution of the code courtesy of NASA and the user community. Primary features include the dynamic access to public domain imagery and its ease of use. All one needs to control World Wind is a two-button mouse. Additional guides and features can be accessed through a simplified menu. A JAVA version will be available soon. Navigation is automated with single clicks of a mouse, or by typing in any location to automatically zoom in to see it. The World Wind install package contains the necessary requirements such as the .NET runtime and managed DirectX library. World Wind can display combinations of data from a variety of sources, including Blue Marble, LandSat 7, SRTM, NASA Scientific Visualization Studio, GLOBE, and much more. A thorough list of features, the user manual, a key chart, and screen shots are available at http://worldwind.arc.nasa.gov.

  11. Target surface finding using 3D SAR data

    NASA Astrophysics Data System (ADS)

    Ruiter, Jason R.; Burns, Joseph W.; Subotic, Nikola S.

    2005-05-01

    Methods of generating more literal, easily interpretable imagery from 3-D SAR data are being studied to provide all weather, near-visual target identification and/or scene interpretation. One method of approaching this problem is to automatically generate shape-based geometric renderings from the SAR data. In this paper we describe the application of the Marching Tetrahedrons surface finding algorithm to 3-D SAR data. The Marching Tetrahedrons algorithm finds a surface through the 3-D data cube, which provides a recognizable representation of the target surface. This algorithm was applied to the public-release X-patch simulations of a backhoe, which provided densely sampled 3-D SAR data sets. The robustness of the algorithm to noise and spatial resolution was explored. Surface renderings were readily recognizable over a range of spatial resolutions, and maintained their fidelity even under relatively low Signal-to-Noise Ratio (SNR) conditions.
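
    The core idea of iso-surface extraction from a 3-D data cube can be sketched without the full Marching Tetrahedrons machinery: locate the cells where the data crosses the iso-level. The crude voxel-shell version below is a stand-in (Marching Tetrahedrons would additionally fit triangles through each crossing cell); the test volume is a synthetic ball, not SAR data.

```python
import numpy as np

# Crude stand-in for iso-surface extraction: mark voxels above the iso-level
# that have at least one 6-connected neighbour below it. This locates the
# surface shell; Marching Tetrahedrons would triangulate these cells.
def surface_voxels(cube, level):
    inside = cube > level
    shell = np.zeros_like(inside)
    for axis in (0, 1, 2):
        for shift in (1, -1):
            neighbor = np.roll(inside, shift, axis=axis)
            shell |= inside & ~neighbor
    return shell

# Synthetic solid ball in a noiseless cube: the extracted shell is hollow.
z, y, x = np.mgrid[-16:16, -16:16, -16:16]
cube = (np.sqrt(x**2 + y**2 + z**2) < 10).astype(float)
shell = surface_voxels(cube, 0.5)
```

    Note that `np.roll` wraps at the cube edges, which is harmless here because the ball does not touch them; a production implementation would pad instead.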

  12. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  13. Modular 3-D Transport model

    EPA Science Inventory

    MT3D was first developed by Chunmiao Zheng in 1990 at S.S. Papadopulos & Associates, Inc. with partial support from the U.S. Environmental Protection Agency (USEPA). Starting in 1990, MT3D was released as a public domain code from the USEPA. Commercial versions with enhanced capab...

  14. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging beyond real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the volumes estimated by 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Great differences were found among the three techniques in the estimated volumes of the liver findings. 3D ultrasound represents a valuable method to judge morphological appearance in abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to determine whether definite differentiation between benign and malignant findings is possible. PMID:7919882

  15. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  16. 3D World Building System

    ScienceCinema

    None

    2014-02-26

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  17. LLNL-Earth3D

    Energy Science and Technology Software Center (ESTSC)

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  18. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift.
Several talks were devoted to reporting recent observations with newly

  19. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  20. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  1. 3D wavefront image formation for NIITEK GPR

    NASA Astrophysics Data System (ADS)

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990s, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.
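
    Ground-return suppression of the kind the abstract describes is often introduced via the classic non-adaptive baseline: subtract the average trace from every along-track position, which removes returns common to all traces (the flat ground bounce) while preserving localized targets. The paper uses an adaptive filter; the sketch below is the simplified mean-trace stand-in, on synthetic data.

```python
import numpy as np

# Simplified stand-in for GPR ground-return suppression: subtract the mean
# trace (the background common to all antenna positions) from each trace.
def remove_background(bscan):
    # bscan: (n_traces, n_samples), one row per along-track antenna position
    return bscan - bscan.mean(axis=0, keepdims=True)

# Synthetic B-scan: constant ground bounce plus one weak localized target.
n_traces, n_samples = 50, 100
data = np.zeros((n_traces, n_samples))
data[:, 20] = 5.0          # flat ground return present in every trace
data[25, 60] = 1.0         # weak point target in a single trace
clean = remove_background(data)
```

    After subtraction the strong ground return cancels exactly while the target loses only 1/n of its amplitude, which is why background removal is a standard first step before SAR-style image formation.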

  2. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  3. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  4. 3D sensor algorithms for spacecraft pose determination

    NASA Astrophysics Data System (ADS)

    Trenkle, John M.; Tchoryk, Peter, Jr.; Ritter, Greg A.; Pavlich, Jane C.; Hickerson, Aaron S.

    2006-05-01

    Researchers at the Michigan Aerospace Corporation have developed accurate and robust 3-D algorithms for pose determination (position and orientation) of satellites as part of an on-going effort supporting autonomous rendezvous, docking and space situational awareness activities. 3-D range data from a LAser Detection And Ranging (LADAR) sensor is the expected input; however, the approach is unique in that the algorithms are designed to be sensor independent. Parameterized inputs allow the algorithms to be readily adapted to any sensor of opportunity. The cornerstone of our approach is the ability to simulate realistic range data that may be tailored to the specifications of any sensor. We were able to modify an open-source raytracing package to produce point cloud information from which high-fidelity simulated range images are generated. The assumptions made in our experimentation are as follows: 1) we have access to a CAD model of the target including information about the surface scattering and reflection characteristics of the components; 2) the satellite of interest may appear at any 3-D attitude; 3) the target is not necessarily rigid, but does have a limited number of configurations; and, 4) the target is not obscured in any way and is the only object in the field of view of the sensor. Our pose estimation approach then involves rendering a large number of exemplars (100k to 5M), extracting 2-D (silhouette- and projection-based) and 3-D (surface-based) features, and then training ensembles of decision trees to predict: a) the 4-D regions on a unit hypersphere into which the unit quaternion that represents the vehicle [Qx, Qy, Qz, Qw] is pointing, and, b) the components of that unit quaternion. Results have been quite promising and the tools and simulation environment developed for this application may also be applied to non-cooperative spacecraft operations, Autonomous Hazard Detection and Avoidance (AHDA) for landing craft, terrain mapping, vehicle
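
    When attitude is predicted as a unit quaternion [qx, qy, qz, qw], the natural way to score a prediction against ground truth is the angular distance between the two rotations. A minimal sketch (the metric itself is standard; its use here as the scoring function is an assumption about how such predictions would be evaluated):

```python
import numpy as np

# Angular error between two attitudes expressed as unit quaternions
# [qx, qy, qz, qw]. The absolute value of the dot product handles the
# double cover: q and -q represent the same rotation.
def quat_angle(q1, q2):
    q1 = np.asarray(q1, float) / np.linalg.norm(q1)
    q2 = np.asarray(q2, float) / np.linalg.norm(q2)
    dot = abs(np.dot(q1, q2))
    return 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))

identity = [0.0, 0.0, 0.0, 1.0]
yaw_90 = [0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)]  # 90 deg about z
err = np.degrees(quat_angle(identity, yaw_90))             # -> 90 degrees
```

    The same metric also defines the "4-D regions on a unit hypersphere" partitioning: exemplars can be binned by their nearest region-center quaternion under this distance.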

  5. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three-dimensional laser radar imagery, for use with a robotic system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide it with the information needed to fetch and grasp targets in a space-type scenario.

  6. Bioprinting of 3D hydrogels.

    PubMed

    Stanton, M M; Samitier, J; Sánchez, S

    2015-08-01

    Three-dimensional (3D) bioprinting has recently emerged as an extension of 3D material printing, by using biocompatible or cellular components to build structures in an additive, layer-by-layer methodology for encapsulation and culture of cells. These 3D systems allow for cell culture in a suspension for formation of highly organized tissue or controlled spatial orientation of cell environments. The in vitro 3D cellular environments simulate the complexity of an in vivo environment and natural extracellular matrices (ECM). This paper will focus on bioprinting utilizing hydrogels as 3D scaffolds. Hydrogels are advantageous for cell culture as they are highly permeable to cell culture media, nutrients, and waste products generated during metabolic cell processes. They have the ability to be fabricated in customized shapes with various material properties with dimensions at the micron scale. 3D hydrogels are a reliable method for biocompatible 3D printing and have applications in tissue engineering, drug screening, and organ on a chip models. PMID:26066320

  7. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
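
    A minimal sketch of the kind of disparity-model fit the abstract suggests (illustrative only; `fit_vertical_disparity` and its three-parameter model are assumptions, not the paper's algorithm): vertical disparity between matched keypoints is modeled as an affine function of image position and estimated by least squares, so frames with large residuals can be flagged as containing erroneous matches.

```python
import numpy as np

def fit_vertical_disparity(left_pts, right_pts):
    """Fit dy = a + b*x + c*y to matched keypoints (Nx2 arrays of [x, y]),
    where a ~ vertical offset, b ~ roll, c ~ scale/pitch coupling.
    Returns the coefficients and the RMS residual of the fit."""
    dy = left_pts[:, 1] - right_pts[:, 1]          # per-match vertical disparity
    A = np.column_stack([np.ones(len(left_pts)), left_pts[:, 0], left_pts[:, 1]])
    coef, *_ = np.linalg.lstsq(A, dy, rcond=None)  # least-squares solution
    resid = dy - A @ coef
    return coef, float(np.sqrt(np.mean(resid ** 2)))
```

    In practice the coefficients would be accumulated over many frames, and frames whose RMS residual exceeds a threshold would be discarded before correcting the remaining vertical disparity.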

  8. Arena3D: visualization of biological networks in 3D

    PubMed Central

    Pavlopoulos, Georgios A; O'Donoghue, Seán I; Satagopam, Venkata P; Soldatos, Theodoros G; Pafilis, Evangelos; Schneider, Reinhard

    2008-01-01

    Background: Complexity is a key problem when visualizing biological networks; as the number of entities increases, most graphical views become incomprehensible. Our goal is to enable many thousands of entities to be visualized meaningfully and with high performance. Results: We present a new visualization tool, Arena3D, which introduces a new concept of staggered layers in 3D space. Related data – such as proteins, chemicals, or pathways – can be grouped onto separate layers and arranged via layout algorithms, such as Fruchterman-Reingold, distance geometry, and a novel hierarchical layout. Data on a layer can be clustered via k-means, affinity propagation, Markov clustering, neighbor joining, tree clustering, or UPGMA ('unweighted pair-group method with arithmetic mean'). A simple input format defines the name and URL for each node, and defines connections or similarity scores between pairs of nodes. The use of Arena3D is illustrated with datasets related to Huntington's disease. Conclusion: Arena3D is a user-friendly visualization tool that is able to visualize biological or any other network in 3D space. It is free for academic use and runs on any platform. It can be downloaded or launched directly from . The Java3D library and Java 1.5 need to be pre-installed for the software to run. PMID:19040715

  9. Fdf in US3D

    NASA Astrophysics Data System (ADS)

    Otis, Collin; Ferrero, Pietro; Candler, Graham; Givi, Peyman

    2013-11-01

    The scalar filtered mass density function (SFMDF) methodology is implemented into the computer code US3D. This is an unstructured Eulerian finite volume hydrodynamic solver that has proven very effective for simulation of compressible turbulent flows. The resulting SFMDF-US3D code is employed for large eddy simulation (LES) on unstructured meshes. Simulations are conducted of subsonic and supersonic flows under non-reacting and reacting conditions. The consistency and accuracy of the simulated results are assessed, along with an appraisal of the overall performance of the methodology. SFMDF-US3D is now capable of simulating high-speed flows in complex configurations.

  10. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target with a pulse flight-time measurement method. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. In order to remove the effect of sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high-range-resolution images at a low sampling rate.
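
    The spatial-correlation step common to ghost imaging schemes can be sketched as follows (a conventional second-order correlation reconstruction; the heterodyne ranging part of HGI is not modeled here): the image is recovered as G(x) = <I(x)S> - <I(x)><S>, where I(x) are the random speckle patterns and S is the bucket-detector signal.

```python
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((8, 8))
obj[2:6, 3:5] = 1.0                         # simple binary transmission object
n_patterns = 5000
patterns = rng.random((n_patterns, 8, 8))   # random speckle illumination patterns
flat = patterns.reshape(n_patterns, -1)
bucket = flat @ obj.ravel()                 # bucket (single-pixel) detector signal

# Second-order correlation: G(x) = <I(x) S> - <I(x)> <S>
G = (flat.T @ bucket) / n_patterns - flat.mean(axis=0) * bucket.mean()
G = G.reshape(8, 8)                         # reconstructed ghost image
```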

  11. Wavefront construction in 3-D

    SciTech Connect

    Chilcoat, S.R. Hildebrand, S.T.

    1995-12-31

    Travel time computation in inhomogeneous media is essential for pre-stack Kirchhoff imaging in areas such as the sub-salt province in the Gulf of Mexico. The 2D algorithm published by Vinje et al. has been extended to 3D to compute wavefronts in complicated inhomogeneous media. The 3D wavefront construction algorithm provides many advantages over conventional ray tracing and other methods of computing travel times in 3D. The algorithm dynamically maintains a reasonably consistent ray density without making a priori guesses at the number of rays to shoot. The determination of caustics in 3D is a straightforward geometric procedure. The wavefront algorithm also enables the computation of multi-valued travel time surfaces.
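
    The dynamic ray-density maintenance mentioned above can be sketched in miniature (a hypothetical helper using simple midpoint interpolation, not the paper's ray insertion rule): whenever two neighbouring rays on the current wavefront separate beyond a threshold, a new ray is interpolated between them so the fan stays roughly evenly sampled.

```python
import numpy as np

def refine_ray_fan(angles, positions, max_gap):
    """Wavefront-construction style refinement: walk along the current
    wavefront and, wherever two neighbouring rays have drifted farther
    apart than max_gap, insert an interpolated ray between them."""
    new_a, new_p = [angles[0]], [positions[0]]
    for i in range(1, len(angles)):
        if np.linalg.norm(positions[i] - positions[i - 1]) > max_gap:
            new_a.append(0.5 * (angles[i] + angles[i - 1]))       # new take-off angle
            new_p.append(0.5 * (positions[i] + positions[i - 1])) # new wavefront point
        new_a.append(angles[i])
        new_p.append(positions[i])
    return np.array(new_a), np.array(new_p)
```

    A full implementation would re-trace the inserted ray from the source rather than interpolate its position, but the density-maintenance logic is the same.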

  12. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3d-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  13. Advances in ground vehicle-based LADAR for standoff detection of road-side hazards

    NASA Astrophysics Data System (ADS)

    Hollinger, Jim; Vessey, Alyssa; Close, Ryan; Middleton, Seth; Williams, Kathryn; Rupp, Ronald; Nguyen, Son

    2016-05-01

    Commercial sensor technology has the potential to bring cost-effective sensors to a number of U.S. Army applications. By using sensors built for widespread commercial applications, such as the automotive market, the Army can decrease costs of future systems while increasing overall capabilities. Additional sensors operating in alternate and orthogonal modalities can also be leveraged to gain a broader spectrum measurement of the environment. Leveraging multiple phenomenologies can reduce false alarms and make detection algorithms more robust to varied concealment materials. In this paper, this approach is applied to the detection of roadside hazards partially concealed by light-to-medium vegetation. This paper will present advances in detection algorithms using a ground vehicle-based commercial LADAR system. The benefits of augmenting a LADAR with millimeter-wave automotive radar and results from relevant data sets are also discussed.

  14. Optical imaging process based on two-dimensional Fourier transform for synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Sun, Zhiwei; Zhi, Ya'nan; Liu, Liren; Sun, Jianfeng; Zhou, Yu; Hou, Peipei

    2013-09-01

    Synthetic aperture imaging ladar (SAIL) systems typically generate large amounts of data that are difficult to compress with digital methods. This paper presents an optical SAIL processor based on compensation of the quadratic phase of the echo in the azimuth direction and a two-dimensional Fourier transform. The optical processor mainly consists of a phase-only liquid crystal spatial light modulator (LCSLM) to load the phase data of the target echo, a cylindrical lens to compensate the quadratic phase, and a spherical lens to perform the two-dimensional Fourier transform. We show the image processing result for a practical target echo obtained by a synthetic aperture imaging ladar demonstrator. The optical processor is compact and lightweight, and provides inherently parallel, speed-of-light computing capability; it has a promising future especially in onboard and satellite-borne SAIL systems.

  15. The laser linewidth effect on the image quality of phase coded synthetic aperture ladar

    NASA Astrophysics Data System (ADS)

    Cai, Guangyu; Hou, Peipei; Ma, Xiaoping; Sun, Jianfeng; Zhang, Ning; Li, Guangyuan; Zhang, Guo; Liu, Liren

    2015-12-01

    The phase-coded (PC) waveform in synthetic aperture ladar (SAL) outperforms the linear frequency modulated (LFM) signal: it has lower side lobes and shorter pulse duration, and it makes rigid control of the chirp starting point in every pulse unnecessary. Building on radar PC waveforms and strip-map SAL, the backscattered signal of a point target in PC SAL is derived, and a two-dimensional matched filtering algorithm is introduced to focus a point image. As an inherent property of lasers, linewidth is always detrimental to coherent ladar imaging. Using a widely adopted laser linewidth model, the effect of laser linewidth on SAL image quality is theoretically analyzed and examined via Monte Carlo simulation. The research gives a clear view of how to select linewidth parameters for future PC SAL systems.
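
    The linewidth effect can be illustrated with a small Monte Carlo sketch (an assumption-laden toy model, not the paper's simulation): laser phase noise is modeled as a Wiener process whose increments have variance 2π·Δν·dt for linewidth Δν, and the magnitude of the time-averaged complex field serves as a proxy for the coherence available to the matched filter, which degrades as Δν grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def coherence(linewidth_hz, dt=1e-9, n=2000, trials=200):
    """Monte Carlo coherence proxy for a laser of the given linewidth.
    Phase follows a Wiener process: increments ~ N(0, 2*pi*linewidth*dt).
    Returns the trial-averaged magnitude of the time-averaged field."""
    dphi = rng.normal(0.0, np.sqrt(2.0 * np.pi * linewidth_hz * dt), (trials, n))
    phi = np.cumsum(dphi, axis=1)                 # random-walk phase history
    return float(np.abs(np.exp(1j * phi).mean(axis=1)).mean())
```

    A narrow-linewidth source keeps this figure near 1 over the synthetic aperture time, while a broad linewidth decorrelates the echo and washes out the focused image.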

  16. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support, and aerospace. 3D printing is an evolution of two-dimensional printing which allows a solid object to be obtained from a 3D model realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer allows very complex shapes to be realized in a simple way, shapes which would be quite difficult to produce with dedicated conventional facilities. Because the 3D print is obtained by superposing one layer on the others, it does not need any particular workflow; it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common deposition material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, like pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope that is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture, and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it was necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  17. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies are becoming attractive for movie theater operators, such as interactive 3D games. In this paper, we present a case study that explores possible challenges and solutions for interactive 3D games played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered into a real-time game scene. We use this effect to place stereoscopic effigies of the players into the digital game. The game showcases how stereoscopic vision can provide a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  18. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  19. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The first, the company's Rainbow 3D(R) imaging camera, is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The second is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  20. Holography, tomography and 3D microscopy as linear filtering operations

    NASA Astrophysics Data System (ADS)

    Coupland, J. M.; Lobera, J.

    2008-07-01

    In this paper, we characterize 3D optical imaging techniques as 3D linear shift-invariant filtering operations. From the Helmholtz equation that is the basis of scalar diffraction theory, we show that the scattered field, or indeed a holographic reconstruction of this field, can be considered to be the result of a linear filtering operation applied to a source distribution. We note that if the scattering is weak, the source distribution is independent of the scattered field and a holographic reconstruction (or in fact any far-field optical imaging system) behaves as a 3D linear shift-invariant filter applied to the refractive index contrast (which effectively defines the object). We go on to consider tomographic techniques that synthesize images from recordings of the scattered field using different illumination conditions. In our analysis, we compare the 3D response of monochromatic optical tomography with the 3D imagery offered by confocal microscopy and scanning white light interferometry (using quasi-monochromatic illumination) and explain the circumstances under which these approaches are equivalent. Finally, we consider the 3D response of polychromatic optical tomography and in particular the response of spectral optical coherence tomography and scanning white light interferometry.
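
    The filtering view can be made concrete with a short sketch (illustrative; the binary low-pass transfer function here is an assumed stand-in for a real system's 3D transfer function): under weak scattering, imaging is modeled by multiplying the 3D spectrum of the refractive-index contrast by a transfer function H, so a point scatterer maps to the system's 3D point-spread function.

```python
import numpy as np

def apply_3d_transfer_function(contrast, H):
    """Model weak (first-Born) scattering as a 3D linear shift-invariant
    filter: image = IFFT( FFT(contrast) * H ), H being the 3D transfer
    function of the imaging system."""
    return np.fft.ifftn(np.fft.fftn(contrast) * H).real

# Hypothetical spherical low-pass H standing in for a band-limited system.
n = 16
k = np.fft.fftfreq(n)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
H = (np.sqrt(KX**2 + KY**2 + KZ**2) < 0.25).astype(float)

vol = np.zeros((n, n, n))
vol[8, 8, 8] = 1.0                            # point scatterer (delta contrast)
psf = apply_3d_transfer_function(vol, H)      # resulting 3D point-spread function
```

    Different modalities (holography, tomography, confocal microscopy) then differ only in the support and shape of H, which is the point the paper develops.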

  1. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia. PMID:26854878

  2. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  3. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2003-05-12

    This project is in its first full year after the combining of two previously funded projects: ''3D Code Development'' and ''Dynamic Material Properties''. The motivation behind this move was to emphasize and strengthen the ties between the experimental work and the computational model development in the materials area. The coming year's activities will reflect the merging of the two efforts. The current activity is structured in two tasks. Task A, ''Simulations and Measurements'', combines all the material model development and associated numerical work with the materials-oriented experimental activities. Task B, ''ALE3D Development'', is a continuation of the non-materials related activities from the previous project.

  4. SNL3dFace

    Energy Science and Technology Software Center (ESTSC)

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  5. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  6. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
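
    The ICP alignment step named in the abstract can be sketched minimally (brute-force nearest neighbours and a Kabsch/SVD rigid fit; production systems use k-d trees, outlier rejection, and convergence tests):

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest destination
    point, then solve the best rigid transform with the Kabsch/SVD method.
    Returns the transformed source points plus the rotation and translation."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]               # nearest-neighbour matches
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t, R, t
```

    Iterating `icp_step` until the mean point-to-point distance stops decreasing gives the alignment of each face to the scaled reference face.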

  7. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  8. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.
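
    The implicit time integration TACO3D is limited to can be illustrated in one dimension (a finite-difference sketch for exposition, not TACO3D's finite elements): a backward-Euler step solves a linear system at every time step and is unconditionally stable, at the cost of that solve.

```python
import numpy as np

# Backward-Euler (implicit) stepping of 1D heat conduction u_t = alpha * u_xx
# with fixed-temperature (Dirichlet) boundaries.
n, alpha = 50, 1.0
dx, dt = 1.0 / (n - 1), 1e-4
r = alpha * dt / dx**2

A = (np.diag((1.0 + 2.0 * r) * np.ones(n))
     + np.diag(-r * np.ones(n - 1), 1)
     + np.diag(-r * np.ones(n - 1), -1))
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0                      # enforce boundary temperatures

u = np.sin(np.pi * np.linspace(0.0, 1.0, n))   # initial temperature profile
for _ in range(100):
    u = np.linalg.solve(A, u)                  # one unconditionally stable step
```

    An explicit scheme with the same r = 0.24 would also be stable here, but the implicit solve remains stable for arbitrarily large time steps, which is why transient codes of this kind accept the extra linear algebra.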

  9. 3D Soil Images Structure Quantification using Relative Entropy

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Gonzalez-Nieto, P. L.; Bird, N. R. A.

    2012-04-01

    Soil voids manifest the cumulative effect of local pedogenic processes and ultimately influence soil behavior - especially as it pertains to aeration and hydrophysical properties. Because of the relatively weak attenuation of X-rays by air, compared with liquids or solids, non-disruptive CT scanning has become a very attractive tool for generating three-dimensional imagery of soil voids. One of the main steps involved in this analysis is the thresholding required to transform the original (greyscale) images into the type of binary representation (e.g., pores in white, solids in black) needed for fractal analysis or simulation with Lattice-Boltzmann models (Baveye et al., 2010). The objective of the current work is to apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using Relative Entropy (Bird et al., 2006; Tarquis et al., 2008). These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.
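
    One simple formulation of such a relative-entropy index can be sketched as follows (an illustrative reading of the approach, not the exact index of Bird et al.; the volume side is assumed divisible by the box size and the pore space non-empty): at a given box size the pore counts are normalized to a probability distribution and compared against the uniform measure, so a homogeneous pore space scores zero and a concentrated one scores high.

```python
import numpy as np

def relative_entropy(binary_vol, box):
    """Relative entropy of the pore distribution at one scale: partition the
    cubic binary volume (1 = pore) into box**3-voxel blocks, normalize the
    per-block pore counts to probabilities p_i, and return
    sum p_i * log(p_i / u) against the uniform measure u = 1/N_blocks."""
    n = binary_vol.shape[0]
    b = n // box
    blocks = binary_vol.reshape(b, box, b, box, b, box).sum(axis=(1, 3, 5))
    p = blocks.ravel() / blocks.sum()
    u = 1.0 / p.size
    p = p[p > 0]                         # 0 * log 0 contributes nothing
    return float((p * np.log(p / u)).sum())
```

    Evaluating the index over a range of box sizes yields a scale-dependent heterogeneity profile, which is the scaling behaviour the abstract proposes to examine.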

  10. Developing Spatial Reasoning Through 3D Representations of the Universe

    NASA Astrophysics Data System (ADS)

    Summers, F.; Eisenhamer, B.; McCallister, D.

    2013-12-01

    Mental models of astronomical objects are often greatly hampered by the flat two-dimensional representation of pictures from telescopes. Lacking experience with the true structures in much of the imagery, there is no basis for anything but the default interpretation of a picture postcard. Using astronomical data and scientific visualizations, our professional development session allows teachers and their students to develop their spatial reasoning while forming more accurate and richer mental models. Examples employed in this session include star positions and constellations, morphologies of both normal and interacting galaxies, shapes of planetary nebulae, and three dimensional structures in star forming regions. Participants examine, imagine, predict, and confront the 3D interpretation of well-known 2D imagery using authentic data from NASA, the Hubble Space Telescope, and other scientific sources. The session's cross-disciplinary nature includes science, math, and artistic reasoning while addressing common cosmic misconceptions. Stars of the Orion Constellation seen in 3D explodes the popular misconception that stars in a constellation are all at the same distance. A scientific visualization of two galaxies colliding provides a 3D comparison for Hubble images of interacting galaxies.

  11. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  12. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Koide, S.; Sakai, J.-I.; Christodoulou, D. M.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W = 4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: Relativistic simulations have consistently shown that these jets are effectively heavy and so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure.

  13. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  14. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  15. 360-degree 3D profilometry

    NASA Astrophysics Data System (ADS)

    Song, Yuanhe; Zhao, Hong; Chen, Wenyi; Tan, Yushan

    1997-12-01

A new method for 360-degree 3D shape measurement that combines the light sectioning and phase shifting techniques is presented in this paper. A sinusoidal light field is applied to the projected light stripe, and the phase-shifting technique is used to calculate the phases of the light slit. The resulting wrapped phase distribution of the slit is then unwrapped using the height information obtained from the light sectioning method, so that phase measurements of better precision can be obtained. Finally, the target 3D shape data are produced from the geometric relationships between the phases and the object heights. The principles of this method are discussed in detail and experimental results are shown.
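The phase-shifting calculation the abstract relies on can be sketched in a few lines. The four-step schedule with shifts 0, pi/2, pi, 3pi/2 used below is a common choice, not necessarily the one used in the paper:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four intensity frames with shifts 0, pi/2, pi, 3pi/2.
    For I_n = A + B*cos(phi + delta_n): I4 - I2 = 2B*sin(phi), I1 - I3 = 2B*cos(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic stripe: recover a known phase ramp from four shifted frames.
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)  # true phase, within principal range
A, B = 2.0, 1.0
frames = [A + B * np.cos(phi + d) for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
phi_hat = four_step_phase(*frames)
print(np.allclose(phi_hat, phi))  # True
```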

  16. Optoplasmonics: hybridization in 3D

    NASA Astrophysics Data System (ADS)

    Rosa, L.; Gervinskas, G.; Žukauskas, A.; Malinauskas, M.; Brasselet, E.; Juodkazis, S.

    2013-12-01

Femtosecond laser fabrication has been used to make hybrid refractive and diffractive micro-optical elements in the photo-polymer SZ2080. For applications in microfluidics, axicon lenses were fabricated (both single and arrays) for the generation of light intensity patterns extending through the entire depth of a typically tens-of-micrometers-deep channel. Further hybridisation of an axicon with a plasmonic slot is fabricated and demonstrated numerically. Spiralling chiral grooves were inscribed into a 100-nm-thick gold coating sputtered over polymerized micro-axicon lenses, using a focused ion beam. This demonstrates the possibility of hybridisation between optical and plasmonic 3D micro-optical elements. Numerical modelling of the optical performance by the 3D-FDTD method is presented.

  17. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling the appearance and mobility of a real human hand as closely as possible while requiring no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand (without the actuators) was $167, significantly lower than that of other robotic hands, which require more complex assembly processes.

  18. 3D-graphite structure

    SciTech Connect

Belenkov, E. A.; Ali-Pasha, V. A.

    2011-01-15

The structures of clusters of several new carbon 3D-graphite phases have been calculated using molecular-mechanics methods. It is established that the 3D-graphite polytypes α_{1,1}, α_{1,3}, α_{1,5}, α_{2,1}, α_{2,3}, α_{3,1}, β_{1,2}, β_{1,4}, β_{1,6}, β_{2,1}, and β_{3,2} consist of sp²-hybridized atoms, have hexagonal unit cells, and differ in the structure of their layers and the order of their alternation. A possible way to experimentally synthesize the new carbon phases is proposed: the polymerization and carbonization of hydrocarbon molecules.

  19. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material, one that could potentially initiate another new material age. However, even as graphene is exploited to its full extent, conventional processing methods fail to connect it to today's personalization trend; new technology should be ushered in. Three-dimensional (3D) printing fills the missing link between graphene materials and the digital mainstream, and their alliance could push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress low during the printing process. PMID:26153673
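The quoted coefficient translates directly into printed-part strain via dL/L = alpha * dT. The 30 °C temperature rise below is a hypothetical figure, since the abstract does not state the value of Tg:

```python
def thermal_strain(alpha_ppm_per_C: float, delta_T_C: float) -> float:
    """Linear strain dL/L for a thermal coefficient alpha (in ppm/degC)
    over a temperature rise delta_T_C (in degC)."""
    return alpha_ppm_per_C * 1e-6 * delta_T_C

# Hypothetical 30 degC rise from room temperature toward Tg, at the quoted 75 ppm/degC:
strain = thermal_strain(75.0, 30.0)
print(f"{strain * 100:.3f}% elongation")  # 0.225% elongation
```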

  20. Improvement of the signal-to-noise ratio in static-mode down-looking synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Lu, Zhiyong; Sun, Jianfeng; Zhang, Ning; Zhou, Yu; Cai, Guangyu; Liu, Liren

    2015-09-01

The static-mode down-looking synthetic aperture imaging ladar (SAIL) keeps the target and carrying platform still during data collection. Improvement of the signal-to-noise ratio in static-mode down-looking SAIL is investigated: the signal-to-noise ratio is improved by increasing the scanning time and the sampling rate. In the experiment, targets are reconstructed with different scanning times and sampling rates; as the scanning time and sampling rate increase, the reconstructed images become clearer. These techniques have great potential for applications across synthetic aperture imaging ladar.
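The SNR gain from longer scanning and faster sampling follows the usual sqrt(N) law for averaging N independent noisy looks. A small numerical illustration (a generic averaging model, not the paper's SAIL processing chain):

```python
import numpy as np

def snr_of_average(n_looks: int, sigma: float = 1.0, trials: int = 4000, seed: int = 0):
    """Empirical SNR (mean/std) of the average of n_looks noisy samples
    of a unit-amplitude signal corrupted by Gaussian noise of std sigma."""
    rng = np.random.default_rng(seed)
    looks = 1.0 + sigma * rng.standard_normal((trials, n_looks))
    avg = looks.mean(axis=1)
    return avg.mean() / avg.std()

# 100x more looks gives roughly 10x (sqrt(100)) the single-look SNR.
print(snr_of_average(1), snr_of_average(100))
```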

  1. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matrix emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still needed to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients. PMID:11494630

  2. [Real time 3D echocardiography]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matrix emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still needed to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  3. GPU-Accelerated Denoising in 3D (GD3D)

    Energy Science and Technology Software Center (ESTSC)

    2013-10-01

The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
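The parameter-sweep step can be sketched on the CPU with a toy 1D signal and a plain moving-average filter standing in for the GPU denoisers; the set of widths swept below is illustrative:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

def mean_filter(x, k):
    """Moving-average filter of odd width k (edges handled by replicate padding)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 500))          # noiseless reference
noisy = clean + 0.3 * rng.standard_normal(clean.size)   # observed data

# Sweep the filter width and keep the parameter with the lowest MSE vs. the reference.
widths = [1, 3, 5, 9, 15, 25, 41]
best_k = min(widths, key=lambda k: mse(mean_filter(noisy, k), clean))
print(best_k, mse(mean_filter(noisy, best_k), clean) < mse(noisy, clean))
```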

  4. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offset studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As the number of visual objects grows in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated.
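The voxel transparency described here is classic front-to-back alpha compositing with an opacity filter; a minimal single-ray sketch (generic volume rendering, not the authors' visualization software):

```python
def composite_ray(colors, alphas, threshold=0.0):
    """Front-to-back alpha compositing along one ray of voxels.
    Voxels with opacity below `threshold` are skipped (the 'opacity filter'),
    letting the viewer peer through transparent regions of the data volume."""
    C, A = 0.0, 0.0  # accumulated color and opacity
    for c, a in zip(colors, alphas):
        if a < threshold:
            continue  # rejected by the opacity filter
        C += (1.0 - A) * a * c
        A += (1.0 - A) * a
        if A >= 0.999:  # early ray termination: ray is effectively opaque
            break
    return C, A

# A fully opaque voxel in front hides everything behind it:
print(composite_ray([0.8, 0.2], [1.0, 1.0]))  # (0.8, 1.0)
```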

  5. LADAR performance simulations with a high spectral resolution atmospheric transmittance and radiance model: LEEDR

    NASA Astrophysics Data System (ADS)

    Roth, Benjamin D.; Fiorino, Steven T.

    2012-06-01

In this study of atmospheric effects on Geiger-mode laser ranging and detection (LADAR), the parameter space is explored primarily using the Air Force Institute of Technology Center for Directed Energy's (AFIT/CDE) Laser Environmental Effects Definition and Reference (LEEDR) code. The expected performance of LADAR systems is assessed at the operationally representative wavelengths of 1.064, 1.56 and 2.039 μm at a number of locations worldwide. Signal attenuation and background noise are characterized using LEEDR, and the results are compared to standard atmosphere and Fast Atmospheric Signature Code (FASCODE) assessments. The scenarios evaluated are based on air-to-ground engagements, including both down-looking oblique and vertical geometries in which clear-air aerosols are expected to occur. Engagement geometry variations are considered to determine optimum employment techniques to exploit or defeat the environmental conditions. Results, presented primarily in the form of worldwide plots of notional signal-to-noise ratios, show a significant climate dependence as well as large variances between climatological and standard atmosphere assessments. An overall average absolute mean difference ratio of 1.03 is found when climatological signal-to-noise ratios at 40 locations are compared to their equivalent standard atmosphere assessments. Atmospheric transmission is shown not to always correlate with signal-to-noise ratio between different atmosphere profiles. Allowing aerosols to swell with relative humidity proves to be significant, especially for up-looking geometries, reducing the signal-to-noise ratio by several orders of magnitude. Turbulence blurring effects that impact tracking and imaging show that the LADAR system has little capability at a 50 km range, yet turbulence has little impact at a 3 km range.
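The kind of notional signal level that an attenuation code like LEEDR feeds into can be sketched with the standard monostatic ladar range equation for a diffuse (Lambertian) target with two-way Beer-Lambert attenuation. Every system parameter below is hypothetical, chosen only for illustration:

```python
import math

H = 6.626e-34  # Planck constant (J*s)
C = 2.998e8    # speed of light (m/s)

def received_photons(E_pulse, wavelength, rho, D_rx, R, sigma_ext, eta=1.0):
    """Photons per pulse from a diffuse target at normal incidence.
    E_pulse: transmitted pulse energy (J); rho: target reflectance;
    D_rx: receiver aperture diameter (m); R: range (m);
    sigma_ext: atmospheric extinction coefficient (1/m); eta: system efficiency."""
    T = math.exp(-sigma_ext * R)                          # one-way transmittance
    E_rx = E_pulse * rho * (D_rx**2 / (4.0 * R**2)) * T**2 * eta
    return E_rx / (H * C / wavelength)                    # divide by photon energy

# Hypothetical system: 100 uJ pulse at 1.064 um, 10% reflectance target,
# 10 cm aperture, 3 km range, light extinction.
n = received_photons(100e-6, 1.064e-6, 0.10, 0.10, 3000.0, sigma_ext=1e-4)
print(f"{n:.0f} photons per pulse")
```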

  6. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    NASA Astrophysics Data System (ADS)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.
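Aligning a photogrammetric point cloud to surveyed ground control points is typically a rigid least-squares fit; below is a Kabsch/SVD sketch (a standard method, not necessarily the workflow used in this study):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t such that dst ~ src @ R.T + t
    (Kabsch algorithm: SVD of the cross-covariance of centered point sets)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    Hm = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(Hm)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against an improper reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known rotation + translation from noiseless 'control points'.
rng = np.random.default_rng(2)
src = rng.standard_normal((6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -4.0, 2.0])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```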

  7. Image quality analysis and improvement of Ladar reflective tomography for space object recognition

    NASA Astrophysics Data System (ADS)

    Wang, Jin-cheng; Zhou, Shi-wei; Shi, Liang; Hu, Yi-Hua; Wang, Yong

    2016-01-01

Several problems in the application of ladar reflective tomography to space object recognition are studied in this work. An analytic target model is adopted to investigate the image reconstruction properties under a limited relative-angle range, which is useful for verifying the target shape from an incomplete image, analyzing the shadowing effect of the target, and designing satellite payloads that resist recognition via the reflective tomography approach. We propose an iterative maximum likelihood method based on Bayesian theory, which can effectively compress the pulse width and greatly improve the image resolution of an incoherent LRT system without loss of signal-to-noise ratio.
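The paper's specific iterative maximum likelihood method is not spelled out in the abstract; Richardson-Lucy deconvolution is the textbook ML estimator under Poisson noise and serves here only as an illustrative stand-in for the pulse-compression step:

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=50):
    """Iterative ML (Richardson-Lucy) deconvolution of a 1D signal y by psf."""
    psf = psf / psf.sum()
    psf_rev = psf[::-1]
    x = np.full_like(y, y.mean())  # flat nonnegative initial estimate
    for _ in range(n_iter):
        blur = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blur, 1e-12)  # avoid division by zero
        x = x * np.convolve(ratio, psf_rev, mode="same")
    return x

# A broad pulse smears two point reflectors; the iterations re-sharpen them.
truth = np.zeros(64); truth[20] = 5.0; truth[40] = 3.0
psf = np.exp(-0.5 * (np.arange(-7, 8) / 2.5) ** 2)
y = np.convolve(truth, psf / psf.sum(), mode="same")
x = richardson_lucy(y, psf, n_iter=200)
print(int(np.argmax(x)))  # strongest reflector recovered near index 20
```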

  8. Measurement of polarization parameters of the targets in synthetic aperture imaging LADAR

    NASA Astrophysics Data System (ADS)

    Xu, Qian; Sun, Jianfeng; Lu, Wei; Hou, Peipei; Ma, Xiaoping; Lu, Zhiyong; Sun, Zhiwei; Liu, Liren

    2015-09-01

In synthetic aperture imaging ladar (SAIL), changes in the polarization state of the backscattered light affect the imaging. The polarization state of the reflected field is determined by the interaction of the light with the materials on the target plane. The Stokes parameters, which provide information on both light intensity and polarization state, are ideal quantities for characterizing these features. In this paper, a system for measuring the polarization characteristics of SAIL target materials is designed. The measurement results are expected to be useful in target identification and recognition.
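The Stokes parameters mentioned here follow from a small set of intensity measurements through linear and circular analyzers; a minimal sketch (standard polarimetry, not the paper's specific measurement system):

```python
import math

def stokes(i0, i90, i45, i135, ircp, ilcp):
    """Stokes vector (S0, S1, S2, S3) from six analyzer intensity measurements:
    linear at 0/90/45/135 degrees, plus right- and left-circular."""
    return (i0 + i90, i0 - i90, i45 - i135, ircp - ilcp)

def degree_of_polarization(S):
    """DoP = sqrt(S1^2 + S2^2 + S3^2) / S0; 1.0 means fully polarized light."""
    S0, S1, S2, S3 = S
    return math.sqrt(S1**2 + S2**2 + S3**2) / S0

# Horizontally polarized light: half the intensity passes each diagonal or
# circular analyzer, giving S = (1, 1, 0, 0) and full polarization.
S = stokes(1.0, 0.0, 0.5, 0.5, 0.5, 0.5)
print(S, degree_of_polarization(S))  # (1.0, 1.0, 0.0, 0.0) 1.0
```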

  9. Ideal Positions: 3D Sonography, Medical Visuality, Popular Culture.

    PubMed

    Seiber, Tim

    2016-03-01

    As digital technologies are integrated into medical environments, they continue to transform the experience of contemporary health care. Importantly, medicine is increasingly visual. In the history of sonography, visibility has played an important role in accessing fetal bodies for diagnostic and entertainment purposes. With the advent of three-dimensional (3D) rendering, sonography presents the fetus visually as already a child. The aesthetics of this process and the resulting imagery, made possible in digital networks, discloses important changes in the relationship between technology and biology, reproductive health and political debates, and biotechnology and culture. PMID:26164291

  10. 3-D HYDRODYNAMIC MODELING IN A GEOSPATIAL FRAMEWORK

    SciTech Connect

    Bollinger, J; Alfred Garrett, A; Larry Koffman, L; David Hayes, D

    2006-08-24

    3-D hydrodynamic models are used by the Savannah River National Laboratory (SRNL) to simulate the transport of thermal and radionuclide discharges in coastal estuary systems. Development of such models requires accurate bathymetry, coastline, and boundary condition data in conjunction with the ability to rapidly discretize model domains and interpolate the required geospatial data onto the domain. To facilitate rapid and accurate hydrodynamic model development, SRNL has developed a pre- and post-processor application in a geospatial framework to automate the creation of models using existing data. This automated capability allows development of very detailed models to maximize exploitation of available surface water radionuclide sample data and thermal imagery.
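Interpolating scattered bathymetry onto model grid nodes, as the pre-processor described here must do, can be sketched with inverse-distance weighting; IDW is one common geospatial choice, not necessarily the scheme used by SRNL:

```python
import numpy as np

def idw(sample_xy, sample_z, query_xy, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered depths sample_z
    at locations sample_xy onto the points query_xy."""
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power   # eps keeps exact matches finite
    w /= w.sum(axis=1, keepdims=True)
    return w @ sample_z

# Scattered bathymetry soundings interpolated to two grid nodes:
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
depth = np.array([10.0, 12.0, 14.0, 16.0])
nodes = np.array([[0.5, 0.5], [0.0, 0.0]])
# The centre node averages all four soundings; a node on a sounding matches it.
print(idw(pts, depth, nodes))  # [13. 10.]
```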